Program overview

Day 1 - September 16th

Time (CEST) | Event | Location
11:00 | Welcome and opening remarks | Livestorm Morning 1
11:15 | Session 1: Haptics
12:25 | Session 2: Hands
14:05 | Session 3: Human factors | Livestorm Afternoon 1
14:50 | Keynote: Samantha Wood
15:50 | Posters | Mozilla Hubs
16:50 | Social

Day 2 - September 17th

Time (CEST) | Event | Location
11:00 | Opening remarks | Livestorm Morning 2
11:10 | Session 4: Visual perception
11:55 | Session 5: Virtual humans
14:00 | Session 6: Spatial judgments | Livestorm Afternoon 2
16:20 | Keynote: Michael Barnett-Cowan
17:20 | Awards and closing remarks


Full program


Day 1 - September 16th

Time (CEST) | Duration (min) | Event / Title
11:00 | 15 | Welcome and opening remarks
11:15 |  | Session 1: Haptics (Chair: Chris Wallraven)
11:15 | 20 | Exploring the Effects of Actuator Configuration and Visual Stimuli on Cutaneous Rabbit Illusions in Virtual Reality. Mie Egeberg, Stine Lind, Niels C. Nilsson and Stefania Serafin
11:35 | 20 | DOLPHIN: A Framework for the Design and Perceptual Evaluation of Ultrasound Mid-Air Haptic Stimuli. Lendy Mulot, Guillaume Gicquel, Quentin Zanini, William Frier, Maud Marchal, Claudio Pacchierotti and Thomas Howard
11:55 | 20 | An Evaluation of Screen Parallax, Haptic Feedback and Sensor-Motor Mismatch on Near-Field Perception-Action in VR. David Brickler and Sabarish Babu (TAP)
12:15 | 10 | Break
12:25 |  | Session 2: Hands (Chair: Ferran Argelaguet)
12:25 | 20 | Evaluating Grasping Interactions in a VR Game. Alex Adkins, Lorraine Lin, Aline Normoyle, Ryan Canales, Yuting Ye and Sophie Joerg (TAP)
12:45 | 20 | Evaluating Study Design and Strategies for Mitigating the Impact of Hand Tracking Loss. Ylva Ferstl, Rachel McDonnell and Michael Neff
13:05 | 60 | Lunch break
14:05 |  | Session 3: Human Factors (Chair: Rick Skarbez)
14:05 | 20 | Impact of communication delay and temporal sensitivity on perceived workload and teleoperation performance. Eishi Kim, Vsevolod Peysakhovich and Raphaëlle N. Roy
14:25 | 15 | Restorative Effects of Visual and Pictorial Spaces After Stress Induction in Virtual Reality. Siavash Eftekharifar, Anne Thaler and Nikolaus F. Troje
14:40 | 10 | Break
14:50 | 60 | Keynote: Samantha Wood - Building vision: Lessons from newborn animals
15:50 | 15 | Poster fast forward
16:05 | 45 | Poster session
16:50 | 45 | Social event
17:35 |  | End

Day 2 - September 17th

Time (CEST) | Duration (min) | Event / Title
11:00 | 10 | Opening remarks
11:10 |  | Session 4: Visual perception (Chair: Ann McNamara)
11:10 | 20 | Stealth Updates of Visual Information by Leveraging Change Blindness and Computational Visual Morphing. Shunichi Kasahara and Kazuma Takada (TAP)
11:30 | 15 | Relationship between Dwell-Time and Model Human Processor for Dwell-based Image Selection. Toshiya Isomoto, Shota Yamanaka and Buntarou Shizuki
11:45 | 10 | Break
11:55 |  | Session 5: Virtual humans (Chair: Ludovic Hoyet)
11:55 | 15 | Facial feature manipulation for trait portrayal in realistic and cartoon-rendered characters. Ylva Ferstl, Michael McKay and Rachel McDonnell (TAP)
12:10 | 15 | Ascending from the valley: Can state-of-the-art photorealism avoid the uncanny? Darragh Higgins, Donal Egan, Rebecca Fribourg, Benjamin Cowan and Rachel McDonnell
12:25 | 20 | Do Prosody and Embodiment Influence the Perceived Naturalness of Conversational Agents' Speech? Jonathan Ehret, Andrea Bönsch, Lukas Aspöck, Christine T. Röhr, Stefan Baumann, Martine Grice, Janina Fels and Torsten W. Kuhlen (TAP)
12:45 | 15 | Perception of Human Motion Similarity Based on Laban Movement Analysis. Funda Durupinar
13:00 | 60 | Lunch break
14:00 |  | Session 6: Spatial judgments (Chair: Victoria Interrante)
14:00 | 20 | Structure Perception in 3D Point Clouds. Kenny Gruchalla, Sunand Raghupathi and Nicholas Brunhart-Lupo
14:20 | 20 | Spatial Judgments in Impossible Spaces Preserve Important Relative Information. Andrew Robb and Catherine Barwulor
14:40 | 20 | Virtual Room Re-Creation: A New Measure of Room Size Perception. Holly Gagnon, Sarah Creem-Regehr and Jeanine Stefanucci
15:00 | 10 | Break
15:10 | 20 | The Perception of Affordances in Mobile Augmented Reality. Yu Zhao, Jeanine Stefanucci, Sarah H. Creem-Regehr and Bobby Bodenheimer
15:30 | 20 | Using Audio Reverberation to Compensate Distance Compression in Virtual Reality. Rohith Venkatakrishnan, Roshan Venkatakrishnan, Sabarish V. Babu and Wen-Chieh Lin
15:50 | 30 | Break
16:20 | 60 | Keynote: Michael Barnett-Cowan - The need to accelerate content generation for virtual and augmented technologies
17:20 | 20 | Awards and closing remarks
17:40 |  | End


Keynote speakers


Samantha Wood Dr. Samantha Wood received her B.A. in Social Studies from Harvard University and her M.A. and Ph.D. in Psychology from the University of Southern California, with a focus on Brain and Cognitive Science. She is interested in understanding the origins of perception and cognition, using tools from developmental psychology, vision science, virtual reality (VR), and artificial intelligence. Her research uses VR-based controlled rearing to raise newborn animals in strictly controlled virtual worlds. By controlling all of the visual experiences (training data) available to newborn animals, VR-based controlled rearing can reveal the role of experience in the development of biological intelligence. Her ultimate goal is to use findings from newborn animals as benchmarks for developing artificial agents that have the same cognitive structures and learning mechanisms as newborn animals. She is currently a Research Scientist at the Building a Mind Lab and Adjunct Professor in the Informatics Department at Indiana University Bloomington.

Building vision: Lessons from newborn animals

Artificial intelligence (AI) has made remarkable progress in the last decade. Yet machine learning algorithms still struggle to perform many tasks that are basic for young children. To overcome this problem, AI researchers are increasingly drawing inspiration from the most capable general-purpose learning system in existence: the brain. How do biological systems learn so rapidly and efficiently about the world? My work focuses on reverse engineering the learning mechanisms in newborn brains. I perform parallel controlled-rearing experiments on newborn animals and artificial agents. In this way, I can ensure that the animals and models receive the same set of training data prior to testing, allowing for direct comparison of their learning abilities. In this talk, I will review what perceptual abilities emerge with minimal visual experience, how early perceptual learning is constrained, and what these findings mean for building more biologically inspired AI.


Michael Barnett-Cowan Dr. Michael Barnett-Cowan is an Associate Professor of Neuroscience in the Department of Kinesiology at the University of Waterloo, where he is the Director of the Multisensory Brain & Cognition laboratory. Michael received his PhD in Experimental Psychology in 2009 at York University with Laurence Harris at the Centre for Vision Research. He then took up a postdoctoral fellowship at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, with Heinrich Bülthoff, where he led the Cybernetics Approach to Perception and Action (CAPA) research group and was project leader for the Simulation of Upset Recovery in Aviation (SUPRA) FP7 EU research grant. In 2012 he returned to Canada to work with Jody Culham at Western University's Brain and Mind Institute, where he held appointments as an adjunct research professor and a Banting fellow. Michael's research program uses psychophysics, computational modelling, genomics, and neural imaging and stimulation techniques to assess how the normal, damaged, diseased, and older human brain integrates multisensory information that is ultimately used to guide perception, cognition and action.

The need to accelerate content generation for virtual and augmented technologies

Virtual reality (VR) and augmented reality (AR) are interactive computer interfaces that immerse the user in a synthetic three-dimensional environment, giving the illusion of being present in a virtual setting (VR) or of digital content coexisting with the physical world (AR). These technologies have rapidly become more accessible to the general public and to researchers thanks to lower-cost hardware and improved computer graphics. However, their true potential is held back by the delays and costs associated with content generation. This has become even more noticeable during the COVID-19 pandemic, as technology users have grown increasingly dependent on digital media for interaction. In this talk I will highlight a number of approaches we use in the Multisensory Brain and Cognition lab to better understand the neural systems and processes that underlie multisensory integration in real and virtual environments. I will illustrate the utility of using commercially available virtual content, as well as of constructing virtual content with game engines, for experimental purposes. I will also suggest that recent advances in machine learning have the potential to dramatically reduce the time required to create highly realistic virtual content, and I will discuss the need to form multidisciplinary teams and industrial partnerships with the games industry in order to accelerate the development of VR and AR technologies that have the potential to form the third revolution of computing.


Posters


POSTER 1 - Vasiliki Myrodia

Measured and predicted visual fixations in a Monte Carlo rendering noise detection task. Vasiliki Myrodia, Jérôme Buisine, Samuel Delepoulle, Christophe Renaud and Laurent Madelain


POSTER 2 - Hugo Brument

Preliminary Results of the Impact of Translational and Rotational Motion on the Perception of Rotation Gains in VR. Hugo Brument, Anne-Hélène Olivier, Maud Marchal and Ferran Argelaguet


POSTER 3 - Shu Wei

Do People Matter? Presence and Prosocial Behaviour Towards Computer Agent Vs Human Avatar in Virtual Reality. Shu Wei and Anna Bramwell-Dicks


POSTER 4 - Benjamin Niay

Perception of speed adjusted walking motions generated from Inverse Kinematics. Benjamin Niay, Anne-Hélène Olivier, Katja Zibrek, Julien Pettré and Ludovic Hoyet


POSTER 5 - Min Susan Li

The Effect of Skin Hydration on Roughness Perception. Min Susan Li and Massimiliano Di Luca


POSTER 6 - Davide Deflorio

Computational models of tactile neurons of the finger pad: a review. Davide Deflorio, Massimiliano Di Luca and Alan M. Wing