Keynote Speakers

We are happy to announce that the SAP2021 keynote speakers are Samantha Wood and Michael Barnett-Cowan!

Samantha Wood

Building vision: Lessons from newborn animals

Artificial intelligence (AI) has made remarkable progress in the last decade. Yet machine learning algorithms still struggle to perform many tasks that are basic for young children. To overcome this problem, AI researchers are increasingly drawing inspiration from the best general-purpose intelligent machine in existence: the brain. How do biological systems learn so rapidly and efficiently about the world? My work focuses on reverse engineering the learning mechanisms in newborn brains. I perform parallel controlled-rearing experiments on newborn animals and artificial agents. In this way, I can ensure that the animals and models receive the same set of training data prior to testing, allowing for direct comparison of their learning abilities. In this talk, I will review which perceptual abilities emerge with minimal visual experience, how early perceptual learning is constrained, and what these findings mean for building more biologically inspired AI.

Bio: Dr. Samantha Wood received her B.A. from Harvard University in Social Studies and her M.A. and Ph.D. from the University of Southern California in Psychology, with a focus on Brain and Cognitive Science. She is interested in understanding the origins of perception and cognition, using tools from developmental psychology, vision science, virtual reality (VR), and artificial intelligence. Her research uses VR-based controlled rearing to raise newborn animals in strictly controlled virtual worlds. By controlling all of the visual experiences (training data) available to newborn animals, VR-based controlled rearing can reveal the role of experience in the development of biological intelligence. Her ultimate goal is to use these “benchmarks” from newborn animals to develop artificial agents with the same cognitive structures and learning mechanisms as their biological counterparts. She is currently a Research Scientist at the Building a Mind Lab and Adjunct Professor in the Informatics Department at Indiana University Bloomington.


Michael Barnett-Cowan

The need to accelerate content generation for virtual and augmented technologies

Virtual reality (VR) and augmented reality (AR) are interactive computer interfaces that immerse the user in a synthetic three-dimensional environment, giving the user the illusion of being in that virtual setting (in the case of VR) or of physical and digital content coexisting (in the case of AR). These technologies have rapidly become more accessible to the general public and to researchers thanks to lower-cost hardware and improved computer graphics. However, the true potential of these technologies is held back by the delays and costs associated with content generation. This has become ever more noticeable in the COVID-19 pandemic era, as technology users have grown more dependent on interacting through digital mediums. In this talk I will highlight a number of approaches we use in the Multisensory Brain and Cognition Lab to better understand the neural systems and processes that underlie multisensory integration in real and virtual environments. I will discuss the utility of using commercially available virtual content as well as of constructing virtual content with gaming engines for experimental purposes. I will also suggest that recent advances in machine learning have the potential to dramatically reduce the time required to create highly realistic virtual content. Finally, I will discuss the need to form multidisciplinary teams and industrial partnerships with the games industry in order to accelerate the development of VR and AR technologies, which have the potential to form the third revolution of computing.

Bio: Dr. Michael Barnett-Cowan is an Associate Professor of Neuroscience in the Department of Kinesiology at the University of Waterloo, where he is the Director of the Multisensory Brain & Cognition laboratory. Michael received his PhD in Experimental Psychology in 2009 at York University with Laurence Harris at the Centre for Vision Research. He then took up a postdoctoral fellowship at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, with Heinrich Bülthoff, where he led the Cybernetics Approach to Perception and Action (CAPA) research group and was project leader for the Simulation of Upset Recovery in Aviation (SUPRA) FP7 EU research grant. In 2012 he returned to Canada to work with Jody Culham at Western University's Brain and Mind Institute, where he held appointments as an adjunct research professor and a Banting fellow. Michael's research program uses psychophysical, computational modelling, genomic, and neural imaging and stimulation techniques to assess how the normal, damaged, diseased, and older human brain integrates multisensory information to guide perception, cognition, and action.