Keynote Speakers

Douglas Lanman

Douglas Lanman is the Senior Director of Display Systems Research at Meta’s Reality Labs Research, where he leads investigations into advanced display and imaging technologies. He is also an Affiliate Instructor at the University of Washington CSE Department, where he recently completed teaching a course on building VR headsets from scratch. His prior research has focused on head-mounted displays, glasses-free 3D displays, light field cameras, and active illumination for 3D reconstruction and interaction. He received a B.S. in Applied Physics with Honors from Caltech in 2002 and M.S. and Ph.D. degrees in Electrical Engineering from Brown University in 2006 and 2010, respectively. He was a Senior Research Scientist at Nvidia Research from 2012 to 2014, a Postdoctoral Associate at the MIT Media Lab from 2010 to 2012, and an Assistant Research Staff Member at MIT Lincoln Laboratory from 2002 to 2005.

Perceptually Driven Development of AR/VR Displays

How can a display appear indistinguishable from reality? We describe how to pass this “visual Turing test” using AR/VR headsets, emphasizing the perceptually driven design of optics, display components, rendering algorithms, and sensing elements. Meta’s Display Systems Research team has pursued this line of investigation for nearly a decade, including the development of “retinal resolution” viewing optics, accommodation-supporting VR headsets, eye-tracked distortion correction, ultra-compact viewing optics, wide fields of view, high dynamic range, occlusion-capable AR, holographic displays, and perspective-correct mixed reality passthrough.

In this presentation we’ll detail how perception science has continually underpinned these developments. In some cases, past vision science literature has provided a starting foundation, offering clear requirements for proof-of-concept prototypes; in others, the literature is lacking, and this system-oriented team has had to undertake new vision science studies of its own. For SAP attendees, we hope to inspire more applied perception researchers to focus on AR/VR display systems, accelerating the field toward passing the visual Turing test.

Eero Simoncelli

Eero Simoncelli is a Silver Professor of Neural Science, Mathematics, Psychology, and Data Science at New York University, and the Inaugural Scientific Director of the Center for Computational Neuroscience at the Flatiron Institute of the Simons Foundation. His research blends computational neuroscience, statistical signal processing, and perception, and is focused on the representation and analysis of naturally occurring sensory signals (both visual and, to a lesser extent, auditory).

Professor Simoncelli received a B.A. in physics, summa cum laude, from Harvard University, studied mathematics at Cambridge University for a year and a half, and earned his doctorate in electrical engineering and computer science from MIT in 1993. He then joined the faculty of the Computer and Information Science Department at the University of Pennsylvania. In 1996, he moved to NYU as part of the Sloan Center for Theoretical Visual Neuroscience. He received an NSF CAREER award in 1996 and an Alfred P. Sloan Research Fellowship in 1998, and became an Investigator of the Howard Hughes Medical Institute in 2000. He was elected a Fellow of the IEEE in 2008 and an associate member of the Canadian Institute for Advanced Research in 2010. He has received two Outstanding Faculty awards from the NYU GSAS Graduate Student Council (2003, 2011); IEEE Best Journal Article awards in 2009 and 2010 and a Sustained Impact Paper award in 2016; an Emmy Award from the Academy of Television Arts and Sciences in 2015 for his work on perceptual quality metrics; and the Golden Brain Award from the Minerva Foundation in 2017, for fundamental contributions to visual neuroscience. In 2019, he was elected a member of the American Academy of Arts and Sciences.

Metric properties of visual representations

Deep neural networks have demonstrated the remarkable potential of distributed cascaded computation with simple canonical elements. These systems were inspired by the study of biological brains, and provide a substrate for understanding them. But biological systems have many additional properties, and although some of these are undoubtedly idiosyncrasies of their implementation, others are likely to provide fundamental computational capabilities. Specifically, biological neural circuits adapt their response levels over multiple time scales. They are also quite noisy. Both attributes affect the metric properties of stimulus representation, that is, the effective distances between encoded stimuli. I’ll describe some of our recent efforts to assess these properties in the context of biological visual representations, and their effect on perceptual capabilities.

Sponsors

ACM SIGGRAPH