The final program for SAP 2012 is now available.
Keynote presentations at SAP 2012 will be given by Prof. Roberta Klatzky and Prof. Paul Debevec.
Friday, 3rd August 2012
08:00-09:00 Registration, Poster Set-Up, Continental Breakfast
09:00-09:15 Opening Remarks
09:15-10:25 Session 1: Virtual Environments 1: Distance
10:25-11:00 Coffee Break
11:00-12:15 Session 2: Faces, Behaviour & Animation
14:00-15:00 Keynote 1: Roberta Klatzky
15:00-15:30 Posters Fast Forward
15:30-16:45 Coffee Break + Poster Session
16:45-18:00 Session 3: Tone Mapping & Rendering
Saturday, 4th August 2012
09:00-10:00 Keynote 2: Paul Debevec
10:10-11:00 Session 4: Gaze in Art
11:00-11:30 Coffee Break + Graphics Lab Tour
11:30-12:40 Session 5: Virtual Environments 2: Movement
14:00-15:15 Session 6: Eyes & Gaze
15:15-15:45 Coffee Break
15:45-17:10 Session 7: Displays & Stereo
17:10-18:00 Business Meeting
Keynote 1: Roberta Klatzky
Title: The Basis for Action is Perception: Natural, Augmented, or Virtual
Abstract: Voluntary actions are directed by perceptual processing, which comprises a complex chain of computations originating in sensory channels. Attendees at this conference are well aware that control of the sensory input leads to control of the percept and, ultimately, the motor response. There are many choices for how sensory signals should be manipulated, however, and basic research in perception and action can provide guiding principles for implementation. In this talk I will provide examples from my own research, representing extremes of force control. In one case, a bimanual force-feedback system enables tele-operation of a robot digging in sand. In a second case, visual augmented reality enhances the penetration of human tissue. Both scenarios are grounded in an understanding of human perception in relation to action.
Roberta Klatzky is Professor of Psychology at Carnegie Mellon University, where she is also on the faculty of the Center for the Neural Basis of Cognition and the Human-Computer Interaction Institute. She received a B.S. in mathematics from the University of Michigan and a Ph.D. in cognitive psychology from Stanford University. She is the author of over 200 articles and chapters, and she has authored or edited 6 books. Her research investigates perception, spatial thinking and action from the perspective of multiple modalities, sensory and symbolic, in real and virtual environments. Klatzky's basic research has been applied to tele-manipulation, image-guided surgery, navigation aids for the blind, and neural rehabilitation. Klatzky is a fellow of the American Association for the Advancement of Science, the American Psychological Association, and the Association for Psychological Science, and a member of the Society of Experimental Psychologists (honorary). For her work on perception and action, she received an Alexander von Humboldt Research Award and the Kurt Koffka Medaille from Justus-Liebig-University of Giessen, Germany. Her professional service includes governance roles in several societies and membership on the National Research Council's Committees on International Psychology, Human Factors, and Techniques for Enhancing Human Performance. She has served on research review panels for the National Institutes of Health, the National Science Foundation, and the European Commission. She has been a member of many editorial boards and is currently an associate editor of ACM Transactions on Applied Perception and IEEE Transactions on Haptics.
Keynote 2: Paul Debevec
Title: Crossing the Uncanny Valley: Achieving Photoreal Digital Actors
Abstract: Somewhere between "Final Fantasy" in 2001 and "The Curious Case of Benjamin Button" in 2008, digital actors crossed the "Uncanny Valley" from looking strangely synthetic to believably real. This talk describes some of the technological advances that have enabled this achievement. For an in-depth example, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create "Digital Emily", a collaboration between the USC ICT Graphics Laboratory and Image Metrics. Actress Emily O'Brien was scanned in Light Stage 5 in 33 facial poses at the resolution of skin pores and fine wrinkles. These scans were assembled into a rigged face model driven by Image Metrics' video-based animation software, and the resulting photoreal facial animation premiered at SIGGRAPH 2008. The talk also presents techniques which may allow digital characters to leap from the movie screen and into the space around us, including a 3D teleconferencing system that uses live facial scanning and an autostereoscopic display to transmit a person's face in 3D and make eye contact with remote collaborators.
Paul Debevec is a Research Professor at the University of Southern California and the Associate Director of Graphics Research at USC's Institute for Creative Technologies. His work has focused on image-based modeling and rendering techniques beginning with his 1996 Ph.D. thesis at UC Berkeley, with specializations in architecture, high dynamic range lighting, and human facial capture. He serves as the Vice President of ACM SIGGRAPH and recently received an Academy Award® for his work on the Light Stage facial capture systems.
Last modified: 29 March 2012