
The final program of SAP 2012 is now available.

Keynote presentations at SAP 2012 will be given by Prof. Roberta Klatzky and Prof. Paul Debevec.

Friday, 3rd August 2012

08:00-09:00 Registration, Poster Set-Up, Continental Breakfast
09:00-09:15 Opening Remarks
09:15-10:25 Session 1: Virtual Environments 1: Distance
  • Stepping over and ducking under: The influence of an avatar on locomotion in an HMD-based immersive virtual environment
    Lin Qiufeng, John Rieser, Bobby Bodenheimer
  • Improvements in Visually Directed Walking in Virtual Environments Cannot be Explained by Changes in Gait Alone
    J. Adam Jones, J. Edward Swan II, Gurjot Singh, Sujan Reddy, Kenneth Moser, Chunya Hua, Stephen R. Ellis
  • To Move or Not to Move: Can Active Control and User-Driven Motion Cueing Enhance Self-Motion Perception ("Vection") in Virtual Reality?
    Bernhard E. Riecke, Daniel Feuereissen
10:25-11:00 Coffee Break
11:00-12:15 Session 2: Faces, Behaviour & Animation
  • Which Facial Profile Do Humans Expect After Seeing a Front View? - A Comparison with a Linear Face Model
    Matthaeus Schumacher, Volker Blanz
  • Online Simulation of Emotional Interactive Behaviors with Hierarchical Gaussian Process Dynamical Models
    Nick Taubert, Andrea Christensen, Dominik Endres, Martin A. Giese
  • How Responsiveness Affects Players' Perception in Digital Games
    Sophie Jörg, Aline Normoyle, Alla Safonova
12:15-14:00 Lunch
14:00-15:00 Keynote 1: Roberta Klatzky
15:00-15:30 Posters Fast Forward
15:30-16:45 Coffee Break + Poster Session
16:45-18:00 Session 3: Tone Mapping & Rendering
  • Single Trial EEG Classification of Artifacts in Videos
    Maryam Mustafa, Stefan Guthe, Marcus Magnor
  • Dynamic Range Compression by Differential Zone Mapping Based on Psychophysical Experiments
    Francesco Banterle, Alessandro Artusi, Elena Sikudova, Thomas Bashford-Rogers, Patrick Ledda, Marina Bloj, Alan Chalmers
  • Playing with Puffball: Simple Scale-Invariant Inflation for Use in Vision and Graphics
    Nathaniel R. Twarog, Marshall F. Tappen, Edward H. Adelson
19:00 Reception

Saturday, 4th August 2012

08:30-09:00 Coffee
09:00-10:00 Keynote 2: Paul Debevec
10:10-11:00 Session 4: Gaze in Art
  • Inferring Artistic Intention in Comic Art through Viewer Gaze
    Eakta Jain, Yaser Sheikh, Jessica Hodgins
  • Directing Gaze in Narrative Art
    Ann McNamara, Stephen Caffey, Thomas Booth, Cindy Grimm, Srinivas Sridharan, Reynold Bailey
11:00-11:30 Coffee Break + Graphics Lab Tour
11:30-12:40 Session 5: Virtual Environments 2: Movement
  • Minification affects verbal and action-based distance judgments differently in head-mounted displays
    Ruimin Zhang, Anthony Nordman, James Walker, Scott A. Kuhl
  • Effects of Calibration to Visual and Haptic Feedback on Near-Field Depth Perception in an Immersive Virtual Environment
    Bliss M. Altenhoff, Phillip E. Napieralski, Lindsay O. Long, Jeffrey W. Bertrand, Christopher C. Pagano, Sabarish V. Babu, Timothy A. Davis
  • Evaluating the Accuracy of Size Perception in Real and Virtual Environments
    Jeanine K. Stefanucci, David A. Lessard, Michael N. Geuss, Sarah H. Creem-Regehr, William B. Thompson
12:40-14:00 Lunch
14:00-15:15 Session 6: Eyes & Gaze
  • Visual saliency and emotional salience influence eye movements
    Yaqing Niu, Rebecca M. Todd, Matthew Kyan, Adam K. Anderson
  • Multimodal Learning with Audio Description: An Eye Tracking Study of Children's Gaze During a Visual Recognition Task
    Krzysztof Krejtz, Izabela Krejtz, Andrew Duchowski, Agnieszka Szarkowska, Agnieszka Walczak
  • Apparent Resolution Enhancement for Motion Videos
    Floraine Berthouzoz, Raanan Fattal
15:15-15:45 Coffee Break
15:45-17:10 Session 7: Displays & Stereo
  • Kinect Based 3D Object Manipulation on a Desktop Display
    Mukund Raj, Sarah H. Creem-Regehr, Kristina M. Rand, Jeanine K. Stefanucci, William B. Thompson
  • Perception of blending in stereo motion panoramas
    Vincent Couture, Michael S. Langer, Sebastien Roy
  • Mobile Projectors versus Mobile Displays: an Assessment of Task Performance
    Tania Pouli, Sriram Subramanian
  • Analyzing Effects of Geometric Rendering Parameters on Size and Distance Estimation in On-Axis Stereographics
    Gerd Bruder, Andreas Pusch, Frank Steinicke
17:10-18:00 Business Meeting

Keynote - Prof. Roberta Klatzky

Title: The Basis for Action is Perception: Natural, Augmented, or Virtual

Abstract: Voluntary actions are directed by perceptual processing, which comprises a complex chain of computations originating in sensory channels. Attendees at this conference are well aware that control of the sensory input leads to control of the percept and, ultimately, the motor response. There are many choices for how sensory signals should be manipulated, however, and basic research in perception and action can provide guiding principles for implementation. In this talk I will provide examples from my own research, representing extremes of force control. In one case, a bimanual force-feedback system enables tele-operation of a robot digging in sand. In a second case, visual augmented reality enhances the penetration of human tissue. Both scenarios are grounded in an understanding of human perception in relation to action.

Roberta Klatzky is Professor of Psychology at Carnegie Mellon University, where she is also on the faculty of the Center for the Neural Basis of Cognition and the Human-Computer Interaction Institute. She received a B.S. in mathematics from the University of Michigan and a Ph.D. in cognitive psychology from Stanford University. She is the author of over 200 articles and chapters, and she has authored or edited 6 books. Her research investigates perception, spatial thinking, and action from the perspective of multiple modalities, sensory and symbolic, in real and virtual environments. Klatzky's basic research has been applied to tele-manipulation, image-guided surgery, navigation aids for the blind, and neural rehabilitation. Klatzky is a fellow of the American Association for the Advancement of Science, the American Psychological Association, and the Association for Psychological Science, and a member of the Society of Experimental Psychologists (honorary). For her work on perception and action, she received an Alexander von Humboldt Research Award and the Kurt Koffka Medal from Justus-Liebig-University of Giessen, Germany. Her professional service includes governance roles in several societies and membership on the National Research Council's Committees on International Psychology, Human Factors, and Techniques for Enhancing Human Performance. She has served on research review panels for the National Institutes of Health, the National Science Foundation, and the European Commission. She has been a member of many editorial boards and is currently an associate editor of ACM Transactions on Applied Perception and IEEE Transactions on Haptics.

Keynote - Prof. Paul Debevec

Title: Crossing the Uncanny Valley: Achieving Photoreal Digital Actors

Abstract: Somewhere between "Final Fantasy" in 2001 and "The Curious Case of Benjamin Button" in 2008, digital actors crossed the "Uncanny Valley" from looking strangely synthetic to believably real. This talk describes some of the technological advances that have enabled this achievement. For an in-depth example, the talk describes how high-resolution face scanning, advanced character rigging, and performance-driven facial animation were combined to create "Digital Emily", a collaboration between the USC ICT Graphics Laboratory and Image Metrics. Actress Emily O'Brien was scanned in Light Stage 5 in 33 facial poses at the resolution of skin pores and fine wrinkles. These scans were assembled into a rigged face model driven by Image Metrics' video-based animation software, and the resulting photoreal facial animation premiered at SIGGRAPH 2008. The talk also presents techniques which may allow digital characters to leap from the movie screen and into the space around us, including a 3D teleconferencing system that uses live facial scanning and an autostereoscopic display to transmit a person's face in 3D and make eye contact with remote collaborators.

Paul Debevec is a Research Professor at the University of Southern California and the Associate Director of Graphics Research at USC's Institute for Creative Technologies. His work has focused on image-based modeling and rendering techniques beginning with his 1996 Ph.D. thesis at UC Berkeley, with specializations in architecture, high dynamic range lighting, and human facial capture. He serves as the Vice President of ACM SIGGRAPH and recently received an Academy Award® for his work on the Light Stage facial capture systems.

Sponsors: Disney Research, WorldViz


Last modified: 29 March 2012