SCIEN Talk: Perceptual Modeling with Multimodal Sensing

Wednesday, November 28, 2018 - 4:30pm
Packard 101
Dr. Petr Kellnhofer (MIT)
Abstract / Description: 

Research on human perception has enabled many visual applications in computer graphics that use computational resources efficiently to deliver a high-quality experience within the limits of the hardware. Beyond vision, humans perceive their surroundings through a variety of senses to build a mental model of the world and act upon it. This mental image is often incomplete or incorrect, which can have safety implications. Since we cannot see directly inside the head, we must read indirect signals projected outside it. In the first part of the talk, I will show how perceptual modeling can be used to overcome and exploit the limitations of one specific human sense: vision. Then I will describe how we can build sensors to observe other human interactions, first those involving physical touch and then eye-gaze patterns. Finally, I will outline how such readings can be used to teach computers to understand human behavior, to make predictions, and to provide assistance or ensure safety.


Dr. Petr Kellnhofer completed his PhD at the Max Planck Institute for Informatics in Germany under the supervision of Prof. Hans-Peter Seidel and Prof. Karol Myszkowski. His thesis on perceptual modeling of human vision for stereoscopy received the Eurographics PhD Award. Since graduating, he has been a postdoc in the group of Prof. Wojciech Matusik at MIT CSAIL, working on topics related to human sensing such as eye tracking. Dr. Kellnhofer's current research combines perceptual modeling with machine learning to exploit data gathered from various types of sensors and to learn about human perception and higher-level behavior.