
How models of spatio-chromatic vision can help us design better technology
Packard 101
Talk Abstract: Imaging and display systems should be optimized for the best visual quality rather than for the smallest error in pixel values. Yet the latter approach still dominates. In this talk, I will explain two visual models that could help optimize visual quality: our new contrast sensitivity function for achromatic and chromatic spatio-temporal patterns, castleCSF, and our new colour video quality metric, ColorVideoVDP. castleCSF is a comprehensive model of spatio-temporal-chromatic contrast sensitivity that accounts for chromaticity, area, spatial and temporal frequency, luminance, and eccentricity. ColorVideoVDP is a differentiable image and video quality metric that models human colour and spatio-temporal vision. Both can be used as differentiable loss functions that ignore information invisible to the human eye.
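To illustrate what "used as a differentiable loss function" means in practice, here is a minimal sketch of optimizing an image against a reference with gradient descent in PyTorch. The function perceptual_metric below is a hypothetical placeholder standing in for a visibility-aware metric such as ColorVideoVDP; it is not the actual package API, and the simple error it computes is only there so the example runs end to end.

```python
import torch

def perceptual_metric(test: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    # Placeholder: a real metric like ColorVideoVDP would weight errors by their
    # visibility to the human eye (contrast sensitivity, masking, colour vision).
    # A plain mean-squared error is used here only to keep the sketch runnable.
    return torch.mean((test - reference) ** 2)

# Reference image and an initial reconstruction to be optimized (N x C x H x W).
reference = torch.rand(1, 3, 64, 64)
test = torch.rand(1, 3, 64, 64, requires_grad=True)

optimizer = torch.optim.Adam([test], lr=1e-2)
for step in range(100):
    optimizer.zero_grad()
    loss = perceptual_metric(test, reference)  # differentiable quality score
    loss.backward()                            # gradients flow through the metric
    optimizer.step()
```

Because the metric is differentiable, the same pattern can be used to train or fine-tune an imaging or display pipeline so that it spends its limited bits or computation only on differences the eye can actually see.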
Speaker Biography: Rafał K. Mantiuk is a Professor of Graphics and Displays at the Department of Computer Science and Technology, University of Cambridge (UK). He received his Ph.D. from the Max-Planck Institute for Computer Science (Germany). His recent interests focus on computational displays, rendering, and imaging algorithms that adapt to human visual performance and deliver the best image quality given limited resources, such as computation time or bandwidth. He contributed to early work on high dynamic range imaging, including quality metrics (HDR-VDP), video compression, and tone-mapping. More recently, he led an ERC-funded project on a capture and display system that passed the visual Turing test: 3D objects were reproduced with a fidelity that made them indistinguishable from their real counterparts.
Further details: http://www.cl.cam.ac.uk/~rkm38/