Historically, virtual reality (VR) with head-mounted displays (HMDs) has been associated with computer-generated content and gaming applications. However, recent advances in 360-degree cameras facilitate omnidirectional capture of real-world environments, creating content to be viewed on HMDs, a technology referred to as cinematic VR. This can be used to immerse the user in, for instance, a concert or a sports event. The main focus of this talk will be on data representations for creating such immersive experiences.
In cinematic VR, videos are usually represented in a spherical format to account for all viewing directions. To achieve high-quality streaming of such videos to millions of users, it is crucial to choose efficient representations for this type of data that maximize compression efficiency under resource constraints such as pixel count and bitrate. We formulate the choice of representation as a multi-dimensional, multiple-choice knapsack problem and show that the resulting representations adapt well to varying content.
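To make the knapsack formulation concrete, here is a minimal sketch of the selection problem, assuming a tiled spherical video where each tile offers a few candidate representations. The tile data, budgets, and quality values are illustrative placeholders, not figures from the talk; the exhaustive solver below simply demonstrates the multiple-choice structure (exactly one option per tile) with two resource dimensions (pixels and bitrate).

```python
# Hypothetical sketch of representation selection as a multi-dimensional,
# multiple-choice knapsack problem (MMKP): choose exactly one representation
# per tile to maximize total quality under pixel and bitrate budgets.
# All numbers are illustrative.

from itertools import product

# Each tile offers candidate representations: (pixels, bitrate, quality)
tiles = [
    [(1, 2, 3.0), (2, 4, 5.0), (4, 8, 6.5)],   # tile 0
    [(1, 2, 2.5), (2, 4, 4.0), (4, 8, 5.0)],   # tile 1
    [(1, 2, 3.5), (2, 4, 5.5), (4, 8, 7.0)],   # tile 2
]
PIXEL_BUDGET, BITRATE_BUDGET = 7, 14

def solve(tiles, max_pixels, max_bitrate):
    """Exhaustive MMKP solver: one option per tile, two resource constraints."""
    best, best_choice = -1.0, None
    for choice in product(*[range(len(t)) for t in tiles]):
        pix = sum(tiles[i][c][0] for i, c in enumerate(choice))
        bit = sum(tiles[i][c][1] for i, c in enumerate(choice))
        q = sum(tiles[i][c][2] for i, c in enumerate(choice))
        if pix <= max_pixels and bit <= max_bitrate and q > best:
            best, best_choice = q, choice
    return best, best_choice

quality, choice = solve(tiles, PIXEL_BUDGET, BITRATE_BUDGET)
print(quality, choice)
```

Brute force is only viable for a handful of tiles; in practice such problems are tackled with dynamic programming or Lagrangian relaxation, but the feasibility check and objective are the same.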
Existing cinematic VR systems update the viewport according to head rotation, but do not support head translation or focus cues. We propose a new 3D video representation, referred to as depth-augmented stereo panorama, to address this issue. We show that this representation can successfully induce head-motion parallax within a predefined operating range, as well as generate light fields across the observer's pupils, suitable for use with emerging light field HMDs.
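The core idea behind depth-based parallax can be sketched as a simple forward warp: each pixel is shifted by a disparity inversely proportional to its depth, so near content moves more than far content as the head translates. The pinhole-style model, parameter names, and values below are assumptions for illustration, not the talk's actual rendering pipeline.

```python
# Hypothetical sketch: inducing head-motion parallax from a depth-augmented
# panorama row. Each pixel is forward-warped by a disparity inversely
# proportional to its depth; a z-buffer keeps the nearest pixel when
# several land on the same target. Model and parameters are illustrative.

def parallax_shift(row, depths, head_dx, focal=1.0):
    """Re-render one image row for a laterally translated viewpoint.

    row     : list of pixel values
    depths  : per-pixel depth (same length as row)
    head_dx : lateral head translation, in scene units (assumed model)
    focal   : focal length in pixel units (assumed model)
    """
    out = [None] * len(row)
    zbuf = [float('inf')] * len(row)
    for x, (value, z) in enumerate(zip(row, depths)):
        disparity = focal * head_dx / z          # near pixels shift farther
        x_new = round(x + disparity)
        if 0 <= x_new < len(out) and z < zbuf[x_new]:
            out[x_new], zbuf[x_new] = value, z   # nearest pixel wins
    return out

row = ['a', 'b', 'c', 'd', 'e']
depths = [1.0, 1.0, 10.0, 10.0, 10.0]            # 'a', 'b' near; rest far
print(parallax_shift(row, depths, head_dx=2.0))
```

Note how the near pixels shift while the far ones barely move, and how `None` entries appear where the translated viewpoint exposes previously occluded regions (disocclusion holes), which a real renderer would fill from the second eye's panorama or by inpainting.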
Haricharan Lakshman has been a Visiting Assistant Professor in the Electrical Engineering Department at Stanford University since Fall 2014. His research interests are broadly in image processing, visual computing, and communications. He received his PhD in Electrical Engineering from the Technical University of Berlin, Germany, in January 2014, while working as a researcher in the Image Processing Group of Fraunhofer HHI. From 2011 to 2012, he was a Visiting Researcher at Stanford. He was awarded the IEEE Communications Society MMTC Best Journal Paper Award in 2013, and was a finalist for the IEEE ICIP Best Student Paper Award in 2010 and 2012.