In this talk I will describe our system for capturing, reconstructing, compressing, and rendering high-quality immersive light field video. We record immersive light fields using a custom array of 46 time-synchronized cameras distributed on the surface of a hemispherical, 92cm diameter dome. From this data we produce 6DOF volumetric videos with a wide 80cm viewing baseline, an angular resolution of 10 pixels per degree, and a wide field of view (>220 degrees), at 30fps. Even though the cameras are placed 18cm apart on average, our system can reconstruct objects as close as 20cm to the camera rig. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multiplane image (MPI) scene representation with a collection of spherical shells that are better suited to representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser.
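Layered RGBA representations like the ones described above (MPIs or spherical shells) are typically rendered by alpha-compositing the layers from back to front with the standard "over" operator. The following is a minimal NumPy sketch of that idea, not the production renderer; the function name and array layout are my own assumptions for illustration.

```python
import numpy as np

def composite_rgba_layers(layers):
    """Alpha-composite a stack of RGBA layers back to front
    with the 'over' operator.

    layers: list of (H, W, 4) float arrays in [0, 1], ordered
            farthest-to-nearest (as seen from the novel viewpoint).
    Returns an (H, W, 3) RGB image.

    Note: this is a generic compositing sketch, not the paper's
    actual shader or mesh-based renderer.
    """
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:  # iterate far -> near
        rgb = layer[..., :3]
        alpha = layer[..., 3:4]  # keep last axis for broadcasting
        # 'over' operator with straight (non-premultiplied) alpha
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Tiny example: an opaque red back layer under a 50%-alpha green front layer.
back = np.array([[[1.0, 0.0, 0.0, 1.0]]])
front = np.array([[[0.0, 1.0, 0.0, 0.5]]])
result = composite_rgba_layers([back, front])  # -> [[[0.5, 0.5, 0.0]]]
```

In the real system the layers are warped into the novel view (e.g. as textured spherical meshes) before compositing, and the blend runs on the GPU, but the per-pixel math is the same over operation shown here.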
Dr. Michael Broxton - From satellites orbiting Mars and the Moon to microscopes peering into the brains of mice and zebrafish, Michael has worked on imaging and computer vision problems spanning the macrocosmos to the microcosmos. After working early in his career at Los Alamos National Lab, MIT, and NASA Ames Research Center, Michael returned to get his PhD at Stanford under Marc Levoy. There he discovered a deep interest in light fields, and has been researching them ever since. Michael joined Google in 2018 and has been working to develop new deep learning methods to solve light field imaging and view synthesis problems.