In this talk, I will present several recent results from my group on learning neural implicit 3D representations, departing from the traditional paradigm of representing 3D shapes explicitly using voxels, point clouds or meshes. Implicit representations have a small memory footprint and allow for modeling arbitrary 3D topologies at (theoretically) arbitrary resolution in continuous function space. I will discuss the abilities and limitations of these approaches in the context of reconstructing 3D geometry, texture and motion. I will further demonstrate a technique for learning implicit 3D models using only 2D supervision, based on implicit differentiation of the level set constraint. Finally, I will show how implicit models can tackle large-scale reconstructions, and I will introduce GRAF and GIRAFFE, two generative 3D models for neural radiance fields that are able to generate 3D-consistent photo-realistic renderings from unstructured and unposed image collections.
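To make the core idea concrete: an implicit representation encodes a shape as a continuous function whose level set defines the surface, so the surface can be queried at any resolution without storing voxels. The sketch below is purely illustrative and not taken from the talk; it uses an analytic signed distance function of a sphere in place of a learned neural network, and all function names are hypothetical.

```python
import math

def sphere_sdf(x, y, z, radius=1.0):
    """Signed distance to a sphere: negative inside, zero on the
    surface, positive outside. In learned implicit models, a neural
    network plays the role of this function."""
    return math.sqrt(x * x + y * y + z * z) - radius

def count_occupied(n, radius=1.0):
    """Sample the implicit function on an n^3 grid over [-1.5, 1.5]^3.
    The resolution n is chosen at query time; the representation
    itself stores only the function, not a grid."""
    occupied = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # Map grid indices to coordinates in [-1.5, 1.5].
                px, py, pz = ((-1.5 + 3.0 * t / (n - 1)) for t in (i, j, k))
                if sphere_sdf(px, py, pz, radius) <= 0.0:
                    occupied += 1
    return occupied
```

Because the function is continuous, the same representation can be evaluated on a coarse 8^3 grid or a fine 256^3 grid after the fact, which is what "arbitrary resolution" refers to above.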
Bio: Andreas Geiger is a professor at the University of Tübingen and a group leader at the Max Planck Institute for Intelligent Systems. Prior to this, he was a visiting professor at ETH Zürich and a research scientist at MPI-IS. He studied at KIT, EPFL and MIT and received his PhD from KIT in 2013. His research interests lie at the intersection of 3D reconstruction, motion estimation, scene understanding and sensorimotor control. He maintains the KITTI vision benchmark and coordinates the ELLIS PhD and PostDoc program.