Efficient Neural Scene Representation, Rendering, and Generation
Packard 202
Abstract: Neural radiance fields and scene representation networks offer unprecedented capabilities for photorealistic scene representation, view interpolation, and many other tasks. In this talk, we discuss expressive scene representation network architectures, efficient neural rendering approaches, and generalization strategies that allow us to generate photorealistic, multi-view-consistent humans or cats using state-of-the-art 3D GANs and diffusion models.
Bio: Gordon Wetzstein is an Associate Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He leads the Stanford Computational Imaging Lab and is a faculty co-director of the Stanford Center for Image Systems Engineering. Working at the intersection of computer graphics and vision, artificial intelligence, computational optics, and applied vision science, Prof. Wetzstein pursues research with a wide range of applications in next-generation imaging, wearable computing, and neural rendering systems. Prof. Wetzstein is a Fellow of Optica and the recipient of numerous awards, including an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers (PECASE), an SPIE Early Career Achievement Award, an Electronic Imaging Scientist of the Year Award, and an Alain Fournier Ph.D. Dissertation Award, as well as many Best Paper and Demo Awards.
This talk is hosted by the ISL Colloquium. To receive talk announcements, subscribe to the mailing list at isl-colloq@lists.stanford.edu.