
3D Asset Generation using Neural Radiance Fields

Matthew Tancik (UC Berkeley)
April 13

Talk Abstract:   Neural Radiance Fields (NeRFs) enable novel view synthesis of complex scenes by optimizing an underlying continuous volumetric scene function from a sparse set of input views. In the past two years these representations have received considerable interest from the community due to their simple implementation and high-quality results. In this talk I will discuss the core concepts behind NeRF and dive into the details of one specific technique that enables the networks to represent high-frequency signals. Finally, I will discuss a recent project in which we scale up NeRFs to represent large-scale scenes. Specifically, we use data captured from autonomous vehicles to reconstruct a neighborhood in San Francisco.
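The high-frequency technique alluded to above is Fourier-feature positional encoding: raw coordinates are mapped through sinusoids at geometrically increasing frequencies before being fed to the MLP, which lets the network fit fine detail. A minimal sketch (the band count and frequency schedule here are illustrative assumptions, not the talk's exact settings):

```python
import numpy as np

def fourier_features(x, num_bands=6):
    """Encode coordinates x in [0, 1]^d with sinusoids at
    geometrically increasing frequencies (pi, 2*pi, 4*pi, ...),
    so a downstream MLP can represent high-frequency signals."""
    x = np.asarray(x, dtype=np.float64)
    freqs = (2.0 ** np.arange(num_bands)) * np.pi   # shape (num_bands,)
    scaled = x[..., None] * freqs                   # shape (..., d, num_bands)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)           # shape (..., d * 2 * num_bands)

# A 3D point becomes a 3 * 2 * 6 = 36-dimensional feature vector.
pt = np.array([0.1, 0.5, 0.9])
print(fourier_features(pt).shape)  # (36,)
```

The key design choice is that without this mapping, a plain coordinate-input MLP is biased toward smooth (low-frequency) functions and blurs fine scene detail.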

 

Speaker Biography:   Matt Tancik is a PhD student at UC Berkeley advised by Ren Ng and Angjoo Kanazawa and is supported by the NSF graduate research fellowship program. He received his bachelor’s degree in CS and physics at MIT. He received a master’s degree in CS working on non-line-of-sight imaging while advised by Ramesh Raskar at MIT. His current research lies at the intersection of machine learning and graphics.

You must register in advance for this meeting:

https://stanford.zoom.us/meeting/register/tJwpc-GtqD4vHN1LtSDCi3ZZ5_SBS…