SCIEN Colloquium and EE 292E present "How to Learn a Camera"

Topic: 
How to Learn a Camera
Wednesday, December 4, 2019 - 4:30pm
Venue: 
Packard 101
Speaker: 
Dr. Jon Barron (Google)
Abstract / Description: 

Traditionally, the image processing pipelines of consumer cameras have been carefully designed, hand-engineered systems. But treating an imaging pipeline as something to be learned rather than something to be engineered offers the potential benefits of being faster, more accurate, and easier to tune. Relying on learning in this fashion presents a number of challenges, such as fidelity, fairness, and data collection, which can be addressed through careful consideration of neural network architectures as they relate to the physics of image formation. In this talk I'll present recent work from Google's computational photography research team on using machine learning to replace traditional building blocks of a camera pipeline. I will present learning-based solutions for the classic tasks of denoising, white balance, and tone mapping, each of which uses a bespoke ML architecture designed around the specific constraints and demands of that task. By designing learning-based solutions around the structure provided by optics and camera hardware, we are able to produce solutions to these three tasks that are state-of-the-art in both accuracy and speed.
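To give a flavor of what a learned building block of a camera pipeline can look like, the sketch below is a minimal, purely illustrative PyTorch example of a learning-based white-balance module: a tiny CNN that estimates a scene illuminant from a raw frame, trained with an angular-error loss (the standard objective in color constancy). It is not the speaker's actual architecture; the class name, layer sizes, and the angular_loss helper are assumptions made for illustration only.

    # Illustrative sketch only, not the architecture presented in the talk.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyIlluminantNet(nn.Module):
        """Predicts a normalized RGB illuminant vector from a raw image."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),     # global pooling -> (B, 32, 1, 1)
            )
            self.head = nn.Linear(32, 3)     # three illuminant components

        def forward(self, x):
            feats = self.features(x).flatten(1)
            illum = self.head(feats)
            # Force a positive, unit-norm illuminant estimate.
            return F.normalize(illum.abs() + 1e-6, dim=1)

    def angular_loss(pred, target):
        """Angular error between predicted and ground-truth illuminants."""
        cos = (pred * F.normalize(target, dim=1)).sum(dim=1)
        return torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7)).mean()

    # Usage sketch: white balance = divide the raw image by the estimated illuminant.
    net = TinyIlluminantNet()
    raw = torch.rand(2, 3, 64, 64)           # stand-in for demosaicked raw frames
    illum = net(raw)                         # (2, 3) illuminant estimates
    balanced = raw / illum.view(-1, 3, 1, 1)

In a real pipeline the input would be the camera's raw frame rather than random tensors, and the predicted illuminant would be applied as per-channel gains before color correction and tone mapping.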

Bio:

Jon Barron is a staff research scientist at Google, where he works on computer vision and machine learning. He received a PhD in Computer Science from the University of California, Berkeley in 2013, where he was advised by Jitendra Malik, and he received an Honours BSc in Computer Science from the University of Toronto in 2007. He received a National Science Foundation Graduate Research Fellowship in 2009, the C.V. Ramamoorthy Distinguished Research Award in 2013, and the ECCV Best Paper Honorable Mention in 2016.