SCIEN Colloquium and EE 292E: Mobile Computational Photography with Ultrawide and Telephoto Lenses
Talk Abstract: The rapid proliferation of hybrid zoom cameras on smartphones, i.e., a main camera equipped with auxiliary cameras such as an ultrawide camera, a telephoto camera, or both, has brought new challenges and opportunities to mobile computational photography. Existing algorithms in this domain focus on the single main camera and give little attention to auxiliary cameras. In this talk, I will share how Google Pixel uses computational photography and ML to resolve the lens distortion common on ultrawide lenses, and to achieve motion deblurring and super-resolution through dual-camera fusion on hybrid-optics systems. We will also discuss open questions, industrial trends, and research ideas in mobile computational photography.
Speaker Biography: Yichang Shih is a Senior Staff Software Engineer at Google. He joined Google in 2017 and now leads a team developing computational photography and ML algorithms for Google Pixel Camera features. Prior to joining Google, he was a research scientist at Light.Co from 2015, working on multi-camera fusion across 16 cameras on mobile devices. Yichang's research interests span computational photography, computer vision, and machine learning, with a focus on imaging and video enhancement. He received his PhD in Computer Science from MIT CSAIL in 2015, under the supervision of Professors Frédo Durand and Bill Freeman. Prior to his PhD, he received his Bachelor's degree in Electrical Engineering from National Taiwan University in 2009.