Stanford hosts International Conference on Computational Photography (ICCP)

June 2017

By Julie Chang, PhD candidate

The ninth annual IEEE International Conference on Computational Photography (ICCP) was hosted at Stanford University on May 12-14, 2017. Over 200 students, postdocs, professors, and entrepreneurs from around the world came together to discuss their research in computational photography and imaging. Professor Gordon Wetzstein of Stanford served as program chair alongside Laura Waller of UC Berkeley and Clem Karl of Boston University.

Wetzstein leads the Computational Imaging group at Stanford, which works on advancing camera and display technology through interdisciplinary research in applied math, optics, human perception, computing, and electronics. Active areas of research include virtual reality displays, advanced imaging systems, and optimization-based image processing. Wetzstein also teaches the popular Virtual Reality course (EE 267) as well as Computational Imaging and Displays (EE 367) and Digital Image Processing (EE 368). Several members of Wetzstein's lab presented their work at the conference. A paper by Isaac Kauvar (co-advised by Karl Deisseroth) and Julie Chang, "Aperture interference and the volumetric resolution of light field fluorescence microscopy," was accepted for a talk. Posters and demos from Wetzstein's group included Nitish Padmanaban's provocatively titled project "Making Virtual Reality Better Than Reality," Robert Konrad's spinning VR camera nicknamed "Vortex," and Felix Heide's domain-specific language "ProxImaL" for efficient image optimization.

ICCP 2017 comprised nine presentation sessions, each with several accepted and invited talks, organized around topics such as time-of-flight and computational illumination, image processing and optimization, computational microscopy, and turbulence and coherence. The program mixed hardware and software projects for a wide variety of applications, ranging from gigapixel videos to seeing in the dark to photographic steganography. One keynote speaker was scheduled for each day. In Friday's keynote, Professor Karl Deisseroth (Stanford) discussed the importance of optical tools, namely optogenetics and advanced fluorescence microscopy, in elucidating the inner workings of the brain. The second keynote was given by Paul Debevec (USC/Google), who showed some of his team's work in computational relighting, used both in Hollywood to make movies such as 'Gravity' possible and at the White House to construct Barack Obama's presidential bust. The final keynote speaker was Professor Sabine Süsstrunk (EPFL), who spoke on uses of near-infrared imaging in computational photography beyond depth measurement.

The conference this year also included an industry panel on computational photography start-ups, featuring seasoned experts Rajiv Laroia of Light, Ren Ng of Lytro, Jingyi Yu, and Kartik Venkataraman of Pelican Imaging. Kari Pulli of Meta chaired a lively discussion covering the risks and thrills of startups, comparisons with working at large companies, and the future of the computational photography industry.

The best paper award went to Christian Reinbacher, Gottfried Munda, and Thomas Pock for their work on real-time panoramic tracking for event cameras. By popular vote, the best poster award was presented to Katie Bouman et al. for "Turning Corners into Cameras," a method of seeing around corners by analyzing the shadows produced at a wall corner, and the best demo award went to Grace Kuo et al. for "DiffuserCam," which allows imaging with a diffuser in place of a lens.