Traditional lenses are optimized for 2D imaging, which prevents them from capturing extra dimensions of the incident light field (e.g., depth or high-speed dynamics) without multiple exposures or moving parts. Leveraging ideas from compressed sensing, I replace the lens of a traditional camera with a single pseudorandom free-form optic called a diffuser. The diffuser creates a pseudorandom point spread function that multiplexes these extra dimensions into a single 2D exposure taken with a standard sensor. The image is then recovered by solving a sparsity-constrained inverse problem. This lensless camera, dubbed DiffuserCam, is capable of snapshot 3D imaging at video rates, of encoding a high-speed video (>4,500 fps) into a single rolling-shutter exposure, and of video-rate 3D imaging of fluorescence signals, such as neurons, in a device weighing under 3 grams.
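To make the recovery step concrete, the following is a minimal toy sketch of a sparsity-constrained inverse problem of this flavor: a sparse scene is blurred by a pseudorandom PSF (standing in for the diffuser's caustic pattern), and the scene is recovered from the single 2D measurement by ISTA (iterative soft thresholding). This is purely illustrative — the PSF, scene, and solver here are assumptions for the sketch, not the actual DiffuserCam calibration or reconstruction pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Toy sparse scene: a few bright points on a dark background (illustrative).
x_true = np.zeros((n, n))
x_true[rng.integers(0, n, 5), rng.integers(0, n, 5)] = 1.0

# Pseudorandom PSF standing in for the diffuser's caustics (illustrative).
psf = rng.random((n, n))
psf /= psf.sum()

H = np.fft.fft2(psf)  # shift-invariant forward model, applied in Fourier domain

def A(x):
    """Forward model: circular convolution of the scene with the PSF."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * H))

def At(y):
    """Adjoint of the forward model (correlation with the PSF)."""
    return np.real(np.fft.ifft2(np.fft.fft2(y) * np.conj(H)))

y = A(x_true)  # the single 2D "exposure"

# ISTA: gradient step on the data-fidelity term, then soft-threshold
# to enforce sparsity of the recovered scene.
step = 1.0 / np.abs(H).max() ** 2  # <= 1 / Lipschitz constant of grad(0.5*||Ax-y||^2)
lam = 1e-6                         # sparsity weight (illustrative value)
x = np.zeros_like(y)
for _ in range(200):
    x = x - step * At(A(x) - y)                            # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0)  # soft threshold
```

In practice, lensless pipelines add refinements this sketch omits (cropping to the sensor, non-negativity, accelerated or ADMM-style solvers, a measured rather than simulated PSF), but the structure — a multiplexing forward model plus a sparsity prior — is the same.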
Nick Antipa is in the final year of his PhD at UC Berkeley, where he studies computational imaging with Laura Waller and Ren Ng. Nick studied optical engineering as an undergraduate at UC Davis and earned his MS in Optics from the University of Rochester. Between his MS and PhD, he developed automated optical metrology systems for the National Ignition Facility at Lawrence Livermore National Lab.