Light field microscopy is a rapid, scan-less volume imaging technique that requires only a standard wide-field fluorescence microscope and a microlens array. Unlike scanning microscopes, which collect volumetric information over time, the light field microscope captures an entire volume in a single photographic exposure, at speeds limited only by the frame rate of the image sensor. This is made possible by the microlens array, which focuses light onto the camera sensor so that each position in the volume is mapped onto the sensor as a unique light intensity pattern. These intensity patterns are the position-dependent point response functions of the light field microscope. With prior knowledge of these point response functions, it is possible to "decode" 3-D information from a raw light field image and computationally reconstruct a full volume.

In this talk I present an optical model for light field microscopy, based on wave optics, that accurately predicts these point response functions. I describe a GPU-accelerated iterative algorithm for reconstructing volumes, and discuss priors that are useful for imaging biological specimens. I then explore the diffraction limit that applies to light field microscopy, and how it gives rise to position-dependent resolution limits for this microscope. I'll explain how these limits differ from more familiar resolution metrics used in 3-D scanning microscopy, such as the Rayleigh limit and the optical transfer function (OTF). Using this theory of resolution limits, I explore new wavefront coding techniques that can reshape the light field resolution limits and mitigate certain common reconstruction artifacts. The resolution trade-offs involved suggest that light field microscopy is just one of potentially many useful forms of computational microscopy.
Finally, I describe our application of light field microscopy in neuroscience where we have used it to record calcium activity in populations of neurons within the brains of awake, behaving animals.
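To give a flavor of the kind of iterative reconstruction the abstract refers to, here is a rough, hypothetical sketch (not the speaker's actual method): a Richardson-Lucy-style multiplicative update applied to a toy measurement matrix whose columns stand in for the position-dependent point response functions. The function name, matrix sizes, and iteration count are illustrative assumptions; a real light field pipeline would use a wave-optics PSF model and GPU kernels rather than dense NumPy matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: each column of A acts as the point response function
# of one voxel, mapping volume intensities to sensor pixel intensities.
n_pixels, n_voxels = 50, 20
A = rng.random((n_pixels, n_voxels))
true_volume = rng.random(n_voxels)
image = A @ true_volume                      # simulated, noise-free light field image

def richardson_lucy(A, image, n_iter=500):
    """Multiplicative update; the estimate stays non-negative by construction."""
    x = np.ones(A.shape[1])                  # flat initial volume estimate
    normalizer = A.T @ np.ones(A.shape[0])   # column sums of A
    for _ in range(n_iter):
        forward = A @ x                      # predicted image under current estimate
        ratio = image / np.maximum(forward, 1e-12)
        x *= (A.T @ ratio) / np.maximum(normalizer, 1e-12)
    return x

estimate = richardson_lucy(A, image)
residual = np.linalg.norm(A @ estimate - image) / np.linalg.norm(image)
print(f"relative residual: {residual:.4f}")
```

The multiplicative form is a common choice for fluorescence deconvolution because photon counts are non-negative; priors such as sparsity or smoothness, as discussed in the talk, would enter as extra terms in the update.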
Michael Broxton grew up in Los Alamos, NM, where he had his first exposure to scientific computing systems at Los Alamos National Laboratory, learning the value of shared scientific enterprise. He attended MIT, where he earned his Bachelor's and Master's degrees in EE/CS while also doing research in the MIT Media Laboratory. Michael then moved to California to work at NASA Ames Research Center for six years in robotics and computer vision, with a particular focus on building automated software pipelines for processing satellite imagery of Mars and the Moon. During that time he collaborated with Google to release Moon and Mars modes for Google Earth 5.0. He then entered the Stanford PhD program in Computer Science, transitioning from the macrocosmos to the microcosmos: developing the theory of light field microscopy, improving its performance, and demonstrating its applications in neuroscience. He recently graduated, and his PhD research is the subject of this talk. Michael now works as a research scientist at Google.