The availability of academic and commercial light field camera systems has spurred significant research into the use of light fields and multi-view imagery in computer vision and computer graphics. In this talk, we discuss our results from the past few years, organized around a few themes. First, we describe our work on a unified formulation of shape recovery from light field cameras, combining cues such as defocus, correspondence, and shading. We then go beyond photoconsistency, addressing non-Lambertian objects, occlusions, and an SVBRDF-invariant shape recovery algorithm. Finally, we show that advances in machine learning can be used to interpolate light fields from very sparse angular samples (in the limit, from a single 2D image) and to create light field videos from sparse temporal samples. We also discuss recent work that combines machine learning with plenoptic sampling theory to enable virtual exploration of real scenes from a very sparse set of input images captured with a handheld mobile phone.