Imaging Anytime Anywhere All at Once: Capturing Dynamic Scenes from Seconds to Picoseconds
Packard 101
Talk Abstract: In this talk, I discuss the problem of imaging a dynamic scene over an extreme range of timescales simultaneously—from seconds to picoseconds—and doing so passively, without much light, and without synchronization to a light source. We address this problem with new techniques for flux estimation based on picosecond-accurate timestamps of photon arrivals and tools from stochastic calculus. We experimentally demonstrate new capabilities, including passive non-line-of-sight video acquisition and ultra-wideband video recording; the recorded video can be played back at 30 Hz to show everyday motion, or a billion times slower to show the propagation of light itself. Additionally, I discuss recent work that combines ultrafast imaging with neural rendering techniques to directly visualize propagating light and complex light transport effects, such as scattering, reflection, refraction, and diffraction, from novel viewpoints.
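To give a flavor of the flux-estimation problem mentioned above: single-photon sensors report individual photon arrival timestamps, and the incident flux (rate) must be inferred from those arrivals. The sketch below is only a rough illustration under a simplifying assumption of a constant-rate (homogeneous) Poisson process; it is not the speaker's method, which handles time-varying flux across many timescales. All variable names and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: simulate photon arrivals from a homogeneous
# Poisson process with a true flux of 1e6 photons/s over a 10 ms window.
true_flux = 1e6          # photons per second (illustrative value)
duration = 10e-3         # observation window in seconds
n = rng.poisson(true_flux * duration)
timestamps = np.sort(rng.uniform(0.0, duration, n))

# Maximum-likelihood flux estimate for a constant-rate Poisson process:
# photon count divided by observation time.
flux_mle = len(timestamps) / duration

# An equivalent estimate from the mean inter-arrival time (1 / mean gap),
# since gaps of a Poisson process are exponential with mean 1/flux.
gaps = np.diff(timestamps)
flux_from_gaps = 1.0 / gaps.mean()

print(f"count-based flux estimate: {flux_mle:.3e} photons/s")
print(f"gap-based flux estimate:   {flux_from_gaps:.3e} photons/s")
```

The hard part, glossed over here, is that real scenes have flux that varies from seconds down to picoseconds, so a single constant-rate estimate is inadequate; that multi-timescale estimation is the subject of the talk.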
Speaker Biography: David Lindell is an Assistant Professor in the Department of Computer Science at the University of Toronto. His research combines optics, emerging sensor platforms, machine learning, and physics-based algorithms to enable new capabilities in visual computing. Prior to joining the University of Toronto, he received his Ph.D. from Stanford University. He is a recipient of the 2021 ACM SIGGRAPH Outstanding Dissertation Honorable Mention Award and the 2023 Marr Prize.