SCIEN Talk

SCIEN and EE292E present "Self-supervised Scene Representation Learning"

Topic: 
Self-supervised Scene Representation Learning
Abstract / Description: 

Unsupervised learning with generative models has the potential to discover rich representations of 3D scenes. Such Neural Scene Representations may subsequently support a wide variety of downstream tasks, ranging from robotics to computer graphics to medical imaging. However, existing methods ignore one of the most fundamental properties of scenes: their three-dimensional structure. In this talk, I will make the case for equipping Neural Scene Representations with an inductive bias for 3D structure, enabling self-supervised discovery of shape and appearance from few observations. By embedding an implicit scene representation in a neural rendering framework and learning a prior over these representations, I will show how we can enable 3D reconstruction from only a single posed 2D image. I will show how the features we learn in this process are already useful for the downstream task of semantic segmentation. I will then show how gradient-based meta-learning can enable fast inference of implicit representations.
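
As a rough illustration of that last step, the sketch below applies a MAML-style inner loop to specialize a small coordinate MLP to one scene's observations in a few gradient steps. This is a minimal sketch under assumed names and hyperparameters, not the speaker's implementation, and it substitutes a generic regression loss for a neural rendering loss.

```python
# Minimal sketch (illustrative, not the talk's code): gradient-based
# meta-learning for fast inference of an implicit representation.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Maps 3D coordinates to a scalar (e.g., signed distance or density)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        return self.net(xyz)

def adapt(model, coords, targets, steps=5, lr=1e-2):
    """Specialize meta-learned initial weights to one scene with a few
    SGD steps (the MAML-style inner loop)."""
    fast = {k: v.clone() for k, v in model.named_parameters()}
    for _ in range(steps):
        pred = torch.func.functional_call(model, fast, (coords,))
        loss = ((pred - targets) ** 2).mean()
        grads = torch.autograd.grad(loss, list(fast.values()), create_graph=True)
        fast = {k: w - lr * g for (k, w), g in zip(fast.items(), grads)}
    return fast

# Toy usage: adapt to random "observations" of a single scene.
model = CoordinateMLP()
weights = adapt(model, torch.rand(128, 3), torch.rand(128, 1))
```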

Date and Time: 
Wednesday, January 20, 2021 - 4:30pm

SCIEN and EE292E present "World’s Deepest-Penetration and Fastest Optical Cameras: Photoacoustic Tomography and Compressed Ultrafast Photography"

Topic: 
World’s Deepest-Penetration and Fastest Optical Cameras: Photoacoustic Tomography and Compressed Ultrafast Photography
Abstract / Description: 

We developed photoacoustic tomography (PAT) to peer deep into biological tissue. PAT provides in vivo omniscale functional, metabolic, molecular, and histologic imaging at scales ranging from organelles to whole organisms. We also developed compressed ultrafast photography (CUP) to record 10 trillion frames per second in real time, orders of magnitude faster than commercially available camera technologies. CUP can capture the fastest phenomenon in the universe, namely light propagation, at light speed, and its frame rate can be reduced to record slower phenomena such as combustion.
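
For background on how PAT forms images: detectors around the tissue record ultrasound generated by pulsed-laser absorption, and reconstruction maps acoustic times of flight back to source locations. The sketch below shows textbook delay-and-sum back-projection, a simplified stand-in for the reconstruction methods used in practice; all parameter values are illustrative.

```python
# Minimal sketch (textbook delay-and-sum, not the speaker's reconstruction
# code): back-project recorded photoacoustic traces onto an image grid.
import numpy as np

def delay_and_sum(signals, det_pos, grid, c=1500.0, fs=40e6):
    """signals: (n_det, n_samples) pressure traces
    det_pos:  (n_det, 2) detector coordinates [m]
    grid:     (n_pix, 2) image-pixel coordinates [m]
    c: speed of sound [m/s]; fs: sampling rate [Hz]."""
    n_det, n_samples = signals.shape
    image = np.zeros(len(grid))
    for i in range(n_det):
        # Time of flight from each pixel to this detector, as a sample index.
        dist = np.linalg.norm(grid - det_pos[i], axis=1)
        idx = np.clip((dist / c * fs).astype(int), 0, n_samples - 1)
        image += signals[i, idx]
    return image

# Toy usage: 8 detectors on a line, random traces, 32x32 image grid.
rng = np.random.default_rng(0)
det = np.stack([np.linspace(-0.02, 0.02, 8), np.full(8, -0.03)], axis=1)
xs = np.linspace(-0.01, 0.01, 32)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
img = delay_and_sum(rng.normal(size=(8, 2048)), det, grid).reshape(32, 32)
```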


Registration is required to attend. The talks will be presented via Zoom, and you will receive a Zoom meeting URL when you register for the presentation.

Date and Time: 
Wednesday, March 17, 2021 - 4:30pm

SCIEN and EE292E present "A look towards the future of computational optical microscopy"

Topic: 
A look towards the future of computational optical microscopy
Abstract / Description: 

Optical computational imaging seeks enhanced performance and new functionality through the joint design of illumination, unconventional optics, detectors, and reconstruction algorithms. Among the emergent approaches in this field, two remarkable examples are overcoming the diffraction limit and imaging through complex media.
Abbe's resolution limit has been overcome, enabling unprecedented opportunities for optical imaging at the nanoscale. Fluorescence imaging using photoactivatable or photoswitchable molecules within computational optical systems offers single-molecule sensitivity over a wide field of view. Three-dimensional point-spread-function engineering, combined with optimal reconstruction algorithms, provides a unique approach to further increasing resolution in three dimensions.
Focusing and imaging through strongly scattering media has also recently been accomplished in the optical regime. Using a feedback system and optical modulation, the resulting wavefronts overcome the effects of multiple scattering upon propagation through the medium. Phase-control holographic techniques based on micro-electro-mechanical technology help characterize scattering media at high speed, allowing focusing through a temporally dynamic, strongly scattering sample or a multimode fiber. In this talk we will further discuss implications for ultrathin optical endoscopy and adaptive nonlinear wavefront shaping.
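
To make the feedback idea concrete, the sketch below follows the classic sequential wavefront-shaping scheme of Vellekoop and Mosk: each SLM segment's phase is stepped through trial values while the intensity at the target is monitored, and the best value is kept. The random transmission vector stands in for the unknown scattering medium; this is an illustrative sketch, not the speaker's system.

```python
# Minimal sketch: feedback-based wavefront shaping through a scattering
# medium, optimizing one SLM segment at a time.
import numpy as np

rng = np.random.default_rng(0)
n_segments = 64
# Stand-in for the unknown medium: a fixed random transmission vector.
t = rng.normal(size=n_segments) + 1j * rng.normal(size=n_segments)

def focus_intensity(phases):
    """Intensity at the target for a given SLM phase pattern."""
    field = np.sum(t * np.exp(1j * phases))
    return abs(field) ** 2

phases = np.zeros(n_segments)
trial_phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
for k in range(n_segments):
    # Measure the target intensity for each trial phase of segment k,
    # then keep the best one (the feedback step).
    scores = []
    for p in trial_phases:
        phases[k] = p
        scores.append(focus_intensity(phases))
    phases[k] = trial_phases[int(np.argmax(scores))]
```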


Registration is required to attend. The talks will be presented via Zoom, and you will receive a Zoom meeting URL when you register for the presentation.

Date and Time: 
Wednesday, March 3, 2021 - 4:30pm

SCIEN and EE292E present "Skydio Autonomy: Research in Robust Visual Navigation and Real-Time 3D Reconstruction"

Topic: 
Skydio Autonomy: Research in Robust Visual Navigation and Real-Time 3D Reconstruction
Abstract / Description: 

Skydio is the leading US drone company and the world leader in autonomous flight. Our drones are used for everything from capturing amazing video, to inspecting bridges, to tracking progress on construction sites.

At the core of our products is a vision-based autonomy system with seven years of development at Skydio, drawing on decades of academic research. This system pushes the state of the art in deep learning, geometric computer vision, motion planning, and control with a particular focus on real-world robustness.

Drones encounter extreme visual scenarios neither typically considered in academic research nor encountered by cars, ground robots, or AR applications. They are commonly flown in scenes with few or no semantic priors and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, and water. These challenges are daunting for classical vision methods because photometric signals are simply not consistent, and for learning-based methods because there is no ground truth for direct supervision of deep networks. In this talk we'll take a detailed look at our approaches to these problems.

We will also discuss new capabilities on top of our core navigation engine to autonomously map complex scenes and build high quality digital twins, by performing real-time 3D reconstruction across multiple flights. Our vision-based 3D Scan approach allows anyone to build millimeter-scale maps of the world.
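
As context for readers unfamiliar with real-time 3D reconstruction, a standard building block in this area (not necessarily Skydio's proprietary pipeline) is truncated signed distance function (TSDF) fusion, which averages depth observations from many views or flights into a voxel grid. A minimal sketch under simplifying assumptions:

```python
# Minimal sketch (standard TSDF fusion, not Skydio's method): integrate a
# depth observation into a voxel grid of truncated signed distances.
import numpy as np

def integrate_depth(tsdf, weights, voxel_centers, depth, cam_pos, trunc=0.1):
    """voxel_centers: (N, 3) voxel positions [m]; depth: (N,) measured
    surface distance along each voxel's ray from cam_pos (a simplification
    of projecting voxels into a depth image)."""
    dist = np.linalg.norm(voxel_centers - cam_pos, axis=1)
    sdf = np.clip(depth - dist, -trunc, trunc) / trunc  # signed, truncated
    valid = depth - dist > -trunc  # skip voxels far behind the surface
    # Running weighted average over observations from multiple flights.
    tsdf[valid] = (tsdf[valid] * weights[valid] + sdf[valid]) / (weights[valid] + 1)
    weights[valid] += 1
    return tsdf, weights

# Toy usage on a handful of voxels along one ray.
grid = np.array([[0.0, 0.0, z] for z in np.linspace(0.5, 1.5, 11)])
tsdf = np.zeros(len(grid))
w = np.zeros(len(grid))
tsdf, w = integrate_depth(tsdf, w, grid, depth=np.full(len(grid), 1.0),
                          cam_pos=np.zeros(3))
```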


Registration is required to attend. The talks will be presented via Zoom, and you will receive a Zoom meeting URL when you register for the presentation.

Date and Time: 
Wednesday, February 24, 2021 - 4:30pm

SCIEN and EE292E present "The Plenoptic Camera"

Topic: 
The Plenoptic Camera
Abstract / Description: 

Imagine a futuristic version of Google Street View that could dial up any possible place in the world, at any possible time. Effectively, such a service would be a recording of the plenoptic function—the hypothetical function described by Adelson and Bergen that captures all light rays passing through space at all times. While the plenoptic function is completely impractical to capture in its totality, every photo ever taken represents a sample of this function. I will present recent methods we've developed to reconstruct the plenoptic function from sparse space-time samples of photos—including Street View itself, as well as tourist photos of famous landmarks. The results of this work include the ability to take a single photo and synthesize a full dawn-to-dusk timelapse video, as well as compelling 4D view synthesis capabilities where a scene can simultaneously be explored in space and time.
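
For reference, Adelson and Bergen's plenoptic function is commonly written as a seven-dimensional function P(x, y, z, theta, phi, lambda, t); the sketch below simply spells out that parameterization to make the notion of a "sample" concrete.

```python
# A single sample of the plenoptic function P(x, y, z, theta, phi, lambda, t):
# every photograph ever taken records a bundle of such samples.
from dataclasses import dataclass

@dataclass
class PlenopticSample:
    x: float           # viewpoint position in space
    y: float
    z: float
    theta: float       # ray direction (spherical angles)
    phi: float
    wavelength: float  # color
    t: float           # time
    radiance: float    # the function's value along this ray
```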


Registration is required to attend. The talks will be presented via Zoom, and you will receive a Zoom meeting URL when you register for the presentation.

Date and Time: 
Wednesday, February 17, 2021 - 4:30pm

SCIEN and EE292E present "Neural Holography: Incorporating Optics and Artificial Intelligence for Next-generation Computer-generated Holographic Displays"

Topic: 
Neural Holography: Incorporating Optics and Artificial Intelligence for Next-generation Computer-generated Holographic Displays
Abstract / Description: 

Holographic displays promise unprecedented capabilities for direct-view displays as well as virtual and augmented reality applications. However, one of the biggest challenges for computer-generated holography (CGH) is the fundamental tradeoff between algorithm runtime and achieved image quality. Moreover, the image quality achieved by most holographic displays is low, due to the mismatch between the optical wave propagation of the display and its simulated model. We develop an algorithmic CGH framework that achieves unprecedented image fidelity and real-time framerates. Our framework comprises several parts: a novel camera-in-the-loop optimization strategy that allows us to either optimize a hologram directly or train an interpretable model of the optical wave propagation, and a neural network architecture that represents the first CGH algorithm capable of generating full-color, high-quality holographic images at full-HD resolution in real time. Building on this framework, we further propose a holographic display architecture using two SLMs, to which camera-in-the-loop optimization with an automated calibration procedure is applied. Both diffracted and undiffracted light on the target plane are acquired and used to update the hologram patterns on the two SLMs simultaneously. Compared with conventional single-SLM systems, the experimental demonstration delivers higher-contrast, less noisy holographic images without the need for extra filtering. In summary, we envision that bringing advances in artificial intelligence into conventional optics and photonics research opens many opportunities for both communities and promises to enable high-fidelity imaging and display solutions.
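
To illustrate the core optimization loop, the sketch below optimizes a phase-only SLM pattern by gradient descent through a simulated wave-propagation model (the angular spectrum method). It is a minimal sketch: the actual camera-in-the-loop system replaces the simulated propagation with physical captures from the display, and all parameter values here are assumptions.

```python
# Minimal sketch: SGD-based computer-generated holography through a
# differentiable angular-spectrum propagation model.
import torch

def angular_spectrum(field, wavelength, z, pitch):
    """Free-space propagation of a complex field by distance z [m]."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=pitch)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1 / wavelength**2 - FX**2 - FY**2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * z)  # transfer function of free space
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

target = torch.rand(256, 256)                      # desired amplitude image
phase = torch.zeros(256, 256, requires_grad=True)  # SLM phase pattern
opt = torch.optim.Adam([phase], lr=0.05)
for _ in range(200):
    field = torch.exp(1j * phase)                  # phase-only SLM
    recon = angular_spectrum(field, 520e-9, 0.1, 8e-6).abs()
    loss = torch.nn.functional.mse_loss(recon, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```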


Registration is required to attend. The talks will be presented via Zoom, and you will receive a Zoom meeting URL when you register for the presentation.

Date and Time: 
Wednesday, February 10, 2021 - 4:30pm

SCIEN and EE292E present "Computational Imaging: Reconciling Models and Learning"

Topic: 
Computational Imaging: Reconciling Models and Learning
Abstract / Description: 

There is a growing need in biological, medical, and materials imaging research to recover information lost during data acquisition. There are currently two distinct viewpoints on addressing such information loss: model-based and learning-based. Model-based methods leverage analytical signal properties (such as sparsity) and often come with theoretical guarantees and insights. Learning-based methods leverage flexible representations (such as convolutional neural nets) for the best empirical performance through training on big datasets. The goal of this talk is to introduce a Regularization by Artifact Removal (RARE) framework that reconciles both viewpoints by providing the "deep learning prior" counterpart of classical regularized inversion. This is achieved by specifying "artifact-removing deep neural nets" as a mechanism for infusing learned priors into recovery problems, while maintaining a clear separation between the prior and the physics-based acquisition model. Our methodology can fully leverage the flexibility offered by deep learning by designing learned priors to be used within our new family of fast iterative algorithms. Our results indicate that such algorithms can achieve state-of-the-art performance on different computational imaging tasks, while also being amenable to rigorous theoretical analysis. We will focus on applying the methodology to various biomedical imaging modalities, such as magnetic resonance imaging and intensity diffraction tomography.
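
The separation described above can be illustrated with the generic fixed-point iteration used by RED/RARE-style methods, where the learned prior enters only through an artifact-removing operator D while the forward model A stays explicit. The sketch below is illustrative, with toy operators standing in for real MRI or diffraction-tomography physics and for a trained network.

```python
# Minimal sketch of a RED/RARE-style iteration: physics (A, At) and the
# learned prior (D) stay cleanly separated.
import numpy as np

def rare_iteration(y, A, At, D, x0, gamma=1.0, tau=0.5, iters=100):
    """Recover x from y = A(x) + noise using an artifact remover D.
    A, At: forward operator and its adjoint; D: denoising network."""
    x = x0.copy()
    for _ in range(iters):
        grad_data = At(A(x) - y)       # gradient of 0.5 * ||A(x) - y||^2
        grad_prior = tau * (x - D(x))  # residual of the learned prior
        x = x - gamma * (grad_data + grad_prior)
    return x

# Toy usage with an identity forward model and a smoothing "denoiser".
y = np.random.rand(64)
x = rare_iteration(y, A=lambda v: v, At=lambda v: v,
                   D=lambda v: 0.5 * (v + v.mean()), x0=np.zeros(64))
```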


Registration is required to attend. The talks will be presented via Zoom, and you will receive a Zoom meeting URL when you register for the presentation.

Date and Time: 
Wednesday, February 3, 2021 - 4:30pm

SCIEN and EE292E present "Recent advances and current challenges of graphics for fully immersive augmented and virtual reality"

Topic: 
Recent advances and current challenges of graphics for fully immersive augmented and virtual reality
Abstract / Description: 

TBA


Registration is required to attend. The talks will be presented via Zoom, and you will receive a Zoom meeting URL when you register for the presentation.

Date and Time: 
Wednesday, January 27, 2021 - 4:30pm

SCIEN and EE292E present "Holographic optics for AR/VR"

Topic: 
Holographic optics for AR/VR
Abstract / Description: 

Holographic optics are an exciting tool to increase the performance and reduce the size and weight of augmented and virtual reality displays. In this talk, I will describe two types of holographic optics that can be applied to AR/VR, as recently outlined in two ACM SIGGRAPH publications. In the first part, I will describe how static holographic optics can be used to replace conventional optical elements, such as refractive lenses, to enable highly compact, sunglasses-like virtual reality displays while retaining high performance. In the second part, I'll describe the potential of dynamic holography to replace the conventional image formation process and enable compact and high performance augmented reality displays. In particular, I will focus on a key challenge of dynamic holographic displays, limited etendue, and present a candidate solution to increase etendue through the co-design of a simple scattering mask and hologram optimization.
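
As background on the etendue limit mentioned above: for a phase SLM with pixel pitch p, the maximum diffraction half-angle is arcsin(lambda / 2p), so the etendue of an SLM with N pixels per side scales with (N * lambda)^2; pixel count, not panel size, caps the product of eyebox area and field of view. A back-of-the-envelope calculation with illustrative numbers (not the talk's hardware):

```python
# Illustrative etendue calculation for a phase SLM (textbook diffraction
# relations; all numbers are assumptions).
import numpy as np

wavelength = 520e-9   # green light [m]
pitch = 8e-6          # SLM pixel pitch [m]
n_pixels = 2048       # pixels per side

theta_max = np.arcsin(wavelength / (2 * pitch))  # max diffraction half-angle
fov_deg = np.degrees(2 * theta_max)
etendue = (n_pixels * wavelength) ** 2           # (aperture x angular extent)^2
print(f"FOV ~ {fov_deg:.1f} deg, etendue ~ {etendue * 1e6:.2f} mm^2*sr")
```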

Date and Time: 
Wednesday, January 13, 2021 - 4:30pm
Venue: 
Zoom registration required
