SCIEN Talk

SCIEN and EE292E present "Recent developments in GatedVision Imaging -- seeing the unseen"

Topic: 
Recent developments in GatedVision Imaging -- seeing the unseen
Abstract / Description: 

Imaging is the basic building block of automotive autonomous driving: any computer-vision system requires a good input image in all driving conditions. GatedVision provides an extra layer on top of a regular RGB/RCCB sensor to augment it at night and in harsh weather. GatedVision images captured in darkness and in a variety of weather conditions will be shared. Imagine that you could detect a small target lying on the road with the same reflectivity as the background, meaning no contrast; GatedVision can manipulate the way an image is captured so that this contrast can be extracted. Additional imaging capabilities of GatedVision will also be presented.
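Gated imaging of the kind described here pairs a pulsed illuminator with a sensor that opens only during a delayed window, so each exposure sees a single depth slice and backscatter from fog or rain outside the slice is suppressed. A minimal sketch of the gate-to-range relationship (function name and parameter values are illustrative, not from the talk):

```python
# Range gating: a gate that opens delay_s seconds after the laser pulse and
# stays open for gate_s seconds images only the depth slice [near, far].
C = 299_792_458.0  # speed of light, m/s

def gate_window_range(delay_s, gate_s):
    """Return the (near, far) depth slice in metres selected by the gate.

    Light must travel out and back, hence the factor of 2.
    """
    near = C * delay_s / 2.0
    far = C * (delay_s + gate_s) / 2.0
    return near, far

# A 200 ns delay with a 100 ns gate selects roughly the 30-45 m slice
near, far = gate_window_range(delay_s=200e-9, gate_s=100e-9)
```

Summing or differencing several such slices is one way a gated system can manufacture contrast that a conventional exposure integrates away.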

Date and Time: 
Wednesday, September 22, 2021 - 10:00am

SCIEN Colloquium and EE292E present "Learning-based 3D Computer-Generated Holography"

Topic: 
Learning-based 3D Computer-Generated Holography
Abstract / Description: 

Computer-generated holography (CGH) is fundamental to applications such as biosensing, volumetric displays, optical/acoustic tweezers, security, and many others that require spatial control of intricate optical or acoustic fields. For near-eye displays, CGH offers the opportunity to support true 3D projection in a sunglasses-like display. Yet the conventional approach of computing a true 3D hologram via physical simulation of diffraction and interference is slow and unaware of occlusion. Moreover, experimental results are often inferior to simulations due to non-idealized optical systems, non-linear and non-uniform SLM responses, and image degradation caused by complex-to-phase-only conversion. Together, these computational and hardware-imposed challenges limit the interactivity and realism of the ultimate immersive experience. In this talk, I will describe techniques to mitigate these challenges, including physical simulation algorithms that handle occlusion for RGB-D and more advanced 3D inputs, methods to create large-scale 3D hologram datasets, training of CNNs to speed up complex and phase-only hologram synthesis, and approaches to compensate for hardware limitations. The resulting system can synthesize and display photorealistic 3D holograms in real time using a single consumer-grade GPU, and it runs interactively on an iPhone by leveraging the Neural Engine. I will further discuss possible extensions that could be built on top of the proposed system to support foveated rendering, static pupil expansion, view-dependent effects, and other features.
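The "physical simulation of diffraction" that makes conventional CGH slow is typically an FFT-based propagation such as the angular spectrum method; a minimal numpy sketch, with band-limiting and sampling subtleties omitted (the function name and parameters are illustrative):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z via the angular spectrum method.

    The field is decomposed into plane waves with an FFT, each plane wave is
    multiplied by a propagation phase, and the result is transformed back.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Transfer function H = exp(i 2 pi z sqrt(1/lambda^2 - fx^2 - fy^2));
    # evanescent components (negative argument) are set to zero.
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0, np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sanity check: propagating by zero distance leaves the field unchanged.
src = np.exp(1j * np.random.rand(64, 64))
out = angular_spectrum_propagate(src, wavelength=633e-9, dx=8e-6, z=0.0)
```

A true 3D hologram repeats this propagation for many depth layers per frame, which is why learned CNN approximations of the forward model yield such large speedups.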

Date and Time: 
Wednesday, June 2, 2021 - 4:30pm

SCIEN and EE292E presents "Real-Time Ray Tracing and the Reinvention of the Graphics Pipeline"

Topic: 
Real-Time Ray Tracing and the Reinvention of the Graphics Pipeline
Abstract / Description: 

For many years, real-time ray tracing was the technology of the future; in 2008, David Kirk famously quipped that it always would be. There were plenty of reasons to doubt that the approach would be suitable for real-time rendering, many of them firmly believed by the speaker. Yet dedicated hardware for ray tracing has now arrived in recent GPUs. Its greatest successes so far have come not from the direct application of existing offline ray-tracing algorithms to real-time use, but from the reinvention of fundamental rendering algorithms to account for the constraints of real-time rendering. In this talk, I will survey the history of real-time ray tracing and some of the near misses along the way. I'll then discuss how real-time rendering is changing with the high-performance, arbitrary visibility queries that ray tracing offers.

Date and Time: 
Wednesday, May 26, 2021 - 4:30pm

SCIEN and EE292E present "Mantis shrimp–inspired organic photodetector"

Topic: 
Mantis shrimp–inspired organic photodetector for simultaneous hyperspectral and polarimetric imaging, enabling advanced single-pixel architectures
Abstract / Description: 

Many spectral and polarimetric cameras implement complex spatial, temporal, and spectral re-mapping strategies to measure a signal within a given use-case's specifications and error tolerances. This re-mapping results in a complex tradespace that is challenging to navigate; a tradespace driven, in part, by the limited degrees of freedom available in inorganic detector technology. This presentation overviews a new kind of organic detector and pixel architecture that enables single-pixel tandem detection of both spectrum and polarization. By using organic detectors' semitransparency and intrinsic anisotropy, the detector minimizes spatial and temporal resolution tradeoffs while showcasing thin-film polarization control strategies.
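The "intrinsic anisotropy" mentioned above means each organic layer responds to linearly polarized light roughly according to Malus's law, so stacked semitransparent layers with different transmission axes can sense polarization at a single pixel. A toy model of one such anisotropic detector (the extinction value, function name, and parameters are illustrative assumptions, not measured device behavior):

```python
import numpy as np

def detector_response(aol_deg, axis_deg, extinction=0.2):
    """Idealized anisotropic detector response to linearly polarized light.

    aol_deg: angle of linear polarization of the incident light (degrees).
    axis_deg: orientation of the detector's sensitive axis (degrees).
    extinction: residual response in the fully crossed state (assumed value).
    Follows Malus's law, I ~ cos^2(aol - axis), above the extinction floor.
    """
    return extinction + (1.0 - extinction) * np.cos(np.radians(aol_deg - axis_deg)) ** 2

# Two stacked layers with orthogonal axes viewing 30-degree polarized light
r0 = detector_response(aol_deg=30.0, axis_deg=0.0)    # aligned-ish layer
r90 = detector_response(aol_deg=30.0, axis_deg=90.0)  # crossed-ish layer
```

Comparing the two responses is enough to constrain the polarization angle without the spatial mosaics or temporal filter wheels the abstract describes as the usual tradeoff.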

Date and Time: 
Wednesday, May 19, 2021 - 4:30pm

SCIEN and EE292E present "Photographic Image Priors in the Era of Machine Learning"

Topic: 
Photographic Image Priors in the Era of Machine Learning
Abstract / Description: 

Prior probability models are a central component of the statistical formulation of inverse problems, but density estimation is a notoriously difficult problem for high-dimensional signals such as photographic images. Machine learning methods have produced impressive solutions for many inverse problems, greatly surpassing those achievable with simple prior models, but these are often not well understood and don't generalize well beyond their training context. About a decade ago, a new approach known as "plug-and-play" was proposed, in which a denoiser is used as an algorithmic component for imposing prior information. I'll describe our progress in understanding and using this implicit prior. We derive a surprisingly simple algorithm for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind (i.e., unknown noise level) least-squares Gaussian denoising. A generalization of this algorithm to constrained sampling provides a method for solving *any* linear inverse problem, with no additional training and no further distributional assumptions. We demonstrate this general form of transfer learning in multiple applications, using the same algorithm to produce state-of-the-art solutions for deblurring, super-resolution, and compressive sensing. I'll also discuss extensions to visualizing information capture in foveated visual systems. This is joint work with Zahra Kadkhodaie, Sreyas Mohan, and Carlos Fernandez-Granda.
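The "implicit prior" idea rests on a classical identity (Miyasawa/Tweedie): the residual of an MMSE Gaussian denoiser equals the noise variance times the score of the noisy density, f(y) - y = sigma^2 * grad_y log p(y). A toy numerical check using a 1-D Gaussian prior, where the MMSE denoiser has a closed form (the scalar values are arbitrary choices for illustration):

```python
import numpy as np

# Prior x ~ N(0, s^2); observation y = x + n with n ~ N(0, sigma^2).
s, sigma = 2.0, 0.5

def mmse_denoiser(y):
    # Posterior mean E[x | y] for a Gaussian prior under Gaussian noise:
    # a simple shrinkage toward zero.
    return (s**2 / (s**2 + sigma**2)) * y

def score(y):
    # grad_y log p(y), where the noisy marginal is p(y) = N(0, s^2 + sigma^2).
    return -y / (s**2 + sigma**2)

y = np.linspace(-3.0, 3.0, 101)
residual = mmse_denoiser(y) - y   # should equal sigma^2 * score(y) exactly
```

For a CNN denoiser the same identity holds approximately, which is what lets a single trained denoiser stand in for the gradient of a learned image prior inside sampling and inverse-problem solvers.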

Date and Time: 
Wednesday, May 12, 2021 - 4:30pm

SCIEN and EE292E present "Pushing the Boundaries of Novel View Synthesis"

Topic: 
Pushing the Boundaries of Novel View Synthesis
Abstract / Description: 

2020 was a turbulent year, but for 3D learning it was a fruitful one, with lots of new tools and ideas. In particular, there have been many exciting developments in the area of coordinate-based neural networks and novel view synthesis. In this talk I will discuss our recent work on single-image view synthesis with pixelNeRF, which aims to predict a Neural Radiance Field (NeRF) from a single image. I will discuss how the NeRF representation allows models like pixel-aligned implicit functions (PiFu) to be trained without explicit 3D supervision, and the importance of other key design factors such as predicting in the view coordinate frame and handling multi-view inputs. I will also touch upon our recent work that allows real-time rendering of NeRFs. Then, I will discuss Infinite Nature, a project in collaboration with teams at Google NYC, where we explore how to push the boundaries of novel view synthesis and generate views well beyond the edges of the initial input image, resulting in controllable video generation of a natural scene.
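A NeRF is turned into pixels with a standard emission-absorption quadrature along each camera ray, C = sum_i T_i (1 - exp(-sigma_i * delta_i)) c_i, where T_i is the transmittance accumulated before sample i. A minimal numpy sketch of that compositing step (names and shapes are illustrative):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and RGB colors along one ray.

    sigmas: (N,) volume densities at the samples.
    colors: (N, 3) RGB colors at the samples.
    deltas: (N,) distances between adjacent samples.
    Returns the rendered RGB value and the per-sample weights.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each segment
    # Transmittance T_i: product of (1 - alpha_j) for all samples j < i.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors, weights

# A single fully opaque red sample renders as pure red with weight 1.
rgb, w = volume_render(np.array([1e9]),
                       np.array([[1.0, 0.0, 0.0]]),
                       np.array([1.0]))
```

The real-time NeRF work mentioned above is largely about precomputing or factorizing this integral so it no longer requires hundreds of network evaluations per ray.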

Date and Time: 
Wednesday, May 5, 2021 - 4:30pm

SCIEN and EE292E present "Neural Representations: Coordinate Based Networks for Fitting Signals, Derivatives, and Integrals"

Topic: 
Neural Representations: Coordinate Based Networks for Fitting Signals, Derivatives, and Integrals
Abstract / Description: 

Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and they fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solutions to partial differential equations. In this talk, we describe how sinusoidal representation networks, or SIRENs, are ideally suited for representing complex natural signals and their derivatives. Using SIRENs, we demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. While SIRENs can be used to fit signals and their derivatives, we also introduce a new framework for solving integral equations using implicit neural representations. Our automatic integration framework, AutoInt, enables the calculation of any definite integral with two evaluations of a neural network. We apply this approach to efficient integration for neural volume rendering. Finally, we present a novel architecture and training procedure able to fit data such as gigapixel images or finely detailed 3D geometry, demonstrating that these neural representations are now ready to be used in large-scale scenarios.
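The reason SIRENs handle derivatives well is visible in one line of calculus: the derivative of sin(w0(Wx + b)) is a cosine, i.e., another shifted sinusoid, so a SIREN's derivative is itself a well-behaved SIREN-like network. (This is also the machinery AutoInt exploits: train a network G whose derivative matches the integrand f, then any definite integral is just G(b) - G(a).) A tiny numpy sketch comparing the analytic derivative of a one-hidden-layer SIREN against finite differences (weights are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
w0 = 30.0                               # SIREN frequency scale
W = rng.normal(size=16) / 16.0          # input weights (1 -> 16)
b = rng.normal(size=16)                 # biases
v = rng.normal(size=16)                 # linear readout weights

def siren(x):
    # One hidden SIREN layer, sin(w0 * (W x + b)), then a linear readout.
    return np.sin(w0 * (x * W + b)) @ v

def siren_grad(x):
    # Analytic derivative: cos(w0 (W x + b)) * w0 * W -- again sinusoidal,
    # which is why SIRENs represent a signal's derivatives gracefully.
    return (np.cos(w0 * (x * W + b)) * w0 * W) @ v

x0 = 0.3
numeric = (siren(x0 + 1e-6) - siren(x0 - 1e-6)) / 2e-6
```

A ReLU network, by contrast, has a piecewise-constant first derivative and a zero second derivative almost everywhere, so it cannot supervise or represent higher-order derivative constraints at all.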

Date and Time: 
Wednesday, April 28, 2021 - 4:30pm

SCIEN and EE292E present "The Chromatic Pyramid of Visibility"

Topic: 
The Chromatic Pyramid of Visibility
Abstract / Description: 

A fundamental limit to human vision is our ability to sense variations in light intensity over space and time. These limits have been formalized in the spatio-temporal contrast sensitivity function, which is now a foundation of vision science. This function has also proven to be the foundation of much applied vision science, providing guidance on spatial and temporal resolution for modern imaging technology. The Pyramid of Visibility is a simplified model of the human spatio-temporal luminance contrast sensitivity function (Watson & Ahumada, 2016). It posits that log sensitivity is a linear function of spatial frequency, temporal frequency, and log mean luminance. It is valid only away from the spatiotemporal frequency origin. It has recently been extended to peripheral vision to define the Field of Contrast Sensitivity (Watson, 2018). Though very useful in a range of applications, the pyramid would benefit from an extension to the chromatic domain. In this talk I will describe our efforts to develop this extension. Among the issues we address are the choice of color space, the definition of color contrast, and how to combine sensitivities among the luminance and chromatic pyramids.

Watson, A. B. (2018). "The Field of View, the Field of Resolution, and the Field of Contrast Sensitivity." Journal of Perceptual Imaging 1(1): 10505-10501-10505-10511.
Watson, A. B. and A. J. Ahumada (2016). "The pyramid of visibility." Electronic Imaging 2016(16): 1-6.
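The pyramid's linearity claim can be written directly as log S = c0 + cW*W + cF*F + cL*log L, for spatial frequency W, temporal frequency F, and mean luminance L. A sketch of that planar model (the coefficient values below are hypothetical placeholders, not the fitted values from Watson & Ahumada, 2016):

```python
import numpy as np

# Hypothetical coefficients for illustration only; see Watson & Ahumada
# (2016) for fitted values. cW and cF are negative: sensitivity falls
# with spatial and temporal frequency.
c0, cW, cF, cL = 4.0, -0.05, -0.04, 0.2

def log_sensitivity(spatial_freq, temporal_freq, luminance):
    """Pyramid of Visibility: log sensitivity is linear in spatial
    frequency, temporal frequency, and log mean luminance (valid only
    away from the spatiotemporal frequency origin)."""
    return c0 + cW * spatial_freq + cF * temporal_freq + cL * np.log10(luminance)

# Raising spatial frequency from 4 to 16 cpd at fixed F and L lowers
# log sensitivity by exactly cW * 12 under this model.
lo_freq = log_sensitivity(4.0, 8.0, 100.0)
hi_freq = log_sensitivity(16.0, 8.0, 100.0)
```

The chromatic extension discussed in the talk amounts to fitting additional such planes for chromatic channels and deciding how the luminance and chromatic pyramids combine.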

Date and Time: 
Wednesday, April 21, 2021 - 4:30pm

SCIEN and EE292E present "Neural Implicit Representations for 3D Vision"

Topic: 
Neural Implicit Representations for 3D Vision
Abstract / Description: 

In this talk, I will show several recent results from my group on learning neural implicit 3D representations, departing from the traditional paradigm of representing 3D shapes explicitly using voxels, point clouds, or meshes. Implicit representations have a small memory footprint and allow for modeling arbitrary 3D topologies at (theoretically) arbitrary resolution in a continuous function space. I will show the capabilities and limitations of these approaches in the context of reconstructing 3D geometry, texture, and motion. I will further demonstrate a technique for learning implicit 3D models using only 2D supervision, through implicit differentiation of the level-set constraint. Finally, I will demonstrate how implicit models can tackle large-scale reconstructions, and I will introduce GRAF and GIRAFFE, generative 3D models for neural radiance fields that are able to generate 3D-consistent photorealistic renderings from unstructured and unposed image collections.
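The core idea of an implicit representation is that a surface is stored as a function rather than as geometry: a shape is the level set {x : f(x) = 0}, queryable at any point, so "resolution" is limited only by the function, not by a grid. A minimal analytic example using a sphere's signed distance function (in the learned setting, a neural network replaces this closed form):

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, zero on the surface,
    positive outside. The surface is the zero level set of this function."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Query three points along the x-axis against a unit sphere at the origin.
pts = np.array([[0.0, 0.0, 0.0],   # center: inside
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside
d = sphere_sdf(pts, center=np.zeros(3), radius=1.0)
```

The "implicit differentiation of the level-set constraint" in the abstract refers to differentiating through this f(x) = 0 condition, which is what enables training such models from 2D images alone.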

Date and Time: 
Wednesday, April 14, 2021 - 11:00am

SCIEN and EE292E present "Slow Glass"

Topic: 
Slow Glass
Abstract / Description: 

Wouldn't it be fascinating to be in the same room as Abraham Lincoln, visit Thomas Edison in his laboratory, or step onto the streets of New York a hundred years ago? We explore this thought experiment by tracing ideas from science fiction, through antique stereographs, to the latest work in generative adversarial networks (GANs), stepping back in time to experience these historical people and places not in black and white, but much closer to how they really appeared. In the process, I'll present our latest work on Keystone Depth and Time Travel Rephotography.

Date and Time: 
Wednesday, April 7, 2021 - 4:30pm
