SCIEN Talk

SCIEN presents "Practical 2D to 3D Image Conversion"

Topic: 
Practical 2D to 3D Image Conversion
Abstract / Description: 

We will discuss techniques for converting 2D images to 3D meshes on mobile devices, including efficient computation of both dense and sparse depth maps, conversion of depth maps into 3D meshes, mesh inpainting, and post-processing. We focus on the different CNN designs used to solve each step of the processing pipeline and examine common failure modes. Finally, we will look at the practical deployment of image processing algorithms and CNNs in mobile apps, and at how Lucid uses cloud processing to balance processing power with latency.
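
For illustration, the depth-map-to-mesh step can be sketched in a few lines of Python (a minimal sketch, not Lucid's implementation; the pinhole intrinsics fx, fy, cx, cy and the grid triangulation are generic assumptions): each pixel is back-projected to a 3D vertex and each pixel quad is split into two triangles.

import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    # Back-project each pixel of a dense depth map to a 3D vertex
    # with pinhole intrinsics, then triangulate the pixel grid.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    verts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1], idx[:-1, 1:]  # top-left, top-right of each quad
    c, d = idx[1:, :-1], idx[1:, 1:]    # bottom-left, bottom-right
    faces = np.concatenate([
        np.stack([a, c, b], axis=-1).reshape(-1, 3),  # two triangles per quad
        np.stack([b, c, d], axis=-1).reshape(-1, 3),
    ])
    return verts, faces

In practice, triangles that span large depth discontinuities are discarded and the resulting holes are filled by mesh inpainting, as the abstract describes.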

Date and Time: 
Wednesday, May 13, 2020 - 4:30pm
Venue: 
Zoom (join the SCIEN mailing list to receive the meeting ID)

SCIEN Seminar presents "Bio-inspired depth sensing using computational optics"

Topic: 
Bio-inspired depth sensing using computational optics
Abstract / Description: 

Jumping spiders rely on accurate depth perception for predation and navigation. They accomplish depth perception, despite their tiny brains, by using specialized optics. Each principal eye includes a multitiered retina that simultaneously receives multiple images with different amounts of defocus, and distance is decoded from these images with seemingly little computation. In this talk, I will introduce two depth sensors that are inspired by jumping spiders. They use computational optics and build upon previous depth-from-defocus algorithms in computer vision. Both sensors operate without active illumination, and they are both monocular and computationally efficient.
The first sensor synchronizes an oscillating deformable lens with a photosensor. It produces depth and confidence maps at more than 100 frames per second and can extend its working range through optical accommodation. The second sensor uses a custom-designed metalens, an ultra-thin device whose 2D nanostructures modulate traversing light. The metalens splits incoming light and simultaneously forms two differently defocused images on a planar photosensor, allowing depth and confidence to be computed efficiently from a single snapshot.
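
The decoding idea behind both sensors, recovering depth from a pair of differently defocused images, can be sketched with a common differential depth-from-defocus heuristic (an assumption for illustration, not the speaker's exact algorithm): the difference between the two images, normalized by the Laplacian of their mean, varies monotonically with distance, and the Laplacian magnitude doubles as a confidence map.

import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def depth_cue_from_defocus_pair(i1, i2, sigma=2.0, eps=1e-6):
    # Differential defocus cue: (I1 - I2) / Laplacian((I1 + I2)/2),
    # computed as a regularized ratio; a per-device calibration
    # would map this cue to metric depth.
    diff = gaussian_filter(i1 - i2, sigma)
    lap = gaussian_filter(laplace(0.5 * (i1 + i2)), sigma)
    cue = diff * lap / (lap ** 2 + eps)
    confidence = np.abs(lap)  # reliable only where the scene has contrast
    return cue, confidence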

Date and Time: 
Wednesday, May 6, 2020 - 4:30pm
Venue: 
Zoom (join the SCIEN mailing list to receive the meeting ID)

SCIEN Seminar presents "Insight into the inner workings of Intel’s Stereo and Lidar Depth Cameras"

Topic: 
Insight into the inner workings of Intel’s Stereo and Lidar Depth Cameras
Abstract / Description: 

This talk will provide an overview of the technology and capabilities of Intel's RealSense Stereo and Lidar Depth Cameras, then describe new features such as high-speed capture, multi-camera enhancements, optical filtering, and near-range high-resolution depth imaging. Finally, we will introduce a new fast on-chip calibration method that improves stereo camera performance and helps mitigate common stereo artifacts.
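
As background, the way any stereo depth camera turns disparity into distance can be sketched with generic off-the-shelf matching (a sketch using OpenCV, not Intel's on-camera pipeline; focal_px, baseline_m, and the SGBM settings are placeholder assumptions):

import cv2

def stereo_depth(left, right, focal_px, baseline_m, num_disp=128):
    # Semi-global block matching on a rectified grayscale pair.
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=num_disp, blockSize=5)
    disp = matcher.compute(left, right).astype("float32") / 16.0  # fixed point
    disp[disp <= 0] = float("nan")  # mask invalid or occluded matches
    return focal_px * baseline_m / disp  # depth Z = f * B / d

Because depth scales with focal length and baseline, small calibration errors bias every estimate, which is why the on-chip calibration discussed in the talk matters.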

Date and Time: 
Wednesday, April 29, 2020 - 4:30pm
Venue: 
Zoom (join the SCIEN mailing list to receive the meeting ID)

SCIEN Seminar presents "The Extreme Science of Building High-Performing Compact Cameras for Space Applications"

Topic: 
The Extreme Science of Building High-Performing Compact Cameras for Space Applications
Abstract / Description: 

A thickening flock of earth-observing satellites blankets the planet. Over 700 were launched during the past 10 years, and more than 2,200 are scheduled to go up within the next 10. At the same time, satellite platforms and instruments are being miniaturized year after year to improve cost-efficiency, while expectations of high spatial and spectral resolution remain unchanged. But what does it take to build imaging systems that perform well in the harsh environment of space while staying compact and cost-efficient? This talk will touch upon the technical issues associated with the design, fabrication, and characterisation of such cameras, taking as an example the imager that Pixxel has built as part of its planned earth-imaging satellite constellation.

Date and Time: 
Wednesday, April 22, 2020 - 4:30pm
Venue: 
Zoom (join the SCIEN mailing list to receive the meeting ID)

SCIEN Seminar presents "The Role of Fundamental Limits in 3D Imaging Systems: From Looking around Corners to Fast 3D Cameras"

Topic: 
The Role of Fundamental Limits in 3D Imaging Systems: From Looking around Corners to Fast 3D Cameras
Abstract / Description: 

Knowledge of limits is a precious commodity in computational imaging: by knowing that our imaging device already operates at the physical limit (e.g., of resolution), we can avoid unnecessary investments in better hardware, such as faster detectors, better optics, or cameras with higher pixel resolution. Moreover, limits often appear as uncertainty products, making it possible to bargain with nature for a better measurement by sacrificing less important information.
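
A familiar instance of such an uncertainty product (standard Fourier analysis, offered here only as an illustration) is the trade-off between the spatial extent of a signal and the extent of its spatial-frequency content:

\[ \sigma_x \, \sigma_{k_x} \;\ge\; \tfrac{1}{2} \]

Narrowing one factor necessarily broadens the other, which is the kind of bargain with nature referred to above.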

In this talk, the role of physical and information limits in computational imaging will be discussed using examples from two of my recent projects: 'Synthetic Wavelength Holography' and the 'Single-Shot 3D Movie Camera'.

Synthetic Wavelength Holography is a novel method to image hidden objects around corners and through scattering media. While other approaches rely on time-of-flight detectors, which suffer from technical limitations in spatial and temporal resolution, Synthetic Wavelength Holography works at the physical limit of the space-bandwidth product. I will present full-field measurements of hidden objects around corners and through scatterers that reach sub-mm resolution.
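
For context, the synthetic wavelength underlying such methods follows a standard interferometry relation (not taken from the talk itself): two optical wavelengths \lambda_1 and \lambda_2, measured jointly, behave like a single much longer wavelength

\[ \Lambda = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert}, \]

which relaxes phase-measurement requirements enough to tolerate rough surfaces and scattering while retaining interferometric depth sensitivity.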

The single-shot 3D movie camera is a highly precise 3D sensor for the measurement of fast macroscopic live scenes. From each 1 Mpix camera frame, the sensor delivers 300,000 independent 3D points with high resolution. The single-shot ability allows for continuous 3D measurement of fast-moving or deforming objects, resulting in a continuous 3D movie. Like a hologram, each movie frame encompasses the full 3D information about the object surface, and the observation perspective can be varied while watching the 3D movie.

Date and Time: 
Wednesday, April 15, 2020 - 4:30pm
Venue: 
Zoom (join the SCIEN mailing list to receive the meeting ID)

SCIEN Seminar presents "Image recovery with untrained convolutional neural networks"

Topic: 
Image recovery with untrained convolutional neural networks
Abstract / Description: 

Convolutional neural networks are highly successful tools for image recovery and restoration. A major contributing factor to this success is that convolutional networks impose prior assumptions about natural images that are so strong they enable image recovery without any training data. A surprising observation that highlights these prior assumptions is that one can remove noise from a corrupted natural image simply by fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the noisy image.
In this talk, we discuss a simple untrained convolutional network, called the deep decoder, that provably enables image denoising and regularization of inverse problems such as compressive sensing, with excellent performance. We formally characterize the dynamics of fitting this convolutional network to a noisy signal and to an under-sampled signal, and show that in both cases early-stopped gradient descent provably recovers the clean signal. Finally, we discuss numerical results, from our group and another, demonstrating that untrained convolutional networks enable magnetic resonance imaging from highly under-sampled measurements.
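
A minimal sketch of this idea in PyTorch (assuming a deep-decoder-style architecture of 1x1 convolutions and bilinear upsampling; the layer count, channel width, and step count are illustrative, not the talk's exact settings):

import torch
import torch.nn as nn

class DeepDecoder(nn.Module):
    # Untrained convolutional generator: fixed random input, 1x1 convs,
    # bilinear upsampling; no training data is involved anywhere.
    def __init__(self, channels=64, layers=4, out_channels=3):
        super().__init__()
        blocks = []
        for _ in range(layers):
            blocks += [nn.Upsample(scale_factor=2, mode="bilinear",
                                   align_corners=False),
                       nn.Conv2d(channels, channels, 1),
                       nn.ReLU(),
                       nn.BatchNorm2d(channels)]
        blocks += [nn.Conv2d(channels, out_channels, 1), nn.Sigmoid()]
        self.net = nn.Sequential(*blocks)

    def forward(self, z):
        return self.net(z)

def denoise(noisy, steps=1500, lr=0.01):
    # Fit the generator to the noisy image (shape (1, 3, H, W) in [0, 1],
    # with H and W divisible by 16); early-stopped gradient descent is
    # the regularizer that yields the clean estimate.
    _, _, h, w = noisy.shape
    z = torch.randn(1, 64, h // 16, w // 16)  # fixed random input code
    net = DeepDecoder()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - noisy) ** 2).mean()
        loss.backward()
        opt.step()
    return net(z).detach()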

Date and Time: 
Wednesday, April 8, 2020 - 4:30pm
Venue: 
Zoom (join the SCIEN mailing list to receive the meeting ID)

SCIEN and EE292E present "An Integrated 6DoF Video Camera and System Design"

Topic: 
An Integrated 6DoF Video Camera and System Design
Abstract / Description: 

Designing a fully integrated 360° video camera supporting 6DoF head motion parallax requires overcoming many technical hurdles, including camera placement, optical design, sensor resolution, system calibration, real-time video capture, depth reconstruction, and real-time novel view synthesis. While there is a large body of work describing various system components, such as multi-view depth estimation, our paper is the first to describe a complete, reproducible system that considers the challenges arising when designing, building, and deploying a full end-to-end 6DoF video camera and playback environment. Our system includes a computational imaging software pipeline supporting online markerless calibration, high-quality reconstruction, and real-time streaming and rendering. Most of our exposition is based on a professional 16-camera configuration, which will be commercially available to film producers. However, our software pipeline is generic and can handle a variety of camera geometries and configurations. The entire calibration and reconstruction software pipeline, along with example datasets, is open-sourced to encourage follow-up research in high-quality 6DoF video reconstruction and rendering.

More information: An Integrated 6DoF Video Camera and System Design

Open source repository: https://github.com/facebook/facebook360_dep

Date and Time: 
Wednesday, March 4, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Deep Learning for Practical and Robust View Synthesis"

Topic: 
Deep Learning for Practical and Robust View Synthesis
Abstract / Description: 

I will present recent work ("Local Light Field Fusion") on a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Our view synthesis algorithm operates on an irregular grid of sampled views, first expanding each sampled view into a local light field via a multiplane image (MPI) scene representation, then rendering novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we can apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist-rate view sampling while using up to 4000x fewer views.
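
The core MPI rendering operation, back-to-front "over" compositing of fronto-parallel RGBA planes, is simple to sketch (the array shapes here are assumptions; a novel view would additionally warp each plane by its depth-dependent homography before compositing):

import numpy as np

def composite_mpi(rgb, alpha):
    # rgb: (D, H, W, 3), alpha: (D, H, W, 1); plane 0 is the farthest.
    out = np.zeros(rgb.shape[1:])
    for d in range(rgb.shape[0]):
        out = rgb[d] * alpha[d] + out * (1.0 - alpha[d])  # "over" operator
    return out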

Date and Time: 
Wednesday, February 26, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "A fundamentally new sensing approach for high-level autonomous driving"

Topic: 
A fundamentally new sensing approach for high-level autonomous driving
Abstract / Description: 

It has been fifteen years since Stanford won the DARPA Grand Challenge. Since then, a new race has been underway in the automotive industry to fulfill the ultimate mobility dream: driverless vehicles for all. Elevating test-track vehicles to automotive-grade products has been a humbling experience for everyone. The mainstream approach has relied primarily on brute force for both hardware and software. Sensors in particular have become overly complex and unscalable in response to escalating hardware requirements. To reverse this trend, Perceptive has been developing a fundamentally new sensing platform for fully autonomous vehicles. Based on Digital Remote Imaging and Edge AI, the platform shifts the complexity to the software and scales with the compute. This talk will discuss the system architecture and underlying physics.

Date and Time: 
Wednesday, February 19, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Beyond lenses: Computational imaging with a light-modulating mask"

Topic: 
Beyond lenses: Computational imaging with a light-modulating mask
Abstract / Description: 

The lens has long been a central element of cameras, its role being to refract light so as to achieve a one-to-one mapping between a point in the scene and a point on the sensor. We propose a radical departure from this practice and the limitations it imposes. In this talk I will discuss our recent efforts to build extremely thin imaging devices by replacing the lens in a conventional camera with a light-modulating mask and computational reconstruction algorithms. These lensless cameras can be less than a millimeter thick and enable applications where size, weight, thickness, or cost are the driving factors.
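
Under the simplest shift-invariant model (an illustrative assumption; real mask-based cameras use richer calibrated forward models), the sensor measurement is the scene convolved with the mask's point spread function, and a regularized Fourier-domain inverse recovers the image:

import numpy as np

def capture(scene, psf):
    # Forward model: measurement = scene circularly convolved with the
    # mask's point spread function (both 2D arrays of the same shape).
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

def reconstruct(meas, psf, eps=1e-2):
    # Tikhonov-regularized deconvolution; eps trades noise for sharpness.
    H = np.fft.fft2(psf)
    X = np.conj(H) * np.fft.fft2(meas) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))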

Date and Time: 
Wednesday, February 12, 2020 - 4:30pm
Venue: 
Packard 101
