
SCIEN Talk

SCIEN Seminar presents "Insight into the inner workings of Intel’s Stereo and Lidar Depth Cameras"

Topic: 
Insight into the inner workings of Intel’s Stereo and Lidar Depth Cameras
Abstract / Description: 

This talk will provide an overview of the technology and capabilities of Intel's RealSense Stereo and Lidar Depth Cameras, and will then progress to describe new features, such as high-speed capture, multi-camera enhancements, optical filtering, and near-range high-resolution depth imaging. Finally, we will introduce a new fast on-chip calibration method that can be used to improve the performance of a stereo camera and help mitigate some common stereo artifacts.
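
For readers who want to experiment with these cameras before the talk, the sketch below shows a minimal depth-capture loop using the pyrealsense2 Python wrapper; the stream resolution, format, and frame rate are illustrative choices and are not settings discussed in the talk.

    import numpy as np
    import pyrealsense2 as rs

    # Start a depth stream on an attached RealSense camera (illustrative settings).
    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipeline.start(config)

    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        # Depth in meters at the image center, plus the raw depth map as a NumPy array.
        center_m = depth.get_distance(320, 240)
        depth_raw = np.asanyarray(depth.get_data())
        print(f"depth at center: {center_m:.3f} m, frame shape: {depth_raw.shape}")
    finally:
        pipeline.stop()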

Date and Time: 
Wednesday, April 29, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN Seminar presents "The Extreme Science of Building High-Performing Compact Cameras for Space Applications"

Topic: 
The Extreme Science of Building High-Performing Compact Cameras for Space Applications
Abstract / Description: 

A thickening flock of earth-observing satellites blankets the planet: over 700 were launched during the past 10 years, and more than 2,200 are scheduled to go up within the next 10 years. At the same time, satellite platforms and instruments are being miniaturized year on year to improve cost-efficiency, while expectations of high spatial and spectral resolution remain unchanged. What does it take to build imaging systems that perform well in the harsh environment of space yet remain compact and cost-efficient? This talk will touch on the technical issues associated with the design, fabrication, and characterization of such extremely high-performing but still compact and cost-efficient space cameras, using as an example the imager that Pixxel has built as part of its planned earth-imaging satellite constellation.

Date and Time: 
Wednesday, April 22, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN Seminar presents "The Role of Fundamental Limits in 3D Imaging Systems: From Looking around Corners to Fast 3D Cameras"

Topic: 
The Role of Fundamental Limits in 3D Imaging Systems: From Looking around Corners to Fast 3D Cameras
Abstract / Description: 

Knowledge of limits is a precious commodity in computational imaging: by knowing that our imaging device already operates at the physical limit (e.g., of resolution), we can avoid unnecessary investments in better hardware, such as faster detectors, better optics, or cameras with higher pixel resolution. Moreover, limits often appear as uncertainty products, making it possible to bargain with nature for a better measurement by sacrificing less important information.

In this talk, the role of physical and information limits in computational imaging will be discussed using examples from two of my recent projects: 'Synthetic Wavelength Holography' and the 'Single-Shot 3D Movie Camera'.

Synthetic Wavelength Holography is a novel method for imaging hidden objects around corners and through scattering media. While other approaches rely on time-of-flight detectors, which suffer from technical limitations in spatial and temporal resolution, Synthetic Wavelength Holography works at the physical limit of the space-bandwidth product. Full-field measurements of hidden objects around corners and through scatterers, reaching sub-mm resolution, will be presented.
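
As a rough illustration of the synthetic wavelength idea (the numbers below are made-up examples, not parameters from the talk): interfering light at two closely spaced optical wavelengths yields an effective beat wavelength Lambda = lambda1 * lambda2 / |lambda1 - lambda2| that is much longer than either optical wavelength, and it is this synthetic wavelength that sets the scale of the recoverable depth information.

    # Illustrative only: synthetic wavelength from two closely spaced optical wavelengths.
    lam1 = 854.0e-9                              # meters (example value)
    lam2 = 854.4e-9                              # meters (example value)
    synthetic = lam1 * lam2 / abs(lam1 - lam2)
    print(f"synthetic wavelength: {synthetic * 1e3:.2f} mm")   # about 1.82 mm for these values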

The single-shot 3D movie camera is a highly precise 3D sensor for the measurement of fast macroscopic live scenes. From each 1 Mpix camera frame, the sensor delivers 300,000 independent 3D points with high resolution. The single-shot ability allows for a continuous 3D measurement of fast-moving or deforming objects, resulting in a continuous 3D movie. Like a hologram, each movie frame encompasses the full 3D information about the object surface, and the observation perspective can be varied while watching the 3D movie.

Date and Time: 
Wednesday, April 15, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN Seminar presents "Image recovery with untrained convolutional neural networks"

Topic: 
Image recovery with untrained convolutional neural networks
Abstract / Description: 

Convolutional neural networks are highly successful tools for image recovery and restoration. A major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images, so strong that they enable image recovery without any training data. A surprising observation that highlights these prior assumptions is that one can remove noise from a corrupted natural image by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the noisy image.
In this talk, we discuss a simple untrained convolutional network, called the deep decoder, that provably enables image denoising and regularization of inverse problems such as compressive sensing with excellent performance. We formally characterize the dynamics of fitting this convolutional network to a noisy signal and to an under-sampled signal, and show that in both cases early-stopped gradient descent provably recovers the clean signal. Finally, we discuss our own numerical results, as well as numerical results from another group, demonstrating that untrained convolutional networks enable magnetic resonance imaging from highly under-sampled measurements.
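
As a concrete illustration of the "fit a randomly initialized convolutional generator to the noisy image" idea, here is a minimal PyTorch sketch; the architecture and hyperparameters are placeholder choices for illustration and are not the speaker's deep decoder.

    import torch
    import torch.nn as nn

    def make_generator(channels=64, out_channels=3, layers=4):
        # A simple over-parameterized convolutional generator fed by a fixed random code.
        blocks = []
        for _ in range(layers):
            blocks += [nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
                       nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(),
                       nn.BatchNorm2d(channels)]
        blocks += [nn.Conv2d(channels, out_channels, 3, padding=1), nn.Sigmoid()]
        return nn.Sequential(*blocks)

    def denoise(noisy, steps=1500, lr=0.01):
        # noisy: (1, 3, H, W) tensor in [0, 1]; H and W divisible by 16 for this sketch.
        _, _, h, w = noisy.shape
        net = make_generator()
        code = torch.randn(1, 64, h // 16, w // 16)    # fixed random input code
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(steps):                         # early stopping: keep steps modest
            opt.zero_grad()
            loss = ((net(code) - noisy) ** 2).mean()
            loss.backward()
            opt.step()
        return net(code).detach()                      # denoised estimate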

Date and Time: 
Wednesday, April 8, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN and EE292E present "An Integrated 6DoF Video Camera and System Design"

Topic: 
An Integrated 6DoF Video Camera and System Design
Abstract / Description: 

Designing a fully integrated 360° video camera supporting 6DoF head motion parallax requires overcoming many technical hurdles, including camera placement, optical design, sensor resolution, system calibration, real-time video capture, depth reconstruction, and real-time novel view synthesis. While there is a large body of work describing various system components, such as multi-view depth estimation, our paper is the first to describe a complete, reproducible system that considers the challenges arising when designing, building, and deploying a full end-to-end 6DoF video camera and playback environment. Our system includes a computational imaging software pipeline supporting online markerless calibration, high-quality reconstruction, and real-time streaming and rendering. Most of our exposition is based on a professional 16-camera configuration, which will be commercially available to film producers. However, our software pipeline is generic and can handle a variety of camera geometries and configurations. The entire calibration and reconstruction software pipeline along with example datasets is open sourced to encourage follow-up research in high-quality 6DoF video reconstruction and rendering.

More information: An Integrated 6DoF Video Camera and System Design

Open source repository: https://github.com/facebook/facebook360_dep

Date and Time: 
Wednesday, March 4, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Deep Learning for Practical and Robust View Synthesis"

Topic: 
Deep Learning for Practical and Robust View Synthesis
Abstract / Description: 

I will present recent work ("Local Light Field Fusion") on a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Our view synthesis algorithm operates on an irregular grid of sampled views, first expanding each sampled view into a local light field via a multiplane image (MPI) scene representation, then rendering novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we can apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views.
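
As background on the multiplane image representation mentioned above, the sketch below shows the standard back-to-front "over" compositing used to render an MPI into an image; the per-plane colors and opacities are assumed inputs, and the warp of each plane into a novel viewpoint is omitted, so this is not the authors' full pipeline.

    import numpy as np

    def composite_mpi(rgb, alpha):
        # rgb:   (D, H, W, 3) color for D fronto-parallel planes, ordered back to front
        # alpha: (D, H, W, 1) per-plane opacity in [0, 1]
        out = np.zeros_like(rgb[0])
        for color, a in zip(rgb, alpha):      # back-to-front "over" compositing
            out = color * a + out * (1.0 - a)
        return out                            # (H, W, 3) rendered image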

Date and Time: 
Wednesday, February 26, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "A fundamentally new sensing approach for high-level autonomous driving"

Topic: 
A fundamentally new sensing approach for high-level autonomous driving
Abstract / Description: 

It has been fifteen years since Stanford won the DARPA Grand Challenge. Since then, a new race has been underway in the automotive industry to fulfill the ultimate mobility dream: driverless vehicles for all. Trying to elevate test-track vehicles to automotive-grade products has been a humbling experience for everyone. The mainstream approach has relied primarily on brute force for both hardware and software. Sensors in particular have become overly complex and unscalable in response to escalating hardware requirements. To reverse this trend, Perceptive has been developing a fundamentally new sensing platform for fully autonomous vehicles. Based on Digital Remote Imaging and Edge AI, the platform shifts the complexity to the software and scales with the compute. This talk will discuss the system architecture and underlying physics.

Date and Time: 
Wednesday, February 19, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Beyond lenses: Computational imaging with a light-modulating mask"

Topic: 
Beyond lenses: Computational imaging with a light-modulating mask
Abstract / Description: 

The lens has long been a central element of cameras; its role is to refract light to achieve a one-to-one mapping between a point in the scene and a point on the sensor. We propose a radical departure from this practice and the limitations it imposes. In this talk I will discuss our recent efforts to build extremely thin imaging devices by replacing the lens in a conventional camera with a light-modulating mask and computational reconstruction algorithms. These lensless cameras can be less than a millimeter thick and enable applications where size, weight, thickness, or cost are the driving factors.

Date and Time: 
Wednesday, February 12, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Towards Immersive Telepresence: Stereoscopic 360-degree Vision in Realtime"

Topic: 
Towards Immersive Telepresence: Stereoscopic 360-degree Vision in Realtime
Abstract / Description: 

Technological advances in immersive telepresence are greatly impeded by the challenge of conveying a realistic feeling of presence in a remote environment to a local human user. Providing a stereoscopic 360° visual representation of the distant scene further heightens the level of realism and greatly improves task performance. State-of-the-art systems primarily rely on catadioptric or multi-camera rigs to address this issue, but current solutions are bulky, not real-time capable, and tend to produce erroneous image content due to the stitching processes involved, which perform poorly for texture-less scenes. In this talk, I will introduce a vision-on-demand approach that creates stereoscopic scene information upon request. A real-time-capable camera system, along with a novel deep-learning-based delay-compensation paradigm, will be presented that provides instant visual feedback for highly immersive telepresence.

Date and Time: 
Wednesday, February 5, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "DiffuserCam: Lenseless single-exposure 3D imaging"

Topic: 
DiffuserCam: Lensless single-exposure 3D imaging
Abstract / Description: 

Traditional lenses are optimized for 2D imaging, which prevents them from capturing extra dimensions of the incident light field (e.g. depth or high-speed dynamics) without multiple exposures or moving parts. Leveraging ideas from compressed sensing, I replace the lens of a traditional camera with a single pseudorandom free-form optic called a diffuser. The diffuser creates a pseudorandom point spread function which multiplexes these extra dimensions into a single 2D exposure taken with a standard sensor. The image is then recovered by solving a sparsity-constrained inverse problem. This lensless camera, dubbed DiffuserCam, is capable of snapshot 3D imaging at video rates, encoding a high-speed video (>4,500 fps) into a single rolling-shutter exposure, and video-rate 3D imaging of fluorescence signals, such as neurons, in a device weighing under 3 grams.
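
To make the "sparsity-constrained inverse problem" step concrete, here is a minimal 2D sketch using ISTA with an FFT-based convolution model of the diffuser's point spread function; the PSF and measurement are assumed inputs, the regularizer is a simple L1 penalty, and the real system reconstructs a 3D volume rather than a 2D image, so this is a generic solver rather than the speaker's implementation.

    import numpy as np

    def ista_deconvolve(meas, psf, lam=0.01, iters=200):
        # meas, psf: real 2D arrays of the same shape (circular convolution assumed).
        H = np.fft.fft2(np.fft.ifftshift(psf))
        Ht = np.conj(H)
        step = 1.0 / np.max(np.abs(H)) ** 2            # stable gradient step size
        x = np.zeros_like(meas)
        for _ in range(iters):
            resid = np.real(np.fft.ifft2(H * np.fft.fft2(x))) - meas
            grad = np.real(np.fft.ifft2(Ht * np.fft.fft2(resid)))
            x = x - step * grad
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold (L1)
            x = np.maximum(x, 0.0)                                     # nonnegativity
        return x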

Date and Time: 
Wednesday, January 22, 2020 - 4:30pm
Venue: 
Packard 101
