
SCIEN Talk

SCIEN Seminar presents "Image recovery with untrained convolutional neural networks"

Topic: 
Image recovery with untrained convolutional neural networks
Abstract / Description: 

Convolutional Neural Networks are highly successful tools for image recovery and restoration. A major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images—so strong that they enable image recovery without any training data. A surprising observation that highlights those prior assumptions is that one can remove noise from a corrupted natural image by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the noisy image.
In this talk, we discuss a simple untrained convolutional network, called the deep decoder, that provably enables image denoising and regularization of inverse problems such as compressive sensing with excellent performance. We formally characterize the dynamics of fitting this convolutional network to a noisy signal and to an under-sampled signal, and show that in both cases early-stopped gradient descent provably recovers the clean signal. Finally, we discuss our own numerical results and numerical results from another group demonstrating that untrained convolutional networks enable magnetic resonance imaging from highly under-sampled measurements.
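
For readers curious what "fitting a randomly initialized convolutional generator to the noisy image" looks like in practice, here is a minimal PyTorch sketch of the general recipe. The architecture (1x1 convolutions plus bilinear upsampling) only loosely follows the deep decoder; the layer sizes, learning rate, and iteration count are illustrative assumptions, not the talk's exact settings.

    # Minimal sketch of denoising with an untrained convolutional generator,
    # in the spirit of the deep decoder / deep image prior. All sizes and
    # hyperparameters here are illustrative assumptions.
    import torch
    import torch.nn as nn

    def make_generator(channels=64, out_channels=3, n_up=4):
        # Stack of 1x1 convs + bilinear upsampling, loosely following the
        # deep decoder's avoidance of learned convolutions over space.
        layers = []
        for _ in range(n_up):
            layers += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.ReLU(),
                nn.BatchNorm2d(channels),
            ]
        layers += [nn.Conv2d(channels, out_channels, kernel_size=1), nn.Sigmoid()]
        return nn.Sequential(*layers)

    def denoise(noisy, steps=1500):
        # noisy: (1, 3, H, W) tensor in [0, 1]; H and W divisible by 16 here.
        h, w = noisy.shape[-2] // 16, noisy.shape[-1] // 16
        net = make_generator()
        z = torch.randn(1, 64, h, w)   # fixed random input code
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)
        for _ in range(steps):
            # Early stopping is the regularizer: fit long enough to capture
            # the image structure, but not long enough to fit the noise.
            opt.zero_grad()
            loss = ((net(z) - noisy) ** 2).mean()
            loss.backward()
            opt.step()
        return net(z).detach()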

Date and Time: 
Wednesday, April 8, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN and EE292E present "An Integrated 6DoF Video Camera and System Design"

Topic: 
An Integrated 6DoF Video Camera and System Design
Abstract / Description: 

Designing a fully integrated 360° video camera supporting 6DoF head motion parallax requires overcoming many technical hurdles, including camera placement, optical design, sensor resolution, system calibration, real-time video capture, depth reconstruction, and real-time novel view synthesis. While there is a large body of work describing various system components, such as multi-view depth estimation, our paper is the first to describe a complete, reproducible system that considers the challenges arising when designing, building, and deploying a full end-to-end 6DoF video camera and playback environment. Our system includes a computational imaging software pipeline supporting online markerless calibration, high-quality reconstruction, and real-time streaming and rendering. Most of our exposition is based on a professional 16-camera configuration, which will be commercially available to film producers. However, our software pipeline is generic and can handle a variety of camera geometries and configurations. The entire calibration and reconstruction software pipeline, along with example datasets, is open-sourced to encourage follow-up research in high-quality 6DoF video reconstruction and rendering.

More information: An Integrated 6DoF Video Camera and System Design

Open source repository: https://github.com/facebook/facebook360_dep

Date and Time: 
Wednesday, March 4, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Deep Learning for Practical and Robust View Synthesis"

Topic: 
Deep Learning for Practical and Robust View Synthesis
Abstract / Description: 

I will present recent work ("Local Light Field Fusion") on a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Our view synthesis algorithm operates on an irregular grid of sampled views, first expanding each sampled view into a local light field via a multiplane image (MPI) scene representation, then rendering novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we can apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist-rate view sampling while using up to 4000x fewer views.
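
As a rough illustration of the two rendering operations named above, the sketch below composites a single multiplane image with the standard back-to-front "over" operator and then blends renderings from adjacent local light fields. Plane warping to the target pose, the actual blending weights, and the network that predicts the MPIs are all omitted; shapes and weights are assumptions for illustration.

    # Sketch of MPI compositing and view blending (NumPy). Treat `planes`
    # as already warped to the target pose; weights are assumed given.
    import numpy as np

    def composite_mpi(rgb, alpha):
        # rgb: (D, H, W, 3), alpha: (D, H, W, 1); plane 0 is nearest.
        # Standard back-to-front "over" compositing.
        out = np.zeros(rgb.shape[1:])
        for d in reversed(range(rgb.shape[0])):   # far to near
            out = alpha[d] * rgb[d] + (1.0 - alpha[d]) * out
        return out

    def blend_views(views, weights):
        # views: list of (H, W, 3) renderings from nearby local light fields;
        # weights: per-view scalars (e.g., falling off with the distance from
        # each source view to the target view), summing to 1.
        return sum(w * v for w, v in zip(weights, views))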

Date and Time: 
Wednesday, February 26, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "A fundamentally new sensing approach for high-level autonomous driving"

Topic: 
A fundamentally new sensing approach for high-level autonomous driving
Abstract / Description: 

It has been fifteen years since Stanford won the DARPA Grand Challenge. Since then, a new race is underway in the automotive industry to fulfill the ultimate mobility dream: driverless vehicles for all. Trying to elevate test track vehicles to automotive grade products has been a humbling experience for everyone. The mainstream approach has relied primarily on brute force for both hardware and software. Sensors in particular have become overly complex and unscalable in response to escalating hardware requirements. To reverse this trend, Perceptive has been developing a fundamentally new sensing platform for fully autonomous vehicles. Based on Digital Remote Imaging and Edge AI, the platform shifts the complexity to the software and scales with the compute. This talk will discuss the system architecture and underlying physics.

Date and Time: 
Wednesday, February 19, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Beyond lenses: Computational imaging with a light-modulating mask"

Topic: 
Beyond lenses: Computational imaging with a light-modulating mask
Abstract / Description: 

The lens has long been a central element of cameras; its role is to refract light to achieve a one-to-one mapping between a point in the scene and a point on the sensor. We propose a radical departure from this practice and the limitations it imposes. In this talk I will discuss our recent efforts to build extremely thin imaging devices by replacing the lens in a conventional camera with a light-modulating mask and computational reconstruction algorithms. These lensless cameras can be less than a millimeter in thickness and enable applications where size, weight, thickness, or cost are the driving factors.
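
To make the "mask plus reconstruction algorithm" idea concrete, here is a toy NumPy sketch under the common simplifying assumption that the mask acts as a convolution with a known point spread function; the Tikhonov-regularized inverse below stands in for the more sophisticated reconstruction algorithms a real lensless camera would use.

    # Toy sketch of mask-based lensless imaging: the sensor measurement is
    # modeled as the scene convolved with the mask's point spread function
    # (PSF), and the scene is recovered by regularized deconvolution. A real
    # system needs a calibrated PSF, sensor cropping, and a stronger prior.
    import numpy as np

    def measure(scene, psf, noise_sigma=0.01):
        # Forward model: y = psf * scene + noise (circular convolution for
        # brevity; psf is assumed to be the same shape as the scene).
        y = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
        return y + noise_sigma * np.random.randn(*y.shape)

    def reconstruct(y, psf, lam=1e-2):
        # Tikhonov-regularized inverse: X = conj(H) Y / (|H|^2 + lam).
        H, Y = np.fft.fft2(psf), np.fft.fft2(y)
        return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))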

Date and Time: 
Wednesday, February 12, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Towards Immersive Telepresence: Stereoscopic 360-degree Vision in Realtime"

Topic: 
Towards Immersive Telepresence: Stereoscopic 360-degree Vision in Realtime
Abstract / Description: 

Technological advances in immersive telepresence are greatly impeded by the challenge of mediating a realistic feeling of presence in a remote environment to a local human user. Providing a stereoscopic 360° visual representation of the distant scene further heightens the realism and greatly improves task performance. State-of-the-art systems have primarily relied on catadioptric or multi-camera designs to address this issue. Current solutions are bulky, not real-time capable, and tend to produce erroneous image content due to the stitching processes involved, which are prone to perform poorly for texture-less scenes. In this talk, I will introduce a vision-on-demand approach that creates stereoscopic scene information upon request. A real-time capable camera system, along with a novel deep-learning-based delay-compensation paradigm, will be presented that provides instant visual feedback for highly immersive telepresence.

Date and Time: 
Wednesday, February 5, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "DiffuserCam: Lenseless single-exposure 3D imaging"

Topic: 
DiffuserCam: Lensless single-exposure 3D imaging
Abstract / Description: 

Traditional lenses are optimized for 2D imaging, which prevents them from capturing extra dimensions of the incident light field (e.g. depth or high-speed dynamics) without multiple exposures or moving parts. Leveraging ideas from compressed sensing, I replace the lens of a traditional camera with a single pseudorandom free-form optic called a diffuser. The diffuser creates a pseudorandom point spread function which multiplexes these extra dimensions into a single 2D exposure taken with a standard sensor. The image is then recovered by solving a sparsity-constrained inverse problem. This lensless camera, dubbed DiffuserCam, is capable of snapshot 3D imaging at video rates, encoding a high-speed video (>4,500 fps) into a single rolling-shutter exposure, and video-rate 3D imaging of fluorescence signals, such as neurons, in a device weighing under 3 grams.
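
The "sparsity-constrained inverse problem" can be made concrete with a toy version of the usual proximal-gradient (ISTA) iteration. The 2D circular-convolution forward model and all parameters below are simplifying assumptions; the actual DiffuserCam reconstruction is 3D and accounts for sensor cropping.

    # Toy ISTA sketch for argmin_x 0.5*||A x - y||^2 + lam*||x||_1, with A
    # modeled as convolution with the diffuser's pseudorandom PSF. This 2D
    # version only shows the shape of the iteration.
    import numpy as np

    def ista_recover(y, psf, lam=1e-3, iters=200):
        H = np.fft.fft2(psf)
        step = 1.0 / (np.max(np.abs(H)) ** 2)   # 1 / Lipschitz constant
        A  = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * H))           # forward
        At = lambda r: np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(H)))  # adjoint
        x = np.zeros_like(y)
        for _ in range(iters):
            x = x - step * At(A(x) - y)                               # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
        return x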

Date and Time: 
Wednesday, January 22, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Matching Visual Acuity and Prescription: Towards AR for Humans"

Topic: 
"Matching Visual Acuity and Prescription: Towards AR for Humans"
Abstract / Description: 

In this talk, Dr. Jonghyun Kim will present two recent AR display prototypes inspired by the human visual system. The first, Foveated AR, dynamically provides a high-resolution virtual image to the user's foveal region based on the tracked gaze. The second, Prescription AR, is a prescription-embedded, fully customized AR display system that works as the user's eyeglasses and as an AR display at the same time. Finally, he will discuss important issues for socially acceptable AR display systems, including customization, privacy, fashion, and eye-contact interaction, and how they relate to the display technologies.
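
As a software analogue of the foveation idea (the prototype itself realizes it optically, with separate foveal and peripheral displays), the sketch below blends a high-resolution rendering into a low-resolution periphery around the tracked gaze point; the Gaussian mask and its width are illustrative assumptions.

    # Illustrative foveated compositing: high detail at the gaze point,
    # cheap rendering elsewhere. Inputs and mask shape are assumptions.
    import numpy as np

    def foveated_composite(periphery, fovea, gaze_xy, sigma=40.0):
        # periphery, fovea: (H, W, 3) images at the same size (fovea is the
        # high-resolution rendering, periphery an upsampled low-res one);
        # gaze_xy: (x, y) pixel coordinates from the eye tracker.
        h, w = periphery.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        r2 = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2
        m = np.exp(-r2 / (2 * sigma ** 2))[..., None]   # soft foveal mask
        return m * fovea + (1.0 - m) * periphery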

Date and Time: 
Wednesday, January 15, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Towards immersive AR experiences in monocular video"

Topic: 
Towards immersive AR experiences in monocular video
Abstract / Description: 

AR on handheld, monocular, "through-the-camera" platforms such as mobile phones is a challenging task. While traditional, geometry-based approaches provide useful data in certain scenarios, for truly immersive experiences we need to leverage the prior knowledge encapsulated in learned CNNs. In this talk I will discuss the capabilities and limitations of such traditional methods, the need for CNN-based solutions, and the challenges of training accurate and efficient CNNs for this task. I will describe our recent work on implicit 3D representations for AR, with applications in novel view synthesis, scene reconstruction, and arbitrary object manipulation. Finally, I will present a project opportunity: learning such representations from a dataset of single images.
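
As a hint of what an "implicit 3D representation" means in code, the sketch below is a generic coordinate MLP that maps a 3D point to color and density, which a volume renderer could query along camera rays for view synthesis. The positional encoding and layer sizes are generic assumptions, not the speaker's model.

    # Generic implicit scene representation: a coordinate MLP queried
    # per 3D point. Architecture details are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ImplicitField(nn.Module):
        def __init__(self, hidden=128, n_freqs=6):
            super().__init__()
            self.n_freqs = n_freqs
            in_dim = 3 * 2 * n_freqs            # sin/cos encoding of (x, y, z)
            self.mlp = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),           # RGB + density
            )

        def forward(self, xyz):                 # xyz: (N, 3) points
            freqs = 2.0 ** torch.arange(self.n_freqs) * torch.pi
            enc = torch.cat([torch.sin(xyz[..., None] * freqs),
                             torch.cos(xyz[..., None] * freqs)], dim=-1)
            out = self.mlp(enc.flatten(-2))
            return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])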

Date and Time: 
Wednesday, January 8, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN Colloquium and EE292E present "How to Learn a Camera"

Topic: 
How to Learn a Camera
Abstract / Description: 

Traditionally, the image processing pipelines of consumer cameras have been carefully designed, hand-engineered systems. But treating an imaging pipeline as something to be learned instead of something to be engineered has the potential benefits of being faster, more accurate, and easier to tune. Relying on learning in this fashion presents a number of challenges, such as fidelity, fairness, and data collection, which can be addressed through careful consideration of neural network architectures as they relate to the physics of image formation. In this talk I'll be presenting recent work from Google's computational photography research team on using machine learning to replace traditional building blocks of a camera pipeline. I will present learning-based solutions for the classic tasks of denoising, white balance, and tone mapping, each of which uses a bespoke ML architecture designed around the specific constraints and demands of the task. By designing learning-based solutions around the structure provided by optics and camera hardware, we are able to produce state-of-the-art solutions to these three tasks in terms of both accuracy and speed.
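
As a flavor of how one such learned block can slot into a pipeline where a hand-tuned heuristic used to be, here is a hedged sketch of a tiny network that predicts global per-channel white-balance gains and applies them as a diagonal (von Kries) correction. It is only a stand-in: Google's actual white-balance models are far more specialized than this.

    # Illustrative learned white-balance block. The architecture and the
    # global-gain formulation are assumptions for the sketch, not the
    # talk's actual model.
    import torch
    import torch.nn as nn

    class LearnedWhiteBalance(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),        # global image statistics
            )
            self.head = nn.Linear(32, 3)        # log-gains for R, G, B

        def forward(self, raw_rgb):             # (N, 3, H, W) linear image
            gains = torch.exp(self.head(self.features(raw_rgb).flatten(1)))
            return raw_rgb * gains[:, :, None, None]  # per-channel correction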

Date and Time: 
Wednesday, December 4, 2019 - 4:30pm
Venue: 
Packard 101
