EE Student Information

EE Student Information, Spring Quarter 19-20: FAQs and Updated EE Course List.

Updates will be posted on this page, as well as emailed to the EE student mail list.

Please see Stanford University Health Alerts for course and travel updates.

As always, use your best judgment and consider your own and others' well-being at all times.

SCIEN Talk

SCIEN presents "Codification Design in Compressive Imaging"

Topic: 
Codification Design in Compressive Imaging
Abstract / Description: 

Compressive imaging enables faster acquisition by capturing coded projections of the scene. Codification elements used in compressive imaging systems include lithographic masks, gratings, and micro-polarizers, which sense spatial, spectral, and temporal data. Codification plays a key role in compressive imaging, as it determines the number of projections needed for correct reconstruction. In general, random coding patterns are sufficient for accurate reconstruction. Still, more recent studies have shown that code design not only yields improved image reconstructions but can also reduce the number of required projections. This talk covers different tools for codification design in compressive imaging, such as the restricted isometry property and geometric and deep-learning approaches. Applications in compressive spectral video, compressive X-ray computed tomography, and seismic acquisition will also be discussed.
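As a generic illustration of the pipeline the abstract describes (coded projections of a sparse scene, then reconstruction from fewer measurements than unknowns), the following NumPy sketch senses a sparse signal through a random ±1 code and recovers it with iterative soft-thresholding. The dimensions, the code pattern, and the ISTA solver are illustrative assumptions, not the speaker's designs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse "scene": an n-dimensional signal with only k nonzero entries.
n, k, m = 200, 5, 80
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Random binary code (a stand-in for, e.g., a lithographic mask pattern):
# m coded projections, far fewer than the n unknowns.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x  # compressive measurements

def ista(A, y, lam=0.01, iters=3000):
    """Iterative soft-thresholding (ISTA) for the LASSO objective
    0.5*||A z - y||^2 + lam*||z||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(iters):
        g = z - A.T @ (A @ z - y) / L      # gradient step
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return z

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"relative reconstruction error: {rel_err:.4f}")
```

With a random code, recovery succeeds here even though m < n; designed codes (the subject of the talk) aim to reduce m further or improve accuracy.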

Date and Time: 
Wednesday, June 3, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN presents "Mojo Lens, the First True Smart Contact Lens"

Topic: 
Mojo Lens, the First True Smart Contact Lens
Abstract / Description: 

After working in stealth for over 4 years, Mojo Vision recently unveiled the first working prototype of Mojo Lens, a smart contact lens designed to deliver augmented reality content wherever you look. This talk will provide an overview of the company, its vision of "Invisible Computing", the science behind the world's first contact lens display, and a first-hand account of what it's like to actually wear Mojo Lens.

Date and Time: 
Wednesday, May 27, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN presents "Learned Image Synthesis for Computational Displays"

Topic: 
Learned Image Synthesis for Computational Displays
Abstract / Description: 

Addressing vergence-accommodation conflict in head-mounted displays (HMDs) requires resolving two interrelated problems. First, the hardware must support viewing sharp imagery over the full accommodation range of the user. Second, HMDs should accurately reproduce retinal defocus blur to correctly drive accommodation. A multitude of accommodation-supporting HMDs have been proposed, with three architectures receiving particular attention: varifocal, multifocal, and light field displays. These designs all extend depth of focus, but rely on computationally expensive rendering and optimization algorithms to reproduce accurate defocus blur (often limiting content complexity and interactive applications). To date, no unified framework has been proposed to support driving these emerging HMDs using commodity content. In this talk, we will present DeepFocus, a generic, end-to-end convolutional neural network designed to efficiently solve the full range of computational tasks for accommodation-supporting HMDs. This network is demonstrated to accurately synthesize defocus blur, focal stacks, multilayer decompositions, and multiview imagery using only commonly available RGB-D images, enabling real-time, near-correct depictions of retinal blur with a broad set of accommodation-supporting HMDs.
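For intuition about the retinal defocus blur that such systems must reproduce, the standard thin-lens, small-angle model says the angular diameter of the blur circle is roughly the pupil diameter times the defocus in diopters. The sketch below applies that textbook approximation to an RGB-D-style depth map; it is only the geometric model, not the DeepFocus network, and the pupil size and depths are made-up values.

```python
import numpy as np

def defocus_blur_radians(depth_m, focus_m, pupil_mm=4.0):
    """Angular diameter (radians) of the retinal blur circle under the
    thin-lens small-angle approximation:
    blur ~= pupil diameter * |defocus in diopters|."""
    defocus_diopters = np.abs(1.0 / depth_m - 1.0 / focus_m)
    return (pupil_mm * 1e-3) * defocus_diopters

# Toy depth map (meters): a near object in one corner, background at 4 m.
depth = np.full((4, 4), 4.0)
depth[:2, :2] = 0.5

blur = defocus_blur_radians(depth, focus_m=4.0)
print(blur[0, 0], blur[3, 3])  # near object blurred; background in focus
```

A per-pixel map like this is what an accommodation-supporting display would need to render, which is why the abstract emphasizes doing it efficiently from commodity RGB-D content.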

Date and Time: 
Wednesday, May 20, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN welcomes Dr. Adam Rowell, Lucid

Topic: 
TBA
Abstract / Description: 

Adam is the co-founder and CTO of Lucid. His research in computer vision and signal processing as a PhD student at Stanford powers Lucid's 3D Fusion Technology, the world's first AI-based 3D and depth-capture technology that mimics human vision in dual- and multi-camera devices. It is deployed in Lucid's own product, the LucidCam, as well as in mobile phones and robots, with autonomous cars as a target application. He previously worked in industry for many years as a consultant at Exponent, focused on machine learning and computer vision development for consumer, business, and military applications, coding and leading engineering teams building some of the most advanced GPU/NPU-based systems in the industry. He then joined Maxim Integrated's Advanced Analytics team, helping optimize the organization of the 10,000-employee public company. Adam defines the technology direction and leads Lucid's engineering team.

Date and Time: 
Wednesday, May 13, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN Seminar presents "Bio-inspired depth sensing using computational optics"

Topic: 
Bio-inspired depth sensing using computational optics
Abstract / Description: 

Jumping spiders rely on accurate depth perception for predation and navigation. They accomplish depth perception, despite their tiny brains, by using specialized optics. Each principal eye includes a multitiered retina that simultaneously receives multiple images with different amounts of defocus, and distance is decoded from these images with seemingly little computation. In this talk, I will introduce two depth sensors that are inspired by jumping spiders. They use computational optics and build upon previous depth-from-defocus algorithms in computer vision. Both sensors operate without active illumination, and they are both monocular and computationally efficient.
The first sensor synchronizes an oscillating deformable lens with a photosensor. It produces depth and confidence maps at more than 100 frames per second and has the advantage of being able to extend its working range through optical accommodation. The second sensor uses a custom-designed metalens, which is an ultra-thin device with 2D nano-structures that modulate traversing light. The metalens splits incoming light and simultaneously forms two differently-defocused images on a planar photosensor, allowing the efficient computation of depth and confidence from a single snapshot in time.

Date and Time: 
Wednesday, May 6, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN Seminar presents "Insight into the inner workings of Intel’s Stereo and Lidar Depth Cameras"

Topic: 
Insight into the inner workings of Intel’s Stereo and Lidar Depth Cameras
Abstract / Description: 

This talk will provide an overview of the technology and capabilities of Intel's RealSense Stereo and Lidar Depth Cameras, and will then progress to describe new features, such as high-speed capture, multi-camera enhancements, optical filtering, and near-range high-resolution depth imaging. Finally, we will introduce a new fast on-chip calibration method that can be used to improve the performance of a stereo camera and help mitigate some common stereo artifacts.
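The depth half of a stereo camera reduces to triangulation: depth equals focal length times baseline divided by disparity. A minimal sketch (the focal length and baseline below are illustrative numbers, not RealSense specifications):

```python
def stereo_depth_m(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Triangulated depth from stereo disparity: z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Example: 640 px focal length, 50 mm baseline, 16 px disparity.
z = stereo_depth_m(16.0, 640.0, 0.050)
print(z)  # 2.0 m
```

Because depth scales as 1/disparity, a fixed disparity error produces a depth error that grows roughly quadratically with distance, which is one reason calibration quality and near-range high-resolution modes matter so much for these cameras.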

Date and Time: 
Wednesday, April 29, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN Seminar presents "The Extreme Science of Building High-Performing Compact Cameras for Space Applications"

Topic: 
The Extreme Science of Building High-Performing Compact Cameras for Space Applications
Abstract / Description: 

A thickening flock of earth-observing satellites blankets the planet: over 700 were launched during the past 10 years, and more than 2,200 more are scheduled to go up within the next 10 years. At the same time, satellite platforms and instruments are being miniaturized year on year to improve cost-efficiency, with the same expectations of high spatial and spectral resolution. But what does it take to build imaging systems that perform well in the harsh environment of space while remaining cost-efficient and compact? This talk will touch upon the technical issues associated with the design, fabrication, and characterization of such compact, cost-efficient, high-performing space cameras, using the example of the imager that Pixxel has built as part of its plans for earth-imaging satellite constellations.

Date and Time: 
Wednesday, April 22, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN Seminar presents "The Role of Fundamental Limits in 3D Imaging Systems: From Looking around Corners to Fast 3D Cameras"

Topic: 
The Role of Fundamental Limits in 3D Imaging Systems: From Looking around Corners to Fast 3D Cameras
Abstract / Description: 

Knowledge of limits is a precious commodity in computational imaging: by knowing that our imaging device already operates at the physical limit (e.g., of resolution), we can avoid unnecessary investments in better hardware, such as faster detectors, better optics, or cameras with higher pixel resolution. Moreover, limits often appear as uncertainty products, making it possible to bargain with nature for a better measurement by sacrificing less important information.

In this talk, the role of physical and information limits in computational imaging will be discussed using examples from two of my recent projects: 'Synthetic Wavelength Holography' and the 'Single-Shot 3D Movie Camera'.

Synthetic Wavelength Holography is a novel method to image hidden objects around corners and through scattering media. While other approaches rely on time-of-flight detectors, which suffer from technical limitations in spatial and temporal resolution, Synthetic Wavelength Holography works at the physical limit of the space-bandwidth product. Full field measurements of hidden objects around corners or through scatterers reaching sub-mm resolution will be presented.

The single-shot 3D movie camera is a highly precise 3D sensor for the measurement of fast macroscopic live scenes. From each 1 Mpix camera frame, the sensor delivers 300,000 independent 3D points with high resolution. The single-shot ability allows for a continuous 3D measurement of fast-moving or deforming objects, resulting in a continuous 3D movie. Like a hologram, each movie frame encompasses the full 3D information about the object surface, and the observation perspective can be varied while watching the 3D movie.

Date and Time: 
Wednesday, April 15, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN Seminar presents "Image recovery with untrained convolutional neural networks"

Topic: 
Image recovery with untrained convolutional neural networks
Abstract / Description: 

Convolutional Neural Networks are highly successful tools for image recovery and restoration. A major contributing factor to this success is that convolutional networks impose strong prior assumptions about natural images—so strong that they enable image recovery without any training data. A surprising observation that highlights those prior assumptions is that one can remove noise from a corrupted natural image by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the noisy image.
In this talk, we discuss a simple untrained convolutional network, called the deep decoder, that provably enables image denoising and regularization of inverse problems such as compressive sensing with excellent performance. We formally characterize the dynamics of fitting this convolutional network to a noisy signal and to an under-sampled signal, and show that in both cases early-stopped gradient descent provably recovers the clean signal. Finally, we discuss our own numerical results, as well as results from another group, demonstrating that untrained convolutional networks enable magnetic resonance imaging from highly under-sampled measurements.
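The "fit the signal before the noise, then stop early" dynamic described above can be illustrated with a linear stand-in for an untrained convolutional generator: an orthogonal cosine basis with a decaying spectrum, so that gradient descent fits smooth (signal-like) components faster than rough (noise-like) ones. Everything below (the basis, the spectrum, the toy signal) is a made-up illustration of the principle, not the deep decoder architecture or the paper's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)

# Clean low-frequency signal plus additive noise.
clean = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 5 * t / n)
noisy = clean + 0.3 * rng.normal(size=n)

# Linear "generator" x = G @ w: an orthogonal DCT-II cosine basis whose
# effective singular values decay with frequency, mimicking the spectral
# bias of convolutional architectures.
freqs = np.arange(n)
U = np.cos(np.pi * np.outer(t + 0.5, freqs) / n)
U /= np.linalg.norm(U, axis=0)       # orthonormal columns
s = 1.0 / (1.0 + freqs)              # decaying spectrum
G = U * s

# Fit the generator to the NOISY signal by plain gradient descent,
# tracking the error to the (normally unknown) CLEAN signal.
w = np.zeros(n)
lr = 1.0 / np.max(s) ** 2
errs = []
for step in range(3000):
    r = G @ w - noisy
    w -= lr * (G.T @ r)
    errs.append(np.linalg.norm(G @ w - clean))

print(f"best (early-stopped) error: {min(errs):.2f}, "
      f"final error: {errs[-1]:.2f}, "
      f"noisy input error: {np.linalg.norm(noisy - clean):.2f}")
```

Low-frequency components converge fast and high-frequency ones slowly, so the error to the clean signal dips well below the noise level mid-training and then creeps back up as noise is fit, which is exactly why early stopping acts as a regularizer.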

Date and Time: 
Wednesday, April 8, 2020 - 4:30pm
Venue: 
Zoom (join SCIEN mail list to receive meeting ID)

SCIEN and EE292E present "An Integrated 6DoF Video Camera and System Design"

Topic: 
An Integrated 6DoF Video Camera and System Design
Abstract / Description: 

Designing a fully integrated 360° video camera supporting 6DoF head motion parallax requires overcoming many technical hurdles, including camera placement, optical design, sensor resolution, system calibration, real-time video capture, depth reconstruction, and real-time novel view synthesis. While there is a large body of work describing various system components, such as multi-view depth estimation, our paper is the first to describe a complete, reproducible system that considers the challenges arising when designing, building, and deploying a full end-to-end 6DoF video camera and playback environment. Our system includes a computational imaging software pipeline supporting online markerless calibration, high-quality reconstruction, and real-time streaming and rendering. Most of our exposition is based on a professional 16-camera configuration, which will be commercially available to film producers. However, our software pipeline is generic and can handle a variety of camera geometries and configurations. The entire calibration and reconstruction software pipeline along with example datasets is open sourced to encourage follow-up research in high-quality 6DoF video reconstruction and rendering.

More information: An Integrated 6DoF Video Camera and System Design

Open source repository: https://github.com/facebook/facebook360_dep

Date and Time: 
Wednesday, March 4, 2020 - 4:30pm
Venue: 
Packard 101
