EE Student Information

EE Student Information, Spring Quarter 19-20: FAQs and Updated EE Course List.

Updates will be posted on this page, as well as emailed to the EE student mail list.

Please see Stanford University Health Alerts for course and travel updates.

As always, use your best judgment and consider your own and others' well-being at all times.

SCIEN Talk

SCIEN and EE292E present "Deep Learning for Practical and Robust View Synthesis"

Topic: 
Deep Learning for Practical and Robust View Synthesis
Abstract / Description: 

I will present recent work ("Local Light Field Fusion") on a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Our view synthesis algorithm operates on an irregular grid of sampled views, first expanding each sampled view into a local light field via a multiplane image (MPI) scene representation, then rendering novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we can apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views.
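The multiplane image (MPI) representation mentioned above renders a view by alpha-compositing a stack of fronto-parallel RGBA layers from back to front. The following is a minimal sketch of that compositing step only, an illustration rather than the authors' code; the layer ordering and array shapes are assumptions:

```python
import numpy as np

def composite_mpi(layers):
    """Composite MPI layers back to front with "over" alpha blending.

    layers: array of shape (D, H, W, 4), ordered back (index 0) to front;
            RGB and alpha values in [0, 1].
    Returns an (H, W, 3) rendered image.
    """
    out = np.zeros(layers.shape[1:3] + (3,))
    for layer in layers:
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        # "Over" operator: the new layer covers what is behind it by alpha.
        out = rgb * alpha + out * (1.0 - alpha)
    return out
```

A half-transparent front layer blends equally with the layer behind it; a fully opaque front layer hides everything behind it.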

Date and Time: 
Wednesday, February 26, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "A fundamentally new sensing approach for high-level autonomous driving"

Topic: 
A fundamentally new sensing approach for high-level autonomous driving
Abstract / Description: 

It has been fifteen years since Stanford won the DARPA Grand Challenge. Since then, a new race is underway in the automotive industry to fulfill the ultimate mobility dream: driverless vehicles for all. Trying to elevate test track vehicles to automotive grade products has been a humbling experience for everyone. The mainstream approach has relied primarily on brute force for both hardware and software. Sensors in particular have become overly complex and unscalable in response to escalating hardware requirements. To reverse this trend, Perceptive has been developing a fundamentally new sensing platform for fully autonomous vehicles. Based on Digital Remote Imaging and Edge AI, the platform shifts the complexity to the software and scales with the compute. This talk will discuss the system architecture and underlying physics.

Date and Time: 
Wednesday, February 19, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Beyond lenses: Computational imaging with a light-modulating mask"

Topic: 
Beyond lenses: Computational imaging with a light-modulating mask
Abstract / Description: 

The lens has long been a central element of cameras, its role to refract light to achieve a one-to-one mapping between a point in the scene and a point on the sensor. We propose a radical departure from this practice and the limitations it imposes. In this talk I will discuss our recent efforts to build extremely thin imaging devices by replacing the lens in a conventional camera with a light-modulating mask and computational reconstruction algorithms. These lensless cameras can be less than a millimeter in thickness and enable applications where size, weight, thickness, or cost are the driving factors.
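For intuition, a mask-based lensless camera is commonly modeled as the scene convolved with the mask's point spread function, and the scene is then recovered by regularized deconvolution. The sketch below illustrates that generic model with a simple Wiener-style inverse filter; it is not the speaker's actual reconstruction algorithm, and it assumes periodic boundary conditions:

```python
import numpy as np

def forward(scene, psf):
    """Lensless measurement model: sensor image = scene (*) mask PSF.

    Uses circular convolution via the FFT; scene and psf share one shape.
    """
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

def wiener_deconvolve(measurement, psf, reg=1e-3):
    """Recover the scene by regularized inverse filtering in the Fourier domain."""
    H = np.fft.fft2(psf)
    M = np.fft.fft2(measurement)
    # Tikhonov-regularized inverse: conj(H) / (|H|^2 + reg) tames small |H|.
    X = np.conj(H) * M / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft2(X))
```

With a well-conditioned PSF and a small regularizer, the round trip of `forward` followed by `wiener_deconvolve` approximately returns the original scene.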

Date and Time: 
Wednesday, February 12, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Towards Immersive Telepresence: Stereoscopic 360-degree Vision in Realtime"

Topic: 
Towards Immersive Telepresence: Stereoscopic 360-degree Vision in Realtime
Abstract / Description: 

Progress in immersive telepresence is greatly impeded by the challenge of mediating a realistic feeling of presence in a remote environment to a local human user. Providing a stereoscopic 360° visual representation of the distant scene heightens the level of realism and greatly improves task performance. State-of-the-art systems have primarily relied on catadioptric or multi-camera designs to address this issue. Current solutions are bulky, not realtime-capable, and tend to produce erroneous image content due to the stitching processes involved, which perform poorly for texture-less scenes. In this talk, I will introduce a vision-on-demand approach that creates stereoscopic scene information upon request. A realtime-capable camera system, along with a novel deep-learning-based delay-compensation paradigm, will be presented that provides instant visual feedback for highly immersive telepresence.

Date and Time: 
Wednesday, February 5, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "DiffuserCam: Lensless single-exposure 3D imaging"

Topic: 
DiffuserCam: Lensless single-exposure 3D imaging
Abstract / Description: 

Traditional lenses are optimized for 2D imaging, which prevents them from capturing extra dimensions of the incident light field (e.g. depth or high-speed dynamics) without multiple exposures or moving parts. Leveraging ideas from compressed sensing, I replace the lens of a traditional camera with a single pseudorandom free-form optic called a diffuser. The diffuser creates a pseudorandom point spread function which multiplexes these extra dimensions into a single 2D exposure taken with a standard sensor. The image is then recovered by solving a sparsity-constrained inverse problem. This lensless camera, dubbed DiffuserCam, is capable of snapshot 3D imaging at video rates, encoding a high-speed video (>4,500 fps) into a single rolling-shutter exposure, and video-rate 3D imaging of fluorescence signals, such as neurons, in a device weighing under 3 grams.
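The sparsity-constrained inverse problem mentioned above typically takes the form min_x 0.5·||Ax − y||² + λ·||x||₁. The following is a minimal iterative soft-thresholding (ISTA) sketch for that generic problem; DiffuserCam's actual solver and its convolutional forward model differ:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    A: (m, n) measurement matrix, y: (m,) measurement vector.
    Returns a sparse estimate x of shape (n,).
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)             # gradient of the data-fidelity term
        z = x - step * grad                  # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```

The soft-threshold step zeroes out small coefficients, which is what enforces sparsity in the recovered signal.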

Date and Time: 
Wednesday, January 22, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Matching Visual Acuity and Prescription: Towards AR for Humans"

Topic: 
Matching Visual Acuity and Prescription: Towards AR for Humans
Abstract / Description: 

In this talk, Dr. Jonghyun Kim will present two recent AR display prototypes inspired by the human visual system. The first, Foveated AR, dynamically provides a high-resolution virtual image to the user's foveal region based on the tracked gaze. The second, Prescription AR, is a prescription-embedded, fully customized AR display system that works as the user's eyeglasses and AR display at the same time. Finally, he will discuss important issues for socially acceptable AR display systems, including customization, privacy, fashion, and eye-contact interaction, and how they relate to the display technologies.

Date and Time: 
Wednesday, January 15, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN and EE292E present "Towards immersive AR experiences in monocular video"

Topic: 
Towards immersive AR experiences in monocular video
Abstract / Description: 

AR on handheld, monocular, "through-the-camera" platforms such as mobile phones is a challenging task. While traditional, geometry-based approaches provide useful data in certain scenarios, truly immersive experiences require leveraging the prior knowledge encapsulated in learned CNNs. In this talk I will discuss the capabilities and limitations of such traditional methods, the need for CNN-based solutions, and the challenges of training accurate and efficient CNNs on this task. I will describe our recent work on implicit 3D representations for AR, with applications in novel view synthesis, scene reconstruction, and arbitrary object manipulation. Finally, I will present a project opportunity to learn such representations from a dataset of single images.

Date and Time: 
Wednesday, January 8, 2020 - 4:30pm
Venue: 
Packard 101

SCIEN Colloquium and EE 292E present "How to Learn a Camera"

Topic: 
How to Learn a Camera
Abstract / Description: 

Traditionally, the image processing pipelines of consumer cameras have been carefully designed, hand-engineered systems. But treating an imaging pipeline as something to be learned instead of something to be engineered has the potential benefits of being faster, more accurate, and easier to tune. Relying on learning in this fashion presents a number of challenges, such as fidelity, fairness, and data collection, which can be addressed through careful consideration of neural network architectures as they relate to the physics of image formation. In this talk I'll be presenting recent work from Google's computational photography research team on using machine learning to replace traditional building blocks of a camera pipeline. I will present learning-based solutions for the classic tasks of denoising, white balance, and tone mapping, each of which uses a bespoke ML architecture designed around the specific constraints and demands of that task. By designing learning-based solutions around the structure provided by optics and camera hardware, we are able to produce state-of-the-art solutions to these three tasks in terms of both accuracy and speed.
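As a deliberately simplified illustration of treating a pipeline block as something to be learned rather than engineered, per-channel white-balance gains can be fit from corresponding raw/target pixel pairs by least squares. This is a toy stand-in, not the talk's method, which uses bespoke neural architectures:

```python
import numpy as np

def fit_wb_gains(raw, target):
    """Fit per-channel white-balance gains from paired pixel data.

    raw, target: arrays of shape (N, 3) of corresponding RGB values.
    Solves three independent 1-D least-squares problems, one per channel:
    gain_c = <raw_c, target_c> / <raw_c, raw_c>.
    """
    return (raw * target).sum(axis=0) / (raw * raw).sum(axis=0)
```

Applying the fitted gains to a raw image (`raw * gains`) is then the "learned" white-balance block.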

Date and Time: 
Wednesday, December 4, 2019 - 4:30pm
Venue: 
Packard 101

SCIEN Colloquium and EE 292E present "Simulation Technologies for Image Systems Engineering"

Topic: 
Simulation Technologies for Image Systems Engineering
Abstract / Description: 

The use of imaging systems has grown enormously over the last several decades; these systems are an essential component in mobile communication, medicine, and automotive applications. As imaging applications have expanded, the complexity of imaging system hardware - from optics to electronics - has increased dramatically. This increased complexity makes software prototyping an essential tool for the design of novel systems and the evaluation of components. I will describe several simulations we created for image systems engineering applications: (a) designing cameras for autonomous vehicles [1], (b) simulating image encoding by the human eye and retina for image quality assessment [2], and (c) assessing the spatial sensitivity of CNNs for multiple applications [3]. This is a good moment to consider how academia and industry might cooperate to create an image systems simulation infrastructure that speeds the development of new systems for the many opportunities that will arise over the next few decades.

Date and Time: 
Wednesday, November 20, 2019 - 4:30pm
Venue: 
Packard 101

SCIEN Research Initiatives

Topic: 
various research initiatives
Abstract / Description: 

The Stanford Center for Image Systems Engineering (SCIEN) holds an annual meeting for its Industry Affiliate Member companies. The talks introduce new Stanford faculty who are advancing imaging science and technology. The poster presentations introduce postdoctoral researchers and graduate students working in computational imaging, with expertise in image systems engineering, including optics, sensors, processing, machine learning, displays, and human perception.

Please see the details on the SCIEN website.

Date and Time: 
Friday, December 6, 2019 - 1:30pm
Venue: 
Packard 101 and Atrium
