SCIEN Talk

SCIEN Talk: Computational Single-Photon Imaging

Topic: 
Computational Single-Photon Imaging
Abstract / Description: 

Time-of-flight imaging and LIDAR systems enable 3D scene acquisition at long range using active illumination. This is useful for autonomous driving, robotic vision, human-computer interaction, and many other applications. The technological requirements on these imaging systems are extreme: individual photon events need to be recorded and time-stamped at a picosecond timescale, which is facilitated by emerging single-photon detectors. In this talk, we discuss a new class of computational cameras based on single-photon detectors. These enable non-line-of-sight imaging (i.e., looking around corners), efficient depth sensing, and other unprecedented imaging modalities.
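
As a rough illustration of the underlying principle (not of the speaker's specific method): a single-photon depth sensor can histogram photon arrival times and convert the peak bin's round-trip time of flight into distance. A minimal sketch in Python, with invented noise parameters:

    import numpy as np

    C = 3e8  # speed of light, m/s

    def depth_from_timestamps(timestamps_s, bin_width_s=50e-12):
        """Estimate depth from raw single-photon arrival times (seconds).

        Histogram the timestamps, take the peak bin as the round-trip
        time of flight, and convert to one-way distance: d = c * t / 2.
        """
        n_bins = int(np.ceil(timestamps_s.max() / bin_width_s)) + 1
        hist, edges = np.histogram(
            timestamps_s, bins=n_bins, range=(0.0, n_bins * bin_width_s))
        t_peak = edges[np.argmax(hist)] + bin_width_s / 2  # peak bin center
        return C * t_peak / 2

    # Example: photons returning from a target ~15 m away (100 ns round
    # trip) with ~100 ps detector jitter, plus uniform ambient background.
    rng = np.random.default_rng(0)
    signal = rng.normal(100e-9, 100e-12, size=500)
    background = rng.uniform(0.0, 200e-9, size=200)
    print(depth_from_timestamps(np.concatenate([signal, background])))  # ~15.0 m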

Date and Time: 
Wednesday, November 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Wavefront coding techniques and resolution limits for light field microscopy

Topic: 
Wavefront coding techniques and resolution limits for light field microscopy
Abstract / Description: 

Light field microscopy is a rapid, scan-less volume imaging technique that requires only a standard wide-field fluorescence microscope and a microlens array. Unlike scanning microscopes, which collect volumetric information over time, the light field microscope captures volumes synchronously in a single photographic exposure, at speeds limited only by the frame rate of the image sensor. This is made possible by the microlens array, which focuses light onto the camera sensor so that each position in the volume is mapped onto the sensor as a unique light intensity pattern. These intensity patterns are the position-dependent point response functions of the light field microscope. With prior knowledge of these point response functions, it is possible to "decode" 3-D information from a raw light field image and computationally reconstruct a full volume.

In this talk I present an optical model for light field microscopy, based on wave optics, that accurately models light field point response functions. I describe a GPU-accelerated iterative algorithm that solves for volumes, and discuss priors that are useful for reconstructing biological specimens. I then explore the diffraction limit that applies to light field microscopy, and how it gives rise to position-dependent resolution limits for this microscope. I'll explain how these limits differ from more familiar resolution metrics commonly used in 3-D scanning microscopy, such as the Rayleigh limit and the optical transfer function (OTF).

Using this theory of resolution limits, I explore new wavefront coding techniques that can modify the light field resolution limits and, at least to a degree, address certain common reconstruction artifacts. The resolution trade-offs involved suggest that light field microscopy is just one of potentially many useful forms of computational microscopy. Finally, I describe our application of light field microscopy in neuroscience, where we have used it to record calcium activity in populations of neurons within the brains of awake, behaving animals.
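
For readers unfamiliar with this style of inverse problem, the sketch below shows one standard iterative scheme, Richardson-Lucy deconvolution, applied to a linear model whose columns are point response functions; the talk's actual GPU-accelerated algorithm and priors may well differ, and the toy matrix is purely illustrative:

    import numpy as np

    def richardson_lucy(measured, A, n_iters=50, eps=1e-12):
        """Minimal Richardson-Lucy update for a linear imaging model.

        measured : raw light field image, flattened to shape (m,)
        A        : forward model mapping volume voxels to sensor pixels,
                   shape (m, n); column j is the point response of voxel j
        Returns a non-negative volume estimate of shape (n,).
        """
        v = np.ones(A.shape[1])            # flat non-negative initialization
        ones = np.ones_like(measured)
        for _ in range(n_iters):
            prediction = A @ v             # simulate the measurement
            ratio = measured / (prediction + eps)
            v *= (A.T @ ratio) / (A.T @ ones + eps)  # multiplicative update
        return v

    # Toy example: 3 voxels observed through a known mixing matrix.
    A = np.array([[1.0, 0.2, 0.0],
                  [0.1, 1.0, 0.3],
                  [0.0, 0.1, 1.0]])
    true_volume = np.array([0.0, 2.0, 1.0])
    print(richardson_lucy(A @ true_volume, A, n_iters=200).round(2))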

Date and Time: 
Wednesday, October 31, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Is it real? Deep Neural Face Reconstruction and Rendering

Topic: 
Is it real? Deep Neural Face Reconstruction and Rendering
Abstract / Description: 

A broad range of applications in visual effects, computer animation, autonomous driving, and man-machine interaction depend heavily on robust and fast algorithms to obtain high-quality reconstructions of our physical world in terms of geometry, motion, reflectance, and illumination. In particular, with the increasing popularity of virtual, augmented, and mixed reality devices, there is a rising demand for real-time, low-latency solutions.

This talk covers data-parallel optimization and state-of-the-art machine learning techniques to tackle the underlying 3D and 4D reconstruction problems based on novel mathematical models and fast algorithms. The particular focus of this talk is on self-supervised face reconstruction from a collection of unlabeled in-the-wild images. The proposed approach can be trained end-to-end without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss.

The resulting reconstructions are the foundation for advanced video editing effects, such as photo-realistic re-animation of portrait videos. The core of the proposed approach is a generative rendering-to-video translation network that takes computer graphics renderings as input and generates photo-realistic modified target videos that mimic the source content. With the ability to freely control the underlying parametric face model, we are able to demonstrate a large variety of video rewrite applications. For instance, we can reenact the full head using interactive user-controlled editing and realize high-fidelity visual dubbing.
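
The self-supervised training structure from the second paragraph can be sketched in a few lines: an encoder regresses face-model parameters, a fixed differentiable renderer maps them back to an image, and the photometric difference to the input supplies the loss. Everything in the sketch below (the tiny linear "encoder", the random fixed "renderer", PyTorch as the framework) is a hypothetical stand-in, not the speaker's implementation:

    import torch

    torch.manual_seed(0)
    encoder = torch.nn.Linear(64, 8)       # flattened image -> face parameters
    render_matrix = torch.randn(8, 64)     # fixed differentiable "renderer"
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-2)

    images = torch.rand(32, 64)            # unlabeled "in-the-wild" images
    for step in range(200):
        params = encoder(images)           # regress model parameters
        rendered = params @ render_matrix  # differentiable rendering
        # Photometric loss: the input image itself is the supervision
        # signal, so no dense annotations are needed.
        loss = torch.mean((rendered - images) ** 2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()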

Date and Time: 
Wednesday, October 24, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Computational microscopy of dynamic order across biological scales

Topic: 
Computational microscopy of dynamic order across biological scales
Abstract / Description: 

Living systems are characterized by emergent behavior of ordered components. Imaging technologies that reveal dynamic arrangement of organelles in a cell and of cells in a tissue are needed to understand the emergent behavior of living systems. I will present an overview of challenges in imaging dynamic order at the scales of cells and tissue, and discuss advances in computational label-free microscopy to overcome these challenges.

Date and Time: 
Wednesday, October 17, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: How to train neural networks on LiDAR point clouds

Topic: 
How to train neural networks on LiDAR point clouds
Abstract / Description: 

Accurate LiDAR classification and segmentation are required for developing critical ADAS and autonomous vehicle components; mainly, they are required for high-definition mapping and for developing perception and path/motion planning algorithms. This talk will cover best practices for accurately annotating and benchmarking your AV/ADAS models against LiDAR point cloud ground truth training data.
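
One concrete piece of that benchmarking workflow is scoring per-point predictions against ground-truth annotations. A minimal sketch using per-class intersection-over-union (the class scheme and data are invented):

    import numpy as np

    def per_class_iou(pred_labels, gt_labels, n_classes):
        """Benchmark predicted per-point labels against ground-truth
        annotations with intersection-over-union, per class."""
        ious = {}
        for c in range(n_classes):
            pred_c = pred_labels == c
            gt_c = gt_labels == c
            union = np.logical_or(pred_c, gt_c).sum()
            if union == 0:
                continue  # class absent from both; skip rather than score
            ious[c] = np.logical_and(pred_c, gt_c).sum() / union
        return ious

    # Toy point cloud labels (e.g., 0 = road, 1 = vehicle, 2 = pedestrian;
    # the scheme is hypothetical), with 10% simulated prediction errors.
    rng = np.random.default_rng(1)
    gt = rng.integers(0, 3, size=10_000)
    pred = np.where(rng.random(10_000) < 0.9, gt,
                    rng.integers(0, 3, size=10_000))
    print(per_class_iou(pred, gt, n_classes=3))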

Date and Time: 
Wednesday, October 10, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: The challenge of large-scale brain imaging

Topic: 
The challenge of large-scale brain imaging
Abstract / Description: 

Advanced optical microscopy techniques have enabled the recording and stimulation of large populations of neurons deep within living, intact animal brains. I will present a broad overview of these techniques, and discuss challenges that still remain in performing large-scale imaging with high spatio-temporal resolution, along with various strategies that are being adopted to address these challenges.

Date and Time: 
Wednesday, October 3, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk, eWear seminar: 'Immersive Technology and AI' with focus on mobile AR research

Topic: 
'Immersive Technology and AI' with focus on mobile AR research
Abstract / Description: 

Talk Title: "Saliency in VR: How Do People Explore Virtual Environments," presented by Vincent Sitzmann

Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.
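
To make one piece of that analysis concrete: fixation bias in omni-directional panoramas can be studied by accumulating gaze samples into an equirectangular histogram while correcting for the projection's uneven solid-angle coverage, so that an equator bias in the corrected map reflects behavior rather than geometry. A sketch with synthetic gaze data (the distributions are invented, not taken from the study):

    import numpy as np

    def gaze_heatmap(longitudes, latitudes, h=90, w=180):
        """Accumulate fixations into an equirectangular histogram and
        weight each row by the solid angle its pixels cover (~cos(lat)).

        longitudes: radians in [-pi, pi); latitudes: radians in [-pi/2, pi/2]
        """
        hist, lat_edges, _ = np.histogram2d(
            latitudes, longitudes, bins=(h, w),
            range=[[-np.pi / 2, np.pi / 2], [-np.pi, np.pi]])
        lat_centers = (lat_edges[:-1] + lat_edges[1:]) / 2
        return hist / np.cos(lat_centers)[:, None]  # per-row area correction

    # Synthetic fixations clustered near the equator, loosely mimicking
    # the kind of fixation bias the study describes.
    rng = np.random.default_rng(2)
    lon = rng.uniform(-np.pi, np.pi, 5000)
    lat = np.clip(rng.normal(0.0, 0.3, 5000), -np.pi / 2, np.pi / 2)
    heat = gaze_heatmap(lon, lat)
    print(heat.shape, heat[45].sum() > heat[5].sum())  # equator row hotter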

Talk Title: "Immersive Technology and AI" with focus on mobile AR research

Abstract: not available

Date and Time: 
Thursday, May 31, 2018 - 3:30pm
Venue: 
Spilker 232

SCIEN & EE 292E: Mobile VR for vision testing and treatment

Topic: 
Mobile VR for vision testing and treatment
Abstract / Description: 

Consumer-level HMDs are adequate for many medical applications. Vivid Vision (VV) takes advantage of their low cost, light weight, and large VR gaming code base to make vision tests and treatments. The company's software is built in the Unity framework, allowing it to run on many hardware platforms. New headsets are available every six months or less, which creates interesting challenges in the medical device space. VV's flagship product is the commercially available Vivid Vision System, used by more than 120 clinics to test and treat binocular dysfunctions such as convergence difficulties, amblyopia, strabismus, and stereo blindness. VV has recently developed a new, VR-based visual field analyzer.

Date and Time: 
Wednesday, June 6, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Emerging LIDAR concepts and sensor technologies for autonomous vehicles

Topic: 
Emerging LIDAR concepts and sensor technologies for autonomous vehicles
Abstract / Description: 

Sensor technologies such as radar, camera, and LIDAR have become the key enablers for achieving higher levels of autonomous control in vehicles, from fleet to commercial. There are, however, still open questions: to what extent will radar and camera technologies continue to improve, and which LIDAR concepts will be the most successful? This presentation will provide an overview of the tradeoffs between LIDAR and competing sensor technologies (camera and radar); this discussion will reinforce the need for sensor fusion. We will also discuss the types of improvements that are necessary for each sensor technology. The presentation will summarize and compare various LIDAR designs (mechanical, flash, MEMS-mirror based, optical phased array, and FMCW, i.e., frequency modulated continuous wave) and then discuss each LIDAR concept's future outlook. Finally, there will be a quick review of guidelines for selecting photonic components such as photodetectors, light sources, and MEMS mirrors.
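
As background on the last design listed: an FMCW LIDAR transmits a linear frequency chirp, mixes the delayed return with the transmitted signal, and reads range off the resulting beat frequency, R = c * f_beat * T / (2B) for chirp bandwidth B and duration T. A quick numeric check:

    C = 3e8  # speed of light, m/s

    def fmcw_range(beat_hz, chirp_bandwidth_hz, chirp_duration_s):
        """Range from FMCW beat frequency: the return is delayed by
        tau = 2R/c, so the instantaneous TX-RX frequency difference is
        f_beat = (B / T) * tau, giving R = c * f_beat * T / (2 * B)."""
        return C * beat_hz * chirp_duration_s / (2 * chirp_bandwidth_hz)

    # Example: a 1 GHz chirp over 100 us; a 10 MHz beat tone then
    # corresponds to a target at 150 m.
    print(fmcw_range(10e6, 1e9, 100e-6))  # -> 150.0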

Date and Time: 
Wednesday, May 30, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: LiDAR Technology for Autonomous Vehicles

Topic: 
LiDAR Technology for Autonomous Vehicles
Abstract / Description: 

LiDAR is a key sensor for autonomous vehicles that enables them to understand their surroundings in three dimensions. I will discuss the evolution of LiDAR, and describe various LiDAR technologies currently being developed. These include rotating sensors, MEMS and optical phased array scanning devices, flash detector arrays, and single-photon avalanche detectors. Requirements for autonomous vehicles are very challenging, and each technology has advantages and disadvantages that will be discussed. The architecture of the LiDAR also affects how it fits into the overall vehicle architecture. Image fusion with other sensors, including radar, cameras, and ultrasound, will be part of the overall solution. Other LiDAR applications, including non-automotive transportation, mining, precision agriculture, UAVs, mapping, surveying, and security, will also be described.
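
As background on the rotating-sensor design: each return is natively a (range, azimuth, elevation) triple, while downstream perception typically works on Cartesian point clouds, so a first processing step is the spherical-to-Cartesian conversion sketched below (the 16-beam sweep is synthetic):

    import numpy as np

    def spherical_to_cartesian(ranges_m, azimuth_rad, elevation_rad):
        """Convert rotating-LiDAR returns (range, azimuth, elevation)
        into Cartesian (x, y, z) points."""
        x = ranges_m * np.cos(elevation_rad) * np.cos(azimuth_rad)
        y = ranges_m * np.cos(elevation_rad) * np.sin(azimuth_rad)
        z = ranges_m * np.sin(elevation_rad)
        return np.stack([x, y, z], axis=-1)

    # Toy sweep: 16 beams at fixed elevations, 360 azimuth steps each.
    elev = np.deg2rad(np.linspace(-15, 15, 16))
    az = np.deg2rad(np.arange(360))
    az_grid, elev_grid = np.meshgrid(az, elev)
    r = np.full(az_grid.shape, 20.0)  # pretend every return is at 20 m
    cloud = spherical_to_cartesian(r, az_grid, elev_grid)
    print(cloud.reshape(-1, 3).shape)  # (5760, 3)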

Date and Time: 
Wednesday, May 23, 2018 - 4:30pm
Venue: 
Packard 101
