
SCIEN Talk

SCIEN & EE 292E: Pushing the Limits of Fluorescence Microscopy with adaptive imaging and machine learning

Topic: 
Pushing the Limits of Fluorescence Microscopy with adaptive imaging and machine learning
Abstract / Description: 

Fluorescence microscopy lets biologists see and understand the intricate machinery at the heart of living systems and has led to numerous discoveries. Any technological progress towards improving image quality would extend the range of possible observations and would consequently open up the path to new findings. I will show how modern machine learning and smart robotic microscopes can push the boundaries of observability. One fundamental obstacle in microscopy takes the form of a trade-off between imaging speed, spatial resolution, light exposure, and imaging depth. We have shown that deep learning can circumvent these physical limitations: microscopy images can be restored even if 60-fold fewer photons are used during acquisition, isotropic resolution can be achieved even with a 10-fold under-sampling along the axial direction, and diffraction-limited structures can be resolved at 20-times higher frame rates compared to state-of-the-art methods. Moreover, I will demonstrate how smart microscopy techniques can achieve the full optical resolution of light-sheet microscopes — instruments capable of capturing the entire developmental arc of an embryo from a single cell to a fully formed motile organism. Our instrument improves spatial resolution and signal strength two to five-fold, recovers cellular and sub-cellular structures in many regions otherwise not resolved, adapts to spatiotemporal dynamics of genetically encoded fluorescent markers and robustly optimises imaging performance during large-scale morphogenetic changes in living organisms.
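
The deep-learning restoration results mentioned above rely on supervised training with paired acquisitions. As a rough illustration only (not the speaker's actual model or data), the sketch below shows a tiny residual CNN trained to map low-photon images to their high-photon counterparts; the architecture, layer sizes and stand-in data are assumptions.

```python
# Minimal sketch (not the speaker's model): a small residual CNN trained to restore
# high-SNR microscopy images from low-photon acquisitions, given paired training data.
import torch
import torch.nn as nn

class RestorationNet(nn.Module):
    """Tiny residual CNN: predicts a correction that is added to the noisy input."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # residual restoration

def train_step(model, optimizer, low_photon, high_photon):
    """One supervised step on a batch of paired (low-photon, high-photon) images."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(low_photon), high_photon)
    loss.backward()
    optimizer.step()
    return loss.item()

model = RestorationNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Stand-in data: in practice these would be registered low/high photon acquisitions.
low = torch.rand(8, 1, 64, 64)
high = torch.rand(8, 1, 64, 64)
print(train_step(model, optimizer, low, high))
```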

Date and Time: 
Wednesday, May 16, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Advances in automotive image sensors

Topic: 
Advances in automotive image sensors
Abstract / Description: 

In this talk I present recent advances in 2D and 3D image sensors for automotive applications such as rear-view cameras, surround-view cameras, ADAS cameras and in-cabin driver monitoring cameras. This includes developments in high dynamic range image capture, LED flicker mitigation, high frame rate capture, global shutter, near-infrared sensitivity and range imaging. I will also describe sensor developments for short-range and long-range LIDAR systems.
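
One of the topics listed, high dynamic range image capture, is commonly achieved by merging several exposures of different lengths. The sketch below is an illustrative (not vendor-specific) merge: saturated pixels are down-weighted so that shorter exposures fill them in; the exposure times and weighting scheme are assumptions.

```python
# Illustrative sketch of multi-exposure HDR merging. Exposure times and weights
# are made up; real automotive sensors often use split-pixel or staggered readout.
import numpy as np

def merge_hdr(frames, exposure_times, saturation=0.95):
    """Merge linear-response frames taken at different exposures into one radiance map.

    frames: list of float arrays in [0, 1]; exposure_times: seconds per frame.
    Saturated pixels are down-weighted so the shorter exposures dominate there.
    """
    numerator = np.zeros_like(frames[0])
    denominator = np.zeros_like(frames[0])
    for frame, t in zip(frames, exposure_times):
        weight = np.where(frame < saturation, 1.0, 1e-3)  # distrust clipped pixels
        numerator += weight * frame / t                    # per-frame radiance estimate
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)

# Toy scene: a bright region that clips in the long exposure.
scene = np.array([[0.01, 0.2, 5.0]])                # "true" radiance
times = [1.0, 0.1, 0.01]
frames = [np.clip(scene * t, 0, 1) for t in times]
print(merge_hdr(frames, times))                     # recovers roughly [0.01, 0.2, 5.0]
```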

Date and Time: 
Wednesday, May 9, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: 3D single-molecule super-resolution microscopy using a tilted light sheet

Topic: 
3D single-molecule super-resolution microscopy using a tilted light sheet
Abstract / Description: 

To obtain a complete picture of subcellular structures, cells must be imaged with high resolution in all three dimensions (3D). In this talk, I will present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super-localization of single molecules as well as 3D super-resolution imaging in thick cells. Here the axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The result is simple and flexible 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D super-resolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina, using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSFs for fiducial bead tracking and live axial drift correction. We think that TILT3D will in the future become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.
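
The key idea that axial position is encoded in PSF shape can be illustrated with a toy model: two Gaussian lobes whose orientation rotates with depth, loosely in the spirit of a double-helix PSF. The sketch below is only illustrative; the real PSF is produced by an engineered phase mask, and the linear angle-to-depth mapping and all parameters here are assumptions.

```python
# Toy sketch of encoding depth in PSF shape: two Gaussian lobes whose rotation angle
# varies (here, linearly) with the emitter's axial position z. The real double-helix
# PSF comes from an engineered phase mask; this mapping is purely illustrative.
import numpy as np

def double_helix_psf(z_um, size=33, lobe_sep=6.0, sigma=1.5, deg_per_um=30.0):
    """Render a 2D PSF whose lobe orientation encodes z (in micrometres)."""
    theta = np.deg2rad(deg_per_um * z_um)
    centre = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    psf = np.zeros((size, size))
    for sign in (+1, -1):
        cx = centre + sign * (lobe_sep / 2) * np.cos(theta)
        cy = centre + sign * (lobe_sep / 2) * np.sin(theta)
        psf += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def estimate_z(psf, deg_per_um=30.0):
    """Recover z from the orientation of the lobe pair via second moments."""
    yy, xx = np.mgrid[0:psf.shape[0], 0:psf.shape[1]]
    cx, cy = (psf * xx).sum(), (psf * yy).sum()
    cov_xy = (psf * (xx - cx) * (yy - cy)).sum()
    cov_xx = (psf * (xx - cx) ** 2).sum()
    cov_yy = (psf * (yy - cy) ** 2).sum()
    theta = 0.5 * np.arctan2(2 * cov_xy, cov_xx - cov_yy)
    return np.rad2deg(theta) / deg_per_um

# Orientation is unambiguous only over a limited angular range (+/- 90 degrees here).
print(estimate_z(double_helix_psf(z_um=1.0)))  # ~1.0
```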

Date and Time: 
Wednesday, May 2, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Video Coding before and beyond HEVC

Topic: 
Video Coding before and beyond HEVC
Abstract / Description: 

We enjoy video content in many situations. Although video is already compressed to between 1/10 and 1/1000 of its original size, video traffic over the internet has been reported to be growing by 31% per year and is projected to account for 82% of all internet traffic by 2020. This is why better compression technology is in strong demand. ITU-T/ISO/IEC jointly developed the latest video coding standard, High Efficiency Video Coding (HEVC), in 2013, and they are about to start work on the next-generation standard. Corresponding proposals will be evaluated at the April 2018 meeting in San Diego, just a week before this talk.

In this talk, we will first review the advances in video coding technology over the last several decades. We will then present the latest topics, including a report from the San Diego meeting and some new approaches such as deep learning techniques.
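
To make the quoted growth figure concrete, the short sketch below compounds 31% annual growth over a few years; the compounding model itself is an assumption, and the only figure used is the one quoted in the abstract.

```python
# Back-of-the-envelope: what 31% annual growth in video traffic implies if it
# compounds year over year (the compounding model itself is an assumption).
growth = 1.31
for years in (1, 3, 5):
    print(f"after {years} year(s): x{growth ** years:.2f}")
# after 1 year(s): x1.31
# after 3 year(s): x2.25
# after 5 year(s): x3.86
```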

Date and Time: 
Wednesday, April 25, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Transport-Aware Cameras

Topic: 
Transport-Aware Cameras
Abstract / Description: 

Conventional cameras record all light falling onto their sensor, regardless of its source or its 3D path to the camera. In this talk I will present an emerging family of coded-exposure video cameras that can be programmed to record just a fraction of the light coming from an artificial source---be it a common street lamp or a programmable projector---based on the light path's geometry or timing. Live video from these cameras offers a very unconventional view of our everyday world, in which refraction and scattering effects that cannot be noticed with the naked eye become apparent, and the flicker of electric lights can be turned into a powerful cue for analyzing the electrical grid, from room to city.

I will discuss the unique optical properties and power efficiency of these "transport-aware cameras" through three case studies: the ACam for analyzing the electrical grid, EpiScan3D for robust 3D scanning, and our progress toward designing a computational CMOS sensor for coded two-bucket imaging, a novel capability that promises much more flexible and powerful transport-aware cameras compared to existing off-the-shelf solutions.
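
The coded two-bucket idea mentioned above can be illustrated with a toy simulation: a binary code, synchronised with a controllable light source, decides which of two buckets integrates light in each sub-frame, and a simple combination of the buckets separates source light from ambient light. The code pattern and signal levels below are illustrative assumptions, not the actual sensor design.

```python
# Toy sketch of coded two-bucket imaging: each sub-frame, a known binary code decides
# which bucket integrates light; the source is strobed in sync with the code, so
# combining the buckets separates source light from ambient light.
import numpy as np

code = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # bucket-select code, known to us
ambient = 5.0                                # ambient photons per sub-frame (illustrative)
signal = 2.0                                 # source photons per sub-frame when strobed on

bucket0 = bucket1 = 0.0
for c in code:
    light = ambient + signal * c             # source only contributes when strobed on
    if c:                                    # the code decides which bucket integrates
        bucket1 += light
    else:
        bucket0 += light

n1 = code.sum()
n0 = len(code) - n1
ambient_estimate = bucket0 / n0                      # bucket 0 saw ambient only
source_estimate = bucket1 / n1 - ambient_estimate    # bucket 1 saw ambient + source
print(ambient_estimate, source_estimate)             # 5.0 and 2.0
```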

Date and Time: 
Wednesday, April 18, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Light-field Display Architecture and the Heterogeneous Display Ecosystem FoVI3D

Topic: 
Light-field Display Architecture and the Heterogeneous Display Ecosystem FoVI3D
Abstract / Description: 

Human binocular vision and acuity, and the accompanying 3D retinal processing of the human eye and brain, are specifically designed to promote situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the scene and as a result reduces the cognitive load of analyzing and collaborating on complex tasks.

A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows for perspective correct visualization within the display's projection volume. Binocular disparity, occlusion, specular highlights and gradient shading, and other expected depth cues are correct from the viewer's perspective as in the natural real-world light-field.

Light-field displays are no longer a science fiction concept and a few companies are producing impressive light-field display prototypes. This presentation will review:
· The application agnostic light-field display architecture being developed at FoVI3D.
· General light-field display properties and characteristics such as field of view, directional resolution, and their effect on the 3D aerial image.
· The computation challenge for generating high-fidelity light-fields.
· A display agnostic ecosystem.

Demo after the talk: The FoVI3D Light-field Display Developer Kit (LfD DK2) is a prototype wide field-of-view, full-parallax, monochrome light-field display capable of projecting approximately 100 million unique rays to fill a 9 cm x 9 cm x 9 cm projection volume. The particulars of the light-field compute, photonics subsystem and hogel optics will be discussed during the presentation.
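
A quick back-of-the-envelope estimate shows why the light-field compute challenge listed above is severe: the ray count is the product of spatial (hogel) and directional resolution. The resolutions, bit depth and refresh rate below are illustrative assumptions, chosen only so the total lands near the ray count quoted for the LfD DK2.

```python
# Rough estimate of the light-field compute/bandwidth burden. All values here are
# illustrative assumptions, not FoVI3D's specifications.
hogels = 256 * 256            # spatial resolution (one hogel per display location)
views_per_hogel = 40 * 40     # directional resolution (rays leaving each hogel)
rays = hogels * views_per_hogel
bits_per_ray = 8              # monochrome, 8-bit
refresh_hz = 30

print(f"rays per frame: {rays:,}")                                   # 104,857,600
print(f"raw data rate: {rays * bits_per_ray * refresh_hz / 8e9:.1f} GB/s")
```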

Date and Time: 
Wednesday, April 11, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN presents Video-based Reconstruction of the Real World in Motion

Topic: 
Video-based Reconstruction of the Real World in Motion
Abstract / Description: 

New methods for capturing highly detailed models of moving real-world scenes with cameras, i.e., models of detailed deforming geometry, appearance, or even material properties, are becoming more and more important in many application areas. They are needed in visual content creation, for instance in visual effects, to build highly realistic models of virtual human actors. Furthermore, efficient, reliable and highly accurate dynamic scene reconstruction is nowadays an important prerequisite for many other application domains, such as human-computer and human-robot interaction, autonomous robotics and autonomous driving, virtual and augmented reality, 3D and free-viewpoint TV, immersive telepresence, and even video editing.

The development of dynamic scene reconstruction methods has been a long-standing challenge in computer graphics and computer vision. Recently, the field has seen important progress. New methods were developed that capture - without markers or scene instrumentation - rather detailed models of individual moving humans or general deforming surfaces from video recordings, and even capture simple models of appearance and lighting. However, despite this recent progress, the field is still at an early stage, and current technology is still starkly constrained in many ways. Many of today's state-of-the-art methods are still niche solutions that are designed to work under very constrained conditions, for instance: only in controlled studios, with many cameras, for very specific object types, for very simple types of motion and deformation, or at processing speeds far from real-time.

In this talk, I will present some of our recent works on detailed marker-less dynamic scene reconstruction and performance capture in which we advanced the state of the art in several ways. For instance, I will briefly show new methods for marker-less capture of the full body (like our VNECT approach) and hands that work in more general environments, and even in real-time and with one camera. I will then show some of our work on high-quality face performance capture and face reenactment. Here, I will also illustrate the benefits of both model-based and learning-based approaches and show how different ways to join the forces of the two open up new possibilities. Live demos included!

Date and Time: 
Wednesday, March 21, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Drone IoT Networks for Virtual Human Teleportation

Topic: 
Drone IoT Networks for Virtual Human Teleportation
Abstract / Description: 

Cyber-physical/human systems (CPS/CHS) are set to play an increasingly visible role in our lives, advancing research and technology across diverse disciplines. I am exploring novel synergies between three emerging CPS/CHS technologies of prospectively broad societal impact: virtual/augmented reality (VR/AR), the Internet of Things (IoT), and autonomous micro-aerial robots (UAVs). My long-term research objective is UAV-IoT-deployed ubiquitous VR/AR immersive communication that can enable virtual human teleportation to any corner of the world. Thereby, we can achieve a broad range of technological and societal advances that will enhance energy conservation, quality of life, and the global economy.

I am investigating fundamental problems at the intersection of signal acquisition and representation, communications and networking, (embedded) sensors and systems, and rigorous machine learning for stochastic control that arise in this context. I envision a future where UAV-IoT-deployed immersive communication systems will break existing barriers in remote sensing, monitoring, localization and mapping, navigation, and scene understanding. The presentation will outline some of my present and envisioned investigations. Interdisciplinary applications will be highlighted.

Date and Time: 
Wednesday, March 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Temporal coding of volumetric imagery

Topic: 
Temporal coding of volumetric imagery
Abstract / Description: 

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned or captured in parallel and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This talk describes systems and methods with which to efficiently detect and visualize image volumes by temporally encoding the extra dimensions' information into 2D measurements or displays. Some highlights of my research include video and 3D recovery from photographs, and true-3D augmented reality image display by time multiplexing. In the talk, I show how temporal optical coding can improve system performance, battery life, and hardware simplicity for a variety of platforms and applications.
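
The general recipe of temporally encoding an extra dimension into 2D measurements can be sketched as follows: each captured frame is a known coded mixture of several volume slices (for example, focal planes), and the slices are recovered by inverting the code. The code matrix and sizes below are illustrative assumptions, not the speaker's actual system.

```python
# Minimal sketch of temporally coding an extra image dimension into 2D measurements:
# each captured frame is a known coded mixture of volume slices, and the slices are
# recovered by inverting the code. Sizes and the code matrix are illustrative.
import numpy as np

rng = np.random.default_rng(1)
num_slices, h, w = 4, 16, 16
volume = rng.random((num_slices, h, w))      # stand-in "image volume" (e.g. focal stack)

# Temporal code: each captured 2D frame weights the slices differently.
# This binary code matrix is chosen to be invertible.
codes = np.array([[1, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 1]], dtype=float)

# Capture: every 2D frame is a coded sum over the slice dimension.
frames = np.einsum('fs,shw->fhw', codes, volume)

# Recovery: invert the known code pixel-wise to get the slices back.
recovered = np.einsum('sf,fhw->shw', np.linalg.inv(codes), frames)
print(np.allclose(recovered, volume))        # True
```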

Date and Time: 
Wednesday, March 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism

Topic: 
ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism
Abstract / Description: 

Computer-graphics engineers and vision scientists want to generate images that reproduce realistic depth-dependent blur. Current rendering algorithms take into account scene geometry, aperture size, and focal distance, and they produce photorealistic imagery as with a high-quality camera. But to create immersive experiences, rendering algorithms should aim instead for perceptual realism. In so doing, they should take into account the significant optical aberrations of the human eye. We developed a method that, by incorporating some of those aberrations, yields displayed images that produce retinal images much closer to the ones that occur in natural viewing. In particular, we create displayed images taking the eye's chromatic aberration into account. This produces different chromatic effects in the retinal image for objects farther or nearer than current focus. We call the method ChromaBlur. We conducted two experiments that illustrate the benefits of ChromaBlur. One showed that accommodation (eye focusing) is driven quite effectively when ChromaBlur is used and that accommodation is not driven at all when conventional methods are used. The second showed that perceived depth and realism are greater with imagery created by ChromaBlur than in imagery created conventionally. ChromaBlur can be coupled with focus-adjustable lenses and gaze tracking to reproduce the natural relationship between accommodation and blur in HMDs and other immersive devices. It can thereby minimize the adverse effects of vergence-accommodation conflicts.
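
The core rendering idea, different blur per colour channel depending on whether an object is nearer or farther than fixation, can be sketched as per-channel defocus blur offset by the eye's longitudinal chromatic aberration (LCA). The LCA offsets and the Gaussian blur model below are rough illustrative values, not the authors' renderer.

```python
# Sketch of the ChromaBlur idea: blur each colour channel by an amount set by the
# signed defocus of the object relative to fixation, offset per wavelength by the
# eye's longitudinal chromatic aberration. Offsets and blur model are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

# Approximate LCA of the eye relative to green (dioptres); illustrative numbers.
lca_offset = {'R': +0.35, 'G': 0.0, 'B': -0.80}

def chromatic_defocus_blur(image_rgb, object_defocus_d, pupil_mm=4.0, blur_gain=1.5):
    """Blur each channel according to its own defocus (in dioptres).

    object_defocus_d: dioptric distance of the object from where the eye is focused
    (positive = nearer than fixation). Blur width grows with |defocus| and pupil size.
    """
    out = np.empty_like(image_rgb)
    for i, ch in enumerate('RGB'):
        defocus = object_defocus_d + lca_offset[ch]          # channel-specific defocus
        sigma = blur_gain * abs(defocus) * (pupil_mm / 4.0)  # toy blur model (pixels)
        out[..., i] = gaussian_filter(image_rgb[..., i], sigma)
    return out

# An object 0.5 D beyond fixation: red ends up less blurred than blue in this model,
# giving the depth-dependent colour fringing the visual system can use as a focus cue.
img = np.zeros((64, 64, 3)); img[28:36, 28:36, :] = 1.0
print(chromatic_defocus_blur(img, object_defocus_d=-0.5).shape)
```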

Date and Time: 
Wednesday, February 28, 2018 - 4:30pm
Venue: 
Packard 101
