SCIEN Talk

Visual Vibration Analysis [SCIEN]

Topic: 
Visual Vibration Analysis
Abstract / Description: 

Davis will show how video can be a powerful way to measure physical vibrations. By relating the frequencies of subtle, often imperceptible changes in video to the vibrations of visible objects, we can reason about the physical properties of those objects and the forces that drive their motion. In this talk, he will show how this can be used to recover sound from silent video (Visual Microphone), estimate the material properties of visible objects (Visual Vibrometry), and learn enough about the physics of objects to create plausible image-space simulations (Dynamic Video).
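
To make the idea concrete, here is a toy sketch of the core principle, not the Visual Microphone pipeline itself (which analyzes local phase variations across scales and orientations rather than raw intensity): treat the mean intensity of an image patch over time as a 1-D signal and read vibration frequencies off its spectrum. The patch location and signal parameters below are illustrative.

```python
import numpy as np

def dominant_vibration_hz(frames, fps, patch=(slice(100, 140), slice(200, 240))):
    """frames: sequence of grayscale images (2-D arrays) sampled at fps frames/s.
    Returns the strongest non-DC frequency of the patch's mean intensity."""
    signal = np.array([f[patch].mean() for f in frames])
    signal = signal - signal.mean()               # remove the DC offset
    windowed = signal * np.hanning(len(signal))   # taper to reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]     # skip the zero-frequency bin

# Synthetic check: a patch whose brightness flickers at 12 Hz, filmed at 240 fps.
fps = 240
t = np.arange(480) / fps
frames = [np.full((256, 320), 0.5) + 0.01 * np.sin(2 * np.pi * 12.0 * ti) for ti in t]
print(dominant_vibration_hz(frames, fps))         # -> 12.0
```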

Date and Time: 
Wednesday, February 15, 2017 - 4:30pm
Venue: 
Packard 101

A Learned Representation for Artistic Style [SCIEN]

Topic: 
A Learned Representation for Artistic Style
Abstract / Description: 

The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window onto the structure of the learned representation of artistic style.
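
The embedding mechanism in the published paper of this title is conditional instance normalization: each style is a pair of per-channel scale and shift vectors, so a "point in embedding space" is just those parameters, and combining styles is a convex combination of them. A minimal NumPy sketch (shapes and style names below are made up):

```python
import numpy as np

def conditional_instance_norm(x, gamma, beta, eps=1e-5):
    """x: feature map of shape (C, H, W); gamma, beta: per-channel style params."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)              # normalize each channel
    return gamma[:, None, None] * x_hat + beta[:, None, None]

C, H, W = 64, 32, 32
x = np.random.randn(C, H, W)
styles = {s: (np.random.randn(C), np.random.randn(C)) for s in ("monet", "vangogh")}

# A new style as a point between two learned ones (alpha = 0.5 blends them).
alpha = 0.5
gamma = alpha * styles["monet"][0] + (1 - alpha) * styles["vangogh"][0]
beta = alpha * styles["monet"][1] + (1 - alpha) * styles["vangogh"][1]
y = conditional_instance_norm(x, gamma, beta)
```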

Date and Time: 
Wednesday, March 8, 2017 - 4:30pm
Venue: 
Packard 101

High-speed imaging meets single-cell analysis [SCIEN]

Topic: 
High-speed imaging meets single-cell analysis
Abstract / Description: 

High-speed imaging is an indispensable tool for blur-free observation and monitoring of fast transient dynamics in today's scientific research, industry, defense, and energy. The field of high-speed imaging has steadily grown since Eadweard Muybridge demonstrated motion-picture photography in 1878. High-speed cameras are commonly used for sports, manufacturing, collision testing, robotic vision, missile tracking, and fusion science, and are even available to professional photographers. Over the last few years, high-speed imaging has been shown to be highly effective for single-cell analysis – the study of individual biological cells among populations for identifying cell-to-cell differences and elucidating cellular heterogeneity invisible to population-averaged measurements. The marriage of these seemingly unrelated disciplines has been made possible by exploiting high-speed imaging's capability to acquire information-rich images at high frame rates, yielding a snapshot library of numerous cells in a short duration of time (with one cell per frame) that is useful for accurate statistical analysis of the cells. This is a paradigm shift in the field of high-speed imaging, since the approach is radically different from its traditional use in slow-motion analysis. In this talk, I will introduce a few different methods for high-speed imaging and their application to single-cell analysis for precision medicine and green energy.
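
The workflow the abstract describes (one cell per frame, one measurement per cell, then population statistics) can be sketched in a few lines; the feature and threshold below are hypothetical stand-ins, not a calibrated segmentation pipeline.

```python
import numpy as np

def cell_area(frame, threshold=0.5):
    """Crude per-frame feature: pixel count of the bright (cell) region."""
    return int((frame > threshold).sum())

rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(1000)]  # stand-in for camera frames
areas = np.array([cell_area(f) for f in frames])

# Cell-to-cell heterogeneity that a population-averaged assay would hide:
print(f"mean={areas.mean():.1f}  std={areas.std():.1f}  "
      f"outliers={(areas > areas.mean() + 3 * areas.std()).sum()}")
```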

Date and Time: 
Friday, January 27, 2017 - 4:30pm
Venue: 
Packard 101

Adversarial perceptual representation learning across diverse modalities and domains [SCIEN]

Topic: 
Adversarial perceptual representation learning across diverse modalities and domains
Abstract / Description: 

Learning of layered or "deep" representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data. New results in adversarial adaptive representation learning show how such methods can also excel when learning in sparse/weakly labeled settings across modalities and domains. I'll review state-of-the-art models for fully convolutional pixel-dense segmentation from weakly labeled input, and will discuss new methods for adapting models to new domains with few or no target labels for categories of interest. As time permits, I'll present recent long-term recurrent network models that learn cross-modal description and explanation, visuomotor robotic policies that adapt to new domains, and deep autonomous driving policies that can be learned from heterogeneous large-scale dashcam video datasets.
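
For readers unfamiliar with the adversarial-adaptation idea, here is a minimal PyTorch sketch in the spirit of gradient-reversal domain adaptation: a shared encoder is trained to classify labeled source data while fooling a domain discriminator, pushing its features toward domain invariance. Network sizes and names are illustrative, not the speaker's models.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients going back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
classifier = nn.Linear(64, 10)     # label head (labeled source data only)
discriminator = nn.Linear(64, 2)   # domain head (source vs. target)

src, tgt = torch.randn(32, 128), torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
feats = torch.cat([encoder(src), encoder(tgt)])
domains = torch.cat([torch.zeros(32, dtype=torch.long),
                     torch.ones(32, dtype=torch.long)])

# Classify labeled source data; confuse the domain discriminator on everything.
cls_loss = nn.functional.cross_entropy(classifier(encoder(src)), labels)
dom_loss = nn.functional.cross_entropy(
    discriminator(GradReverse.apply(feats, 1.0)), domains)
(cls_loss + dom_loss).backward()   # encoder gradients favor domain-invariant features
```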

Date and Time: 
Wednesday, February 8, 2017 - 4:30pm
Venue: 
Packard 101

Adaptive optics retinal imaging: more than just high-resolution [SCIEN]

Topic: 
Adaptive optics retinal imaging: more than just high-resolution
Abstract / Description: 

The majority of the cells in the retina do not reproduce, making early diagnosis of eye disease paramount. Through the improved resolution provided by correction of the eye's monochromatic aberrations, adaptive optics combined with conventional and novel imaging techniques reveals pathology at the cellular scale. When compared with existing clinical tools, the ability to visualize retinal cells and microscopic structures non-invasively represents a quantum leap in the potential for diagnosing and managing ocular, systemic, and neurological diseases. The presentation will first cover the adaptive optics technology itself and some of its unique technical challenges. This will be followed by a review of AO-enhanced imaging modalities applied to the study of the healthy and diseased eye, with particular focus on multiple-scattering imaging to reveal transparent retinal structures.

Date and Time: 
Wednesday, January 18, 2017 - 4:30pm
Venue: 
Packard 101

Designing and assessing near-eye displays to increase user inclusivity [SCIEN Talk]

Topic: 
Designing and assessing near-eye displays to increase user inclusivity
Abstract / Description: 

Recent years have seen impressive growth in near-eye display systems, which are the basis of most virtual and augmented reality experiences. There is, however, a unique set of challenges to designing a display system that is literally strapped to the user's face. With an estimated half of all adults in the United States requiring some level of visual correction, maximizing inclusivity for near-eye displays is essential. I will describe work that combines principles from optics, optometry, and visual perception to identify and address major limitations of near-eye displays, both for users with normal vision and for those who require common corrective lenses. I will also describe ongoing work assessing the potential for near-eye displays to assist people with less common visual impairments in performing day-to-day tasks.

Date and Time: 
Wednesday, January 11, 2017 - 4:30pm to 5:15pm
Venue: 
Packard 101

SCIEN Talk: Electronic augmentation of body functions

Topic: 
Electronic augmentation of body functions: progress in electro-neural interfaces
Abstract / Description: 

The electrical nature of neural signaling allows efficient bi-directional electrical communication with the nervous system. Currently, electro-neural interfaces are utilized for partial restoration of sensory functions such as hearing and sight, actuation of prosthetic limbs and restoration of tactile sensitivity, enhancement of tear secretion, and many other purposes. Deep brain stimulation helps control tremor in patients with Parkinson's disease, improves muscle control in dystonia, and aids in other neurological disorders. With technological advances and progress in our understanding of neural systems, these interfaces may allow not only restoration or augmentation of lost functions, but also expansion of our natural capabilities – sensory, cognitive, and others. I will review the state of the field and future directions of technological development.

Date and Time: 
Tuesday, December 6, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: Towards Socially-aware AI

Topic: 
Towards Socially-aware AI
Abstract / Description: 

Over the past sixty years, intelligent machines have made great progress in playing games, tagging images in isolation, and, recently, making decisions for self-driving vehicles. Despite these advancements, they are still far from making decisions in social scenes and effectively assisting humans in public spaces such as terminals, malls, campuses, or any crowded urban environment. To overcome these limitations, I claim that we need to empower machines with social intelligence, i.e., the ability to get along well with others and facilitate mutual cooperation. This is crucial to designing future generations of smart spaces that adapt to the behavior of humans for efficiency, or developing autonomous machines that assist in crowded public spaces (e.g., delivery robots or self-navigating Segways).

In this talk, I will present my work towards socially-aware machines that can understand human social dynamics and learn to forecast them. First, I will highlight the machine vision techniques behind understanding the behavior of more than 100 million individuals captured by multi-modal cameras in urban spaces. I will show how to use sparsity-promoting priors to extract meaningful information about human behavior. Second, I will introduce a new deep learning method to forecast human social behavior. The causality behind human behavior is an interplay between both observable and non-observable cues (e.g., intentions). For instance, when humans walk into crowded urban environments such as a busy train terminal, they obey a large number of (unwritten) common-sense rules and comply with social conventions. They typically avoid crossing groups and keep a personal distance from their surroundings. I will present detailed insights on how to learn these interactions from millions of trajectories. I will describe a new recurrent neural network that can jointly reason on correlated sequences and forecast human trajectories in crowded scenes. It opens new avenues of research in learning the causalities behind the world we observe. I will conclude my talk by mentioning some ongoing work in applying these techniques to social robots and the future generations of smart hospitals.
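
As a minimal illustration of recurrent trajectory forecasting, here is a PyTorch sketch that predicts a pedestrian's next step from observed positions; it omits the pooling of hidden states across neighboring people that a socially-aware model adds, and all shapes and names are illustrative.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Encode a history of (x, y) positions; predict the next-step offset."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, xy):            # xy: (batch, time, 2)
        out, _ = self.lstm(xy)
        return self.head(out[:, -1])  # (dx, dy) after the last observed step

model = TrajectoryLSTM()
history = torch.randn(8, 12, 2)       # 8 people, 12 observed positions each
target = torch.randn(8, 2)            # ground-truth next offset
loss = nn.functional.mse_loss(model(history), target)
loss.backward()
```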

More Information: http://web.stanford.edu/~alahi/

Date and Time: 
Tuesday, November 29, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: Designing a smart wearable camera for blind and visually impaired people

Topic: 
Designing a smart wearable camera for blind and visually impaired people
Abstract / Description: 

Horus Technology was founded in July 2014 with the goal of creating a smart wearable camera for blind and visually impaired people, featuring intelligent algorithms that can understand the environment around the user and describe it aloud. Two years later, Horus has a working prototype being tested by a number of blind people in Europe and North America. Harnessing the power of portable GPUs, stereo vision, and deep learning algorithms, Horus can read text in different languages, learn and recognize faces and objects, and identify obstacles. In designing a wearable device, we had to face a number of challenges and difficult choices. We will describe our system and our design choices for both software and hardware, and we will end with a short demo of Horus's capabilities.

Date and Time: 
Tuesday, November 15, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: Quantum dot-based image sensors for cutting-edge commercial multispectral cameras

Topic: 
Quantum dot-based image sensors for cutting-edge commercial multispectral cameras
Abstract / Description: 

This work presents the development of a quantum dot-based photosensitive film engineered to be integrated on standard CMOS process wafers. It enables the design of exceptionally high-performance, reliable image sensors. Quantum dot solids absorb light much more strongly than typical silicon-based photodiodes do, and with the ability to tune the effective material bandgap, quantum dot-based imagers enable higher quantum efficiency over extended spectral bands, in both the visible and IR regions of the spectrum. Moreover, a quantum dot-based image sensor enables desirable functions such as ultra-small pixels with low crosstalk, high full-well capacity, global shutter, and wide dynamic range at a relatively low manufacturing cost. At InVisage, we have optimized the manufacturing process flow and are now able to produce high-end image sensors for both visible and NIR in quantity.

The Stanford Center for Image Systems Engineering (SCIEN) is a partnership between the Stanford School of Engineering and technology companies developing imaging systems for the enhancement of human communication.

Date and Time: 
Wednesday, November 9, 2016 - 4:30pm to 5:15pm
Venue: 
Packard 101
