
SCIEN Talk

Monitorless Workspaces and Operating Rooms of the Future: Virtual/Augmented Reality through Multiharmonic Lock-In Amplifiers [SCIEN Talk]

Topic: 
Monitorless Workspaces and Operating Rooms of the Future: Virtual/Augmented Reality through Multiharmonic Lock-In Amplifiers
Abstract / Description: 

In my childhood I invented a new kind of lock-in amplifier and used it as the basis for the world's first wearable augmented reality computer (http://wearcam.org/par). This allowed me to see radio waves, sound waves, and electrical signals inside the human body, all aligned perfectly with the physical space in which they were present. I built this equipment into special electric eyeglasses that automatically adjusted their convergence and focus to match their surroundings. By shearing the spacetime continuum, one obtains a stroboscopic view in coordinates in which the speed of light, sound, or wave propagation is exactly zero (http://wearcam.org/kineveillance.pdf) or greatly slowed, making these signals visible to radio engineers, sound engineers, neurosurgeons, and the like. See the attached picture of a violin mounted on the desk in my office at Meta, where we're creating the future of computing based on Human-in-the-Loop Intelligence (https://en.wikipedia.org/wiki/Humanistic_intelligence).
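For readers unfamiliar with the technique the talk is built around, the sketch below shows the basic lock-in idea in Python: multiply a noisy measurement by reference waves at a frequency of interest and its harmonics, then low-pass filter to recover amplitudes and phases that are invisible in the raw signal. This is a generic illustration under assumed parameters, not the speaker's multiharmonic instrument; every name and number in it is hypothetical.

```python
import numpy as np

def lockin(signal, fs, f_ref, harmonics=(1, 2, 3)):
    """Toy multiharmonic lock-in: estimate amplitude and relative phase of
    `signal` at f_ref and its harmonics by demodulating against reference
    waves and low-pass filtering (here, simply averaging)."""
    t = np.arange(len(signal)) / fs
    results = {}
    for k in harmonics:
        i = np.mean(signal * np.cos(2 * np.pi * k * f_ref * t))  # in-phase component
        q = np.mean(signal * np.sin(2 * np.pi * k * f_ref * t))  # quadrature component
        results[k] = (2 * np.hypot(i, q), np.arctan2(q, i))      # (amplitude, relative phase)
    return results

# Example: a 10 Hz wave with a weaker 2nd harmonic, buried in strong noise.
rng = np.random.default_rng(0)
fs, f0 = 1000.0, 10.0
t = np.arange(0, 5, 1 / fs)
x = 1.0 * np.cos(2 * np.pi * f0 * t) + 0.3 * np.cos(2 * np.pi * 2 * f0 * t + 0.5)
x += rng.normal(scale=2.0, size=t.shape)
print(lockin(x, fs, f0))  # first two amplitudes come out near 1.0 and 0.3 despite the noise
```

Scanning such a measurement over space (or sweeping the reference phase) is what turns per-point lock-in readings into the spatially aligned wave overlays described in the abstract.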

 

More Information: http://weartech.com/bio.htm

Date and Time: 
Wednesday, April 19, 2017 - 4:30pm
Venue: 
Packard 101

Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision [SCIEN]

Topic: 
Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision
Abstract / Description: 

Imaging has become an essential part of how we communicate with each other, how autonomous agents sense the world and act independently, and how we research chemical reactions and biological processes. Today's imaging and computer vision systems, however, often fail in critical scenarios, for example in low light or in fog. This is due to ambiguity in the captured images, introduced partly by imperfect capture systems, such as cellphone optics and sensors, and partly inherent in the signal itself before measurement, such as photon shot noise. This ambiguity makes imaging with conventional cameras challenging, e.g. low-light cellphone imaging, and it makes high-level computer vision tasks difficult, such as scene segmentation and understanding.

In this talk, I will present several examples of algorithms that computationally resolve this ambiguity and make sensing and vision systems robust. These methods rely on three key ingredients: accurate probabilistic forward models, learned priors, and efficient large-scale optimization methods. In particular, I will show how to achieve better low-light imaging using cellphones (beating Google's HDR+), and how to classify images at 3 lux (substantially outperforming very deep convolutional networks, such as the Inception-v4 architecture). Using a similar methodology, I will discuss ways to miniaturize existing camera systems by designing ultra-thin, focus-tunable diffractive optics. Finally, I will present exotic new imaging modalities that enable applications at the forefront of vision and imaging, such as seeing through scattering media and imaging objects outside the direct line of sight.
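To make those three ingredients concrete, here is a minimal, hypothetical sketch (not the speaker's actual method): low-light recovery posed as maximum a posteriori estimation under a Poisson photon-count forward model, with a simple quadratic smoothness term standing in for a learned prior, solved by plain gradient descent. All parameters are made up for illustration.

```python
import numpy as np

def map_denoise_poisson(y, gain, lam=5.0, step=1e-3, iters=2000):
    """Toy MAP estimate of a non-negative signal x from photon counts
    y ~ Poisson(gain * x), with a quadratic smoothness prior on x.
    Minimizes  sum(gain*x - y*log(gain*x)) + lam * sum(diff(x)**2)."""
    x = np.maximum(y / gain, 1e-3)                    # initialize from the raw counts
    for _ in range(iters):
        grad_data = gain - y / np.maximum(x, 1e-6)    # gradient of the Poisson NLL
        d = np.diff(x)
        grad_prior = np.zeros_like(x)
        grad_prior[:-1] -= 2 * lam * d                # gradient of the smoothness term
        grad_prior[1:] += 2 * lam * d
        x = np.maximum(x - step * (grad_data + grad_prior), 1e-6)
    return x

# Example: a smooth 1D "scene" imaged at a few photons per pixel.
rng = np.random.default_rng(0)
truth = 0.5 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, 200))
gain = 3.0
counts = rng.poisson(gain * truth)
est = map_denoise_poisson(counts, gain)
print("raw MSE:", np.mean((counts / gain - truth) ** 2))
print("MAP MSE:", np.mean((est - truth) ** 2))       # should be noticeably lower
```

The same pattern (a physically accurate noise model, a prior over plausible scenes, and an optimizer that combines them) carries over when the prior is learned and the optimizer must scale to megapixel images.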

Date and Time: 
Wednesday, April 12, 2017 - 4:30pm
Venue: 
Packard 101

Workshop on Augmented and Mixed Reality [SCIEN]

Topic: 
Workshop on Augmented and Mixed Reality
Abstract / Description: 

This workshop will bring together scientists and engineers who are advancing sensor technologies, computer vision, machine learning, head-mounted displays, and our understanding of human vision, and developers who are creating novel applications for augmented and mixed reality in retail, education, science, and medicine.

Date and Time: 
Thursday, May 11, 2017 (All day)
Venue: 
Tresidder Union

Practical Computer Vision for Self-Driving Cars [SCIEN]

Topic: 
Practical Computer Vision for Self-Driving Cars
Abstract / Description: 

Cruise is developing and testing a fleet of self-driving cars on the streets of San Francisco. Getting these cars to drive is a hard engineering and science problem. This talk explains roughly how self-driving cars work and how computer vision, from camera hardware to deep learning, helps make a self-driving car go.

More Information: https://www.getcruise.com/
See also http://www.theverge.com/2017/1/19/14327954/gm-self-driving-car-cruise-chevy-bolt-video

Date and Time: 
Wednesday, April 5, 2017 - 4:30pm
Venue: 
Packard 101

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g. short duration, small-scale, unrepresentative subjects, simplistic design, etc.) that limit their external validity. In this talk I describe how the web in general—and crowdsourcing sites like Amazon's Mechanical Turk in particular—allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101

ARRIScope - A new era in surgical microscopy [SCIEN]

Topic: 
ARRIScope - A new era in surgical microscopy
Abstract / Description: 

The continuous increase in performance and the versatility of ARRI's digital motion picture camera systems led to our development of the first fully digital stereoscopic operating microscope, the ARRISCOPE. For the last 18 months, multiple units have been used in clinical trials at renowned otology clinics in Germany.
During our presentation we will cover the obstacles, initial applications, and future potential of 3D camera-based surgical microscopes, and give insight into the technical preconditions and advantages of the digital imaging chain. To conclude, we will demonstrate examples of different surgical procedures recorded with the ARRISCOPE, as well as near-future augmented and virtual reality 3D applications.

More Information: http://www.arrimedical.com/

 

Date and Time: 
Wednesday, March 22, 2017 - 4:30pm
Venue: 
Packard 101

Breaking the Barriers to True Augmented Reality [SCIEN]

Topic: 
Breaking the Barriers to True Augmented Reality
Abstract / Description: 

In 1950, Alan Turing introduced the Turing Test, an essential concept in the philosophy of Artificial Intelligence (AI). He proposed an "imitation game" to test the sophistication of AI software. Similar tests have been suggested for fields including Computer Graphics and Visual Computing. In this talk, we will propose an Augmented Reality Turing Test (ARTT).

Augmented Reality (AR) embeds spatially registered computer graphics in the user's view in real time. This capability can serve many purposes; for example, AR hands can demonstrate manual repair steps to a mechanic. To pass the ARTT, we must create AR objects that are indistinguishable from real objects. Ray Kurzweil bet USD 20,000 that the Turing Test would be passed by 2029. We think that the ARTT can be passed significantly earlier.

We will discuss the grand challenges for passing the ARTT, including: calibration, localization & tracking, modeling, rendering, display technology, and multimodal AR. We will also show examples from our previous and current work at Nara Institute of Science and Technology in Japan.

Date and Time: 
Tuesday, March 14, 2017 - 4:30pm
Venue: 
Clark Center Auditorium

The AR/VR Renaissance: promises, disappointments, unsolved problems [SCIEN]

Topic: 
The AR/VR Renaissance: promises, disappointments, unsolved problems
Abstract / Description: 

Augmented and Virtual Reality have been hailed as "the next big thing" several times in the past 25 years. Some are predicting that VR will be the next computing platform, or at least the next platform for social media. Others worry that today's VR systems are closer to the 1990s Apple Newton than the 2007 Apple iPhone. This talk will feature a short, personal history of AR and VR, a survey of some current work, sample applications, and remaining problems. Current work with encouraging results includes 3D acquisition of dynamic, populated spaces; compact and wide field-of-view AR displays; low-latency and high-dynamic-range AR display systems; and AR lightfield displays that may reduce the accommodation-vergence conflict.

More Information: http://henryfuchs.web.unc.edu/

Date and Time: 
Wednesday, March 1, 2017 - 4:30pm
Venue: 
Packard 101

First-Photon Imaging and Other Imaging with Few Photons [SCIEN]

Topic: 
First-Photon Imaging and Other Imaging with Few Photons
Abstract / Description: 

LIDAR systems use single-photon detectors to enable long-range reflectivity and depth imaging. By exploiting an inhomogeneous Poisson process observation model and the typical structure of natural scenes, first-photon imaging demonstrates the possibility of accurate LIDAR with only one detected photon per pixel, where half of the detections are due to (uninformative) ambient light. I will explain the simple ideas behind first-photon imaging. Then I will touch upon related subsequent works that mitigate the limitations of detector arrays, withstand 25 times more ambient light, allow for unknown ambient light levels, and capture multiple depths per pixel.
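As a rough illustration of that observation model (a toy sketch with invented parameters, not the authors' implementation): each laser pulse produces a detection with a probability set by the pixel's reflectivity plus a constant ambient rate, the pulse index of the first detection is a geometric random variable, and reflectivity can be recovered by minimizing the resulting negative log-likelihood plus a spatial smoothness penalty that stands in for the scene-structure regularization used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scene: a reflectivity ramp in [0.2, 1.0] on a 64x64 grid.
a_true = 0.2 + 0.8 * (np.indices((64, 64)).sum(0) / 126.0)

eta, bkg = 0.05, 0.05                 # signal photons/pulse per unit reflectivity; ambient rate
p_true = 1.0 - np.exp(-(eta * a_true + bkg))

# "First-photon" data: the number of pulses until the first detection at each pixel,
# i.e. a single detected photon per pixel, many of them due to ambient light.
n = rng.geometric(p_true)

# Naive pixelwise estimate for comparison (extremely noisy).
a_naive = np.clip((1.0 / n - bkg) / eta, 0.0, 1.0)

def laplacian(img):
    """Gradient of a sum-of-squared-neighbor-differences penalty (periodic edges for brevity)."""
    return (4 * img
            - np.roll(img, 1, 0) - np.roll(img, -1, 0)
            - np.roll(img, 1, 1) - np.roll(img, -1, 1))

# Penalized ML: geometric negative log-likelihood (n-1)*lam - log(1 - exp(-lam)),
# with lam = eta*a + bkg, plus a smoothness prior, minimized by projected gradient descent.
a = np.full_like(a_true, 0.5)
reg, step = 2.0, 0.02
for _ in range(3000):
    lam = eta * a + bkg
    grad_nll = eta * ((n - 1) - 1.0 / np.expm1(lam))    # d/da of the geometric NLL
    a = np.clip(a - step * (grad_nll + 2 * reg * laplacian(a)), 0.0, 1.0)

print("pixelwise RMSE:  ", np.sqrt(np.mean((a_naive - a_true) ** 2)))
print("regularized RMSE:", np.sqrt(np.mean((a - a_true) ** 2)))
```

With only one detection per pixel the pixelwise estimate is dominated by noise; nearly all of the recovered structure comes from combining the per-pulse statistical model with the assumption that reflectivity varies smoothly across the scene.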

Date and Time: 
Wednesday, February 22, 2017 - 4:30pm
Venue: 
Packard 101
