EE Student Information

The Department of Electrical Engineering supports Black Lives Matter.


EE Student Information, Spring Quarter through Academic Year 2020-2021: FAQs and Updated EE Course List.

Updates will be posted on this page, as well as emailed to the EE student mail list.

Please see Stanford University Health Alerts for course and travel updates.

As always, use your best judgment and consider your own and others' well-being at all times.


image of profs Wetzstein, Fan, Miller
December 2020

Professors Gordon Wetzstein, Shanhui Fan, and David A. B. Miller collaborated with faculty at several other institutions to publish "Inference in artificial intelligence with deep optics and photonics".

Abstract: Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.

Additional authors are Aydogan Ozcan, Sylvain Gigan, Dirk Englund, Marin Soljačić, Cornelia Denz, and Demetri Psaltis.



image of prof Amin Arbabian
December 2020

Professor Amin Arbabian, Aidan Fitzpatrick (PhD candidate), and Ajay Singhvi (PhD candidate) have developed an airborne method for imaging underwater objects by combining light and sound to break through the seemingly impassable barrier at the interface of air and water.

The researchers envision their hybrid optical-acoustic system one day being used to conduct drone-based biological marine surveys from the air, carry out large-scale aerial searches of sunken ships and planes, and map the ocean depths with a similar speed and level of detail as Earth's landscapes. Their "Photoacoustic Airborne Sonar System" is detailed in a recent study published in the journal IEEE Access.

"Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth's landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water," reports Amin. "Our goal is to develop a more robust system which can image even through murky water."
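The article leaves out PASS's reconstruction details, but the underlying sonar arithmetic is standard physics: an echo's round-trip time and the speed of sound in seawater (roughly 1500 m/s) give the depth. A minimal sketch, with an illustrative timing value:

```python
# Speed of sound in seawater, approximately 1500 m/s (a standard physics value).
SOUND_SPEED_WATER = 1500.0  # meters per second

def depth_from_echo(round_trip_seconds):
    """Depth of a reflecting object: the sound covers the distance twice."""
    return SOUND_SPEED_WATER * round_trip_seconds / 2

# An echo returning after 40 ms implies an object about 30 m down.
print(depth_from_echo(0.040))  # 30.0
```

This is why acoustics succeeds where light fails: sound attenuates far less in water than laser light, so round-trip measurements remain usable at depth.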


Excerpted from "Stanford engineers combine light and sound to see underwater", Stanford News, November 30, 2020



image of prof Kwabena Boahen
November 2020

Professor Kwabena Boahen builds highly efficient "neuromorphic" supercomputers modeled on the human brain.

He hopes they will drive the artificial intelligence future. He uses an analogy when describing the goal of his work: "It's LA versus Manhattan."

He means structurally. Today's chips are two-dimensional: flat and spread out, like LA. Tomorrow's chips will be stacked, like the floors of skyscrapers on a New York block. In this analogy, the electrons shuttling data back and forth are the commuters: the shorter the distances they travel, and the more they accomplish before traveling home, the greater the leaps in energy efficiency. The stakes could not be higher: Kwabena says the lean chips he imagines could prove tens of thousands of times less expensive to operate than today's power hogs.

To learn how it works, listen in as Kwabena Boahen describes neuromorphic computing to fellow bioengineer Russ Altman in a recent episode of Stanford Engineering's The Future of Everything podcast.


Excerpted from Stanford Engineering's Research & Ideas

image of prof James Zou
November 2020

Professor James Zou says that as algorithms compete for clicks and the associated user data, they become more specialized for the subpopulations that gravitate to their sites. This can have serious implications for both companies and consumers.

This dynamic is described in the paper "Competing AI: How does competition feedback affect machine learning?", written by Antonio Ginart (EE PhD candidate), Eva Zhang, and Professor James Zou.

James' team recognized that there's a feedback dynamic at play if companies' machine learning algorithms are competing for users or customers and at the same time using customer data to train their model. "By winning customers, they're getting a new set of data from those customers, and then by updating their models on this new set of data, they're actually then changing the model and biasing it toward the new customers they've won over," says Antonio Ginart.
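The paper's analysis is more general, but the feedback loop Antonio describes can be sketched with a toy simulation: two one-parameter "models" compete for users, each retrains only on the users it wins, and each drifts toward its own subpopulation. All values here are illustrative, not from the paper:

```python
import random

random.seed(0)

# Two hypothetical user subpopulations with different preferences.
population = ([random.gauss(-1.0, 0.5) for _ in range(500)] +
              [random.gauss(+1.0, 0.5) for _ in range(500)])

# Each "company" is a one-parameter model: its estimate of user preference.
models = [0.1, -0.1]

for _ in range(20):
    won = [[], []]
    for user in population:
        # A user joins whichever model serves their preference better.
        choice = min((0, 1), key=lambda i: abs(models[i] - user))
        won[choice].append(user)
    # Each company retrains only on the customers it won: the feedback loop.
    for i in (0, 1):
        if won[i]:
            models[i] = sum(won[i]) / len(won[i])

print(models)  # each model has specialized toward one subpopulation
```

After a few rounds the two models settle near the means of opposite subpopulations, mirroring the bias-toward-won-customers effect the team describes.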

In terms of next steps, the team is looking at the effect that buying datasets (rather than collecting data only from customers) might have on algorithmic competition. James is also interested in identifying some prescriptive solutions that his team can recommend to policymakers or individual companies. "What do we do to reduce these kinds of biases now that we have identified the problem?" he says.

"This is still very new and quite cutting-edge work," James says. "I hope this paper sparks researchers to study competition between AI algorithms, as well as the social impact of that competition."


Excerpted from "When Algorithms Compete, Who Wins?"

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.

image of prof. Chelsea Finn
November 2020

Congratulations to Professor Chelsea Finn, who has been awarded an inaugural Samsung AI Researcher of the Year award. The five recipients, AI researchers from around the world, were honored at Samsung AI Forum 2020.

At the event, Chelsea delivered a lecture titled "From Few-Shot Adaptation to Uncovering Symmetries". She introduced meta-learning technologies in which AI, despite changes in data, can adapt swiftly to untrained data, and shared success stories of applying these technologies in robotics and in the design of new drug-candidate materials.

Chelsea's research interests lie in the ability to enable robots and other agents to develop broadly intelligent behavior through learning and interaction. Her work lies at the intersection of machine learning and robotic control, including topics such as end-to-end learning of visual perception and robotic manipulation skills, deep reinforcement learning of general skills from autonomously collected experience, and meta-learning algorithms that can enable fast learning of new concepts and behaviors.

Please join us in congratulating Chelsea on this well-deserved distinction! Additional awards went to Prof. Kyunghyun Cho (New York University), Prof. Seth Flaxman (Imperial College London), Prof. Jiajun Wu (Stanford), and Prof. Cho-Jui Hsieh (UCLA).

Excerpted from Samsung Newsroom, "[Samsung AI Forum 2020] Day 1: How AI Can Make a Meaningful Impact on Real World Issues"



image of prof. Dorsa Sadigh
November 2020

Professor Dorsa Sadigh and her team have integrated algorithms in a novel way that makes controlling assistive robotic arms faster and easier. The team hopes their research will enable people with disabilities to conduct everyday tasks on their own – for example, cooking and eating.

Dorsa's team, which included engineering graduate student Hong Jun Jeon and computer science postdoctoral scholar Dylan P. Losey, developed a controller that blends two artificial intelligence algorithms. The first, which was developed by Dorsa's group, enables control in two dimensions on a joystick without the need to switch between modes. It uses contextual cues to determine whether a user is reaching for a doorknob or a drinking cup, for example. Then, as the robot arm nears its destination, the second algorithm kicks in to allow more precise movements, with control shared between the human and the robot.

In shared autonomy, the robot begins with a set of "beliefs" about what the controller is telling it to do and gains confidence about the goal as additional instructions are given. Since robots aren't actually sentient, these beliefs are really just probabilities. For example, faced with two cups of water, a robot might begin with a belief that there's an even chance it should pick up either one. But as the joystick directs it toward one cup and away from the other, the robot gains confidence about the goal and can begin to take over – sharing autonomy with the user to more precisely control the robot arm. The amount of control the robot takes on is probabilistic as well: If the robot has 80 percent confidence that it's going to cup A rather than cup B, it will take 80 percent of the control while the human still has 20 percent, explains Professor Dorsa Sadigh.
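As a rough sketch of this belief-and-blending idea (not the team's actual algorithm: the goals, likelihood model, and step sizes below are invented for illustration):

```python
import math

goals = [-1.0, 1.0]        # two candidate cup positions along one axis
belief = [0.5, 0.5]        # start with an even chance for either cup
robot_pos = 0.0

def update_belief(belief, user_input):
    """Bayesian update: joystick motion toward a goal makes it more likely."""
    likelihoods = [math.exp(user_input * (g - robot_pos)) for g in goals]
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# The user nudges the joystick toward the right-hand cup three times.
for user_input in (0.5, 0.5, 0.5):
    belief = update_belief(belief, user_input)
    confidence = max(belief)
    robot_goal = goals[belief.index(confidence)]
    # Shared control: the robot's share of control equals its confidence,
    # e.g. 80 percent confidence -> 80 percent robot, 20 percent human.
    robot_action = robot_goal - robot_pos
    blended = confidence * robot_action + (1 - confidence) * user_input
    robot_pos += 0.1 * blended

print(belief, robot_pos)   # belief concentrates on the right-hand cup
```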


Excerpted from HAI (Human-Centered Artificial Intelligence), "Assistive Feeding: AI Improves Control of Robot Arms"

Video, "Shared Autonomy with Learned Latent Actions"

image of PhD candidate Pin Pin Tea-makorn
October 2020

PhD candidate Pin Pin Tea-makorn and Prof. Michal Kosinski have been investigating whether the faces of people in long-term relationships start to look the same over time. Their recently published article, "Spouses' faces are similar but do not become more similar with time", provides the answer in its title.

"It is something people believe in and we were curious about it," said Pin Pin Tea-makorn, an EE PhD candidate. "Our initial thought was if people's faces do converge over time, we could look at what types of features they converge on."

Pin Pin collected and analyzed thousands of public photos of couples. From these she compiled a database of pictures from 517 couples, taken within two years of tying the knot and between 20 and 69 years later.
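The excerpt does not describe how similarity was measured; one common approach is to compare face-embedding vectors with cosine similarity. A toy sketch with made-up three-dimensional "embeddings" (real systems use vectors with hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for one couple, near the wedding and decades later.
spouse_a_early, spouse_b_early = [0.9, 0.2, 0.4], [0.8, 0.3, 0.5]
spouse_a_late, spouse_b_late = [0.9, 0.3, 0.3], [0.7, 0.4, 0.5]

early = cosine_similarity(spouse_a_early, spouse_b_early)
late = cosine_similarity(spouse_a_late, spouse_b_late)
print(early, late)  # both high: similar, but no sign of growing more similar
```

Aggregated over many couples, comparisons like this let a study test whether late-marriage similarity exceeds early-marriage similarity.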

The study has highlighted the importance of going back through past studies and checking their validity. "This is definitely something the field needs to update," said Kosinski. "One of the major problems in social sciences is the pressure to come up with novel, amazing, newsworthy theories. This is how you get published, hired, and tenured. As a result the field is filled with concepts and theories that are reclaimed, over-hyped, or not validated properly."

Kosinski praised Pin Pin for taking on the project, as he said many scientists were reluctant to "rock the boat" and reveal potential flaws in other researchers' work. "Cleaning up the field might be the most important challenge faced by social scientists today, yet she is surely not going to get as many citations or as much recognition for her work as she would get if she came up with something new and flashy," he said.

One of the researchers' next projects is to investigate claims that people's names can be predicted with any accuracy from their faces alone. "We're sceptical," Kosinski said.


Excerpted from The Guardian, Science, "Researchers crack question of whether couples start looking alike", October 2020



Pin Pin's research involves computational psychology, focusing on using facial recognition systems to study interpersonal relationships. Pin Pin is EE's graduate student advisor.

image of prof. Andrea Montanari
October 2020

Professor Andrea Montanari, along with researchers from several other institutions, has launched a new project: the Collaboration on the Theoretical Foundations of Deep Learning. The project is led by UC Berkeley researchers and has received five years of funding from the NSF and the Simons Foundation.

The project aims to gain a theoretical understanding of deep learning, which is making significant impacts across industry, commerce, science, and society.

Although deep learning is a widely used artificial intelligence approach for teaching computers to learn from data, its theoretical foundations are poorly understood, a challenge that the project will address. Understanding the mechanisms that underpin the practical success of deep learning will allow researchers to address its limitations, including its sensitivity to data manipulation.

The other institutions include UC Berkeley, the Massachusetts Institute of Technology, UC Irvine, UC San Diego, Toyota Technological Institute at Chicago, EPFL in Lausanne, Switzerland, and the Hebrew University in Jerusalem.

Professor Andrea Montanari's research spans several disciplines including statistics, computer science, information theory, and machine learning.


Excerpted from "UC Berkeley to lead $10M NSF/Simons Foundation program to investigate theoretical underpinnings of deep learning", August 2020




image of Cindy Nguyen (PhD candidate), Prof. Tsachy Weissman, and Suzanne Sims
September 2020

In July and August, Professor Tsachy Weissman and the Stanford Compression Forum hosted the 2020 STEM to SHTEM (Science, Humanities, Technology, Engineering and Mathematics) internship program for high schoolers.

The summer program welcomed 64 high school students. The students were matched with one of nineteen projects ranging from financial exchanges to narratives of science and social justice – a full list follows. Each project group was supervised by mentors from the Compression Forum.

The 8-week STEM to SHTEM Program culminates in final reports that often weave in an entirely new perspective, combining the students' interests and knowledge with traditional research methodology. Several mentors provide guidance throughout the experience and encourage the interns to explore their strengths and interests.

Special thanks to program coordinators Cindy Nguyen and Suzanne Sims.

Congratulations to all the 2020 STEM to SHTEM Program interns! We enjoyed working with you and look forward to hearing from you in the future.

Students' final reports describe new insights and broaden knowledge of their topics. A few takeaways from the 2020 projects include:

  • the use of animation to improve the quality and efficiency of video communication;
  • theatrical performance as technology and a pandemic create new boundaries;
  • how might today's "science" and world be different if history had been more inclusive of the sciences that exist but aren't well-known?

Complete list of projects from the STEM to SHTEM Program. Source: [...]journal-for-high-schoolers-2020

1. Applications of Astrophysics to Multimedia Art-Making In Parallel to Narratives of Science and Social Justice
2. Artificial Neural Networks with Edge-Based Architecture
3. COVerage: Region-Specific SARS-CoV-2 News Query Algorithm
4. Developing and Testing New Montage Methods in Electroencephalography
5. Fundamental Differences Between The Driving Patterns of Humans and Autonomous Vehicles
6. Identifying and Quantifying Differences Among SARS-CoV-2 Genomes Using K-mer Analysis
7. Improving the Infrastructure of a Financial Exchange System in the Cloud
8. Journal for High Schoolers in 2020
9. Keypoint-Centric Video Processing for Reducing Net Latency in Video Streaming
10. Olfaction Communication System
11. Optimizing the Measurement of SPO2 With a Miniaturized Forehead Sensor
12. Properties and effects of ion implantation into silicon and wide bandgap materials
13. ProtographLDPC: Implementation of Protograph LDPC error correction codes
14. RF/mm-Wave Semiconductor Technology for 5G Applications and Beyond
15. The Price of Latency in Financial Exchanges
16. Understanding COVID-19 Through Sentiment Analysis on Twitter and Economic Data
17. Virtual Reality for Emotional Response
18. Vision-Based Robotic Object Manipulation: Using a Human-Mimicking Hand Design with Pure Object Recognition Algorithms to Intelligently Grasp Complex Items
19. YOU ARE HERE (AND HERE AND THERE): A Virtual Extension of Theatre

Summer 2021 application notification

image of prof Gordon Wetzstein and EE PhD candidate David Lindell
September 2020

Professor Gordon Wetzstein and EE PhD candidate David Lindell have created a system that reconstructs shapes obscured by 1-inch-thick foam. Their tests are detailed in "Three-dimensional imaging through scattering media based on confocal diffuse tomography", published in Nature Communications.

Gordon Wetzstein reports, "A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible. This is really pushing the frontier of what may be possible with any kind of sensing system. It's like superhuman vision."

"We were interested in being able to image through scattering media without these assumptions and to collect all the photons that have been scattered to reconstruct the image," said David Lindell, EE PhD candidate and lead author of the paper. "This makes our system especially useful for large-scale applications, where there would be very few ballistic photons."

In order to make their algorithm amenable to the complexities of scattering, the researchers had to closely co-design their hardware and software, although the hardware components they used are only slightly more advanced than what is currently found in autonomous cars. Depending on the brightness of the hidden objects, scanning in their tests took anywhere from one minute to one hour, but the algorithm reconstructed the obscured scene in real-time and could be run on a laptop.

"You couldn't see through the foam with your own eyes, and even just looking at the photon measurements from the detector, you really don't see anything," said David. "But, with just a handful of photons, the reconstruction algorithm can expose these objects – and you can see not only what they look like, but where they are in 3D space."

Excerpted from Stanford News, "Stanford researchers devise way to see through clouds and fog", September 2020.


