EE Student Information

The Department of Electrical Engineering supports Black Lives Matter.

EE Student Information, Spring Quarter through Academic Year 2020-2021: FAQs and Updated EE Course List.

Updates will be posted on this page, as well as emailed to the EE student mail list.

Please see Stanford University Health Alerts for course and travel updates.

As always, use your best judgment and consider your own and others' well-being at all times.

Research

image of 3 EE faculty: Subhasish Mitra, Mary Wootters, and H.S. Philip Wong
January 2021

Professors Subhasish Mitra, H.S. Philip Wong, Mary Wootters, and their students recently published "Illusion of large on-chip memory by networked computing chips for neural network inference", in Nature.

Smartwatches and other battery-powered electronics would be even smarter if they could run AI algorithms. But efforts to build AI-capable chips for mobile devices have so far hit a wall – the so-called "memory wall" separating the processing and memory chips that must work together to meet AI's massive and continually growing computational demands.

"Transactions between processors and memory can consume 95 percent of the energy needed to do machine learning and AI, and that severely limits battery life," said Professor Subhasish Mitra.

The team has designed a system that can run AI tasks faster, and with less energy, by harnessing eight hybrid chips, each with its own data processor built right next to its own memory storage.

This paper builds on their prior development of a new memory technology, called RRAM, that stores data even when power is switched off – like flash memory, but faster and more energy-efficient. Their RRAM advance enabled the Stanford researchers to develop an earlier generation of hybrid chips that worked alone. Their latest design incorporates a critical new element: algorithms that meld the eight separate hybrid chips into a single energy-efficient AI-processing engine.
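
To make the idea concrete, here is a minimal toy sketch (an illustration of the general principle only, not the paper's actual mapping algorithm): if each layer's weights live in the local memory of the chip that computes on them, only the much smaller activation vectors ever cross chip boundaries.

```python
import numpy as np

# Toy illustration: partition an 8-layer network across 8 "chips", each
# holding one layer's weights in its own local memory. Weights never move;
# only activations travel chip-to-chip.
rng = np.random.default_rng(0)
N_CHIPS = 8
sizes = [256] * N_CHIPS + [10]          # layer widths

# Each chip stores the weights for exactly one layer.
chips = [rng.standard_normal((sizes[i], sizes[i + 1])) * 0.1
         for i in range(N_CHIPS)]

def networked_inference(x):
    """Run inference by passing activations from chip to chip."""
    values_moved = 0
    for weights in chips:
        x = np.maximum(x @ weights, 0.0)   # compute happens next to the memory
        values_moved += x.size             # only the activation vector moves
    return x, values_moved

out, moved = networked_inference(rng.standard_normal(sizes[0]))
weights_kept_local = sum(w.size for w in chips)
print(f"activations moved between chips: {moved}; "
      f"weight values kept local: {weights_kept_local}")
```

In this toy run, a few thousand activation values cross chip boundaries while nearly half a million weight values stay put – the kind of traffic reduction that attacks the memory wall described above.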

Additional authors are Robert M. Radway, Andrew Bartolo, Paul C. Jolly, Zainab F. Khan, Binh Q. Le, Pulkit Tandon, Tony F. Wu, Yunfeng Xin, Elisa Vianello, Pascal Vivet, Etienne Nowak, Mohamed M. Sabry Aly, and Edith Beigne.

[Excerpted from "Stanford researchers combine processors and memory on multiple hybrid chips to run AI on battery-powered smart devices"]

image of prof H. Tom Soh
January 2021

EE Professor Tom Soh, in collaboration with Professor Eric Appel and colleagues, has developed a technology that can provide real-time diagnostic information. Their device, which they've dubbed the "Real-time ELISA," is able to perform many blood tests very quickly and then stitch the individual results together to enable continuous, real-time monitoring of a patient's blood chemistry. Instead of a snapshot, the researchers end up with something more like a movie.

"A blood test is great, but it can't tell you, for example, whether insulin or glucose levels are increasing or decreasing in a patient," said Professor Tom Soh. "Knowing the direction of change is important."

In their recent study, "A fluorescence sandwich immunoassay for the real-time continuous detection of glucose and insulin in live animals", published in the journal Nature Biomedical Engineering, the researchers used the device to simultaneously detect insulin and glucose levels in living diabetic laboratory rats. But the researchers say their tool is capable of so much more because it can be easily modified to monitor virtually any protein or disease biomarker of interest.

Authors are PhD candidates Mahla Poudineh, Caitlin L. Maikawa, Eric Yue Ma, Jing Pan, Dan Mamerow, Yan Hang, Sam W. Baker, Ahmad Beirami, Alex Yoshikawa, researcher Michael Eisenstein, Professor Seung Kim, and Professor Jelena Vučković.

The system relies on an existing technique called the enzyme-linked immunosorbent assay – ELISA ("ee-LYZ-ah") for short. ELISA has been the "gold standard" of biomolecular detection since the early 1970s and can identify virtually any peptide, protein, antibody or hormone in the blood. ELISA is good at identifying allergies, for instance. It is also used to spot viruses like HIV, West Nile and the SARS-CoV-2 coronavirus that causes COVID-19.

The Real-time ELISA is essentially an entire lab within a chip with tiny pipes and valves no wider than a human hair. An intravenous needle directs blood from the patient into the device's tiny circuits where ELISA is performed over and over.

Excerpted from "Stanford researchers develop lab-on-a-chip that turns blood test snapshots into continuous movies", December 21, 2020.

image of prof Stephen P. Boyd
January 2021

The Boyd group's CVXGEN software has been used in all SpaceX Falcon 9 first-stage landings.

From spacex.com: Falcon 9 is a reusable, two-stage rocket designed and manufactured by SpaceX for the reliable and safe transport of people and payloads into Earth orbit and beyond. Falcon 9 is the world's first orbital class reusable rocket. Reusability allows SpaceX to refly the most expensive parts of the rocket, which in turn drives down the cost of space access.

On December 9, Starship serial number 8 (SN8) lifted off from a Cameron County launch pad and successfully ascended, transitioned propellant, and performed its landing flip maneuver with precise flap control to reach its landing point. Low pressure in the fuel header tank during the landing burn led to high touchdown velocity, resulting in a hard (and exciting!) landing.

Although Stephen doesn't plan to travel to Mars, he's thrilled that one day, some of his and his students' work will.
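
To give a flavor of what CVXGEN does, it generates fast custom solvers for small convex optimization problems like the toy soft-landing program below – an illustrative sketch in Python with cvxpy, where the dynamics and numbers are simplifications, not SpaceX's actual onboard formulation.

```python
import cvxpy as cp

# Toy 1-D soft-landing problem: choose a thrust profile that brings a
# descending vehicle to rest at the ground while respecting thrust limits.
# (Illustrative only; real landing solvers handle 3-D dynamics, fuel mass,
# and glide-slope constraints, and run as generated C code in milliseconds.)
T, dt, g = 40, 0.5, 9.8
z = cp.Variable(T + 1)   # altitude (m)
v = cp.Variable(T + 1)   # vertical velocity (m/s)
u = cp.Variable(T)       # commanded thrust acceleration (m/s^2)

constraints = [z[0] == 1000, v[0] == -80,    # start: 1 km up, falling fast
               z[T] == 0, v[T] == 0,         # end: on the ground, at rest
               z >= 0, u >= 0, u <= 3 * g]   # stay above ground; thrust limits
for t in range(T):       # simple Euler-discretized dynamics
    constraints += [z[t + 1] == z[t] + dt * v[t],
                    v[t + 1] == v[t] + dt * (u[t] - g)]

# A quadratic objective keeps the thrust profile smooth -- a small QP,
# exactly the problem class CVXGEN generates custom solvers for.
problem = cp.Problem(cp.Minimize(cp.sum_squares(u)), constraints)
problem.solve()
print(f"peak thrust acceleration: {max(u.value):.1f} m/s^2")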

image of profs Wetzstein, Fan, Miller
December 2020

Professors Gordon Wetzstein, Shanhui Fan, and David A. B. Miller collaborated with faculty at several other institutions to publish "Inference in artificial intelligence with deep optics and photonics".

Abstract: Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.

Additional authors are Aydogan Ozcan, Sylvain Gigan, Dirk Englund, Marin Soljačić, Cornelia Denz, and Demetri Psaltis.

image of prof Amin Arbabian
December 2020

Professor Amin Arbabian, Aidan Fitzpatrick (PhD candidate), and Ajay Singhvi (PhD candidate) have developed an airborne method for imaging underwater objects by combining light and sound to break through the seemingly impassable barrier at the interface of air and water.

The researchers envision their hybrid optical-acoustic system one day being used to conduct drone-based biological marine surveys from the air, carry out large-scale aerial searches of sunken ships and planes, and map the ocean depths with a similar speed and level of detail as Earth's landscapes. Their "Photoacoustic Airborne Sonar System" is detailed in a recent study published in the journal IEEE Access.

"Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth's landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water," reports Amin. "Our goal is to develop a more robust system which can image even through murky water."

Excerpted from "Stanford engineers combine light and sound to see underwater", Stanford News, November 30, 2020

image of Grayson Zulauf, PhD, and Thaibao Peter Phan, PhD
November 2020

Congratulations to Thaibao Phan (PhD candidate) and Grayson Zulauf (PhD '20)! Their paper was one of three selected to receive the Best Paper Award at the 2020 Workshop on Control and Modeling for Power Electronics (COMPEL).

Grayson Zulauf (PhD '20) was a member of Professor Juan Rivas-Davila's SUPER Lab, and Thaibao Phan (PhD candidate) is a member of Professor Jonathan Fan's Fan Lab.

Their collaborative paper, "1 kW, Multi-MHz Wireless Charging for Electric Transportation," will be published on IEEE Xplore in the coming weeks.

Please join us in congratulating them on this wonderful accomplishment!

image of prof Kwabena Boahen
November 2020

Professor Kwabena Boahen builds highly efficient "neuromorphic" supercomputers modeled on the human brain.

He hopes they will drive the artificial intelligence future. He uses an analogy when describing the goal of his work: "It's LA versus Manhattan."

Kwabena means structurally. Today's chips are two-dimensional – flat and spread out, like LA. Tomorrow's chips will be stacked, like the floors of the skyscrapers on a New York block. In this analogy, the humans are the electrons shuffling data back and forth: the shorter the distances they have to travel to work, and the more they can accomplish before traveling home, the greater the leaps in energy efficiency. The consequences could not be greater. Kwabena says that the lean chips he imagines could prove tens of thousands of times less expensive to operate than today's power hogs.

To learn how it works, listen in as Kwabena Boahen describes neuromorphic computing to fellow bioengineer Russ Altman in a recent episode of Stanford Engineering's The Future of Everything podcast.

Excerpted from Stanford Engineering's Research & Ideas

image of prof James Zou
November 2020

Professor James Zou says that as algorithms compete for clicks and the associated user data, they become more specialized for the subpopulations that gravitate to their sites. This can have serious implications for both companies and consumers.

This dynamic is described in the paper "Competing AI: How does competition feedback affect machine learning?", written by Antonio Ginart (EE PhD candidate), Eva Zhang, and Professor James Zou.

James' team recognized that there's a feedback dynamic at play if companies' machine learning algorithms are competing for users or customers while at the same time using customer data to train their models. "By winning customers, they're getting a new set of data from those customers, and then by updating their models on this new set of data, they're actually then changing the model and biasing it toward the new customers they've won over," says Antonio Ginart.
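
The feedback loop Ginart describes can be seen in a deliberately tiny simulation (a sketch of the idea, not the paper's model): two predictors compete for users drawn from two subpopulations, and each winner retrains on the users it captures.

```python
import numpy as np

# Two competing "models", each predicting a single value. Users come from
# two subpopulations (centered at +1 and -1). Each user joins whichever
# model currently predicts them better; the winner then updates on that
# user's data -- drifting toward the customers it keeps winning.
rng = np.random.default_rng(1)
model_a, model_b = 0.1, -0.1

for _ in range(2000):
    user = rng.normal(+1.0, 0.3) if rng.random() < 0.5 else rng.normal(-1.0, 0.3)
    if abs(user - model_a) <= abs(user - model_b):
        model_a += 0.05 * (user - model_a)   # winner trains on its new user
    else:
        model_b += 0.05 * (user - model_b)

print(f"model A ends near {model_a:+.2f}; model B ends near {model_b:+.2f}")
```

Each model ends up specialized to one subpopulation – biased toward the customers it won, exactly the dynamic the quote describes.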

In terms of next steps, the team is looking at the effect that buying datasets (rather than collecting data only from customers) might have on algorithmic competition. James is also interested in identifying some prescriptive solutions that his team can recommend to policymakers or individual companies. "What do we do to reduce these kinds of biases now that we have identified the problem?" he says.

"This is still very new and quite cutting-edge work," James says. "I hope this paper sparks researchers to study competition between AI algorithms, as well as the social impact of that competition."

Excerpted from "When Algorithms Compete, Who Wins?"

image of prof. Dorsa Sadigh
November 2020

Professor Dorsa Sadigh and her team have integrated algorithms in a novel way that makes controlling assistive robotic arms faster and easier. The team hopes their research will enable people with disabilities to conduct everyday tasks on their own – for example, cooking and eating.

Dorsa's team, which included engineering graduate student Hong Jun Jeon and computer science postdoctoral scholar Dylan P. Losey, developed a controller that blends two artificial intelligence algorithms. The first, which was developed by Dorsa's group, enables control in two dimensions on a joystick without the need to switch between modes. It uses contextual cues to determine whether a user is reaching for a doorknob or a drinking cup, for example. Then, as the robot arm nears its destination, the second algorithm kicks in to allow more precise movements, with control shared between the human and the robot.

In shared autonomy, the robot begins with a set of "beliefs" about what the controller is telling it to do and gains confidence about the goal as additional instructions are given. Since robots aren't actually sentient, these beliefs are really just probabilities. For example, faced with two cups of water, a robot might begin with a belief that there's an even chance it should pick up either one. But as the joystick directs it toward one cup and away from the other, the robot gains confidence about the goal and can begin to take over – sharing autonomy with the user to more precisely control the robot arm.

The amount of control the robot takes on is probabilistic as well: If the robot has 80 percent confidence that it's going to cup A rather than cup B, it will take 80 percent of the control while the human still has 20 percent, explains Professor Dorsa Sadigh.
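
A rough sketch of that blending logic (hypothetical update rule and numbers, not the group's actual controller): the robot sharpens its belief as the joystick keeps pointing toward one cup, then takes a share of control equal to its confidence.

```python
import numpy as np

# Sketch of shared autonomy: the robot holds a belief over two goals,
# sharpens it as joystick input favors one, and takes a share of control
# equal to its confidence (e.g. 80% confident -> 80% of the control).
cup_a, cup_b = np.array([1.0, 0.0]), np.array([-1.0, 0.5])
belief = np.array([0.5, 0.5])          # even prior over the two cups
position = np.array([0.0, -1.0])       # current end-effector position

def toward(goal, pos):
    d = goal - pos
    return d / np.linalg.norm(d)

for _ in range(12):
    human_input = toward(cup_a, position)        # user steers toward cup A
    # likelihood of each goal ~ alignment of the input with its direction
    align = np.array([human_input @ toward(g, position) for g in (cup_a, cup_b)])
    belief = belief * np.exp(4.0 * align)
    belief = belief / belief.sum()
    conf = belief.max()                          # robot's confidence
    robot_input = toward((cup_a, cup_b)[belief.argmax()], position)
    action = conf * robot_input + (1 - conf) * human_input
    position = position + 0.1 * action           # blended motion command

print(f"belief in cup A: {belief[0]:.2f}; final position: {np.round(position, 2)}")
```

After a few consistent joystick nudges the belief concentrates on cup A and the robot carries most of the motion, mirroring the 80/20 split in the quote above.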

Excerpted from HAI (Human-Centered Artificial Intelligence), "Assistive Feeding: AI Improves Control of Robot Arms"

Video, "Shared Autonomy with Learned Latent Actions"

image of PhD candidate Pin Pin Tea-makorn
October 2020

PhD candidate Pin Pin Tea-makorn and Professor Michal Kosinski set out to answer the question of whether the faces of people in long-term relationships start to look the same over time. Their recently published article, "Spouses' faces are similar but do not become more similar with time", provides the answer in the title.

"It is something people believe in and we were curious about it," said Pin Pin Tea-makorn, an EE PhD candidate. "Our initial thought was if people's faces do converge over time, we could look at what types of features they converge on."

Pin Pin collected and analyzed thousands of public photos of couples. From these she compiled a database of pictures from 517 couples, taken within two years of tying the knot and between 20 and 69 years later.
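
The core measurement is simple to state in code. A hedged sketch (the helper names below are placeholders, not the study's actual pipeline): extract a fixed-length embedding for each face with a facial-recognition model, then compare spouses' embeddings near the wedding and decades later.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two face embeddings, in [-1, 1]."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def face_embedding(photo_path):
    """Placeholder: a real pipeline would run a face-recognition network
    to produce a fixed-length descriptor for the face in the photo."""
    rng = np.random.default_rng(sum(map(ord, photo_path)))
    return rng.standard_normal(128)

# For one couple: similarity near the wedding vs. decades later.
couple = ("spouse1_1975.jpg", "spouse2_1975.jpg",
          "spouse1_2005.jpg", "spouse2_2005.jpg")
early = cosine_similarity(face_embedding(couple[0]), face_embedding(couple[1]))
late = cosine_similarity(face_embedding(couple[2]), face_embedding(couple[3]))
print(f"similarity at marriage: {early:+.3f}; decades later: {late:+.3f}")
```

Repeating this comparison across the 517 couples and testing whether the later similarity exceeds the earlier one is, in outline, how the question in the title can be answered.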

The study has highlighted the importance of going back through past studies and checking their validity. "This is definitely something the field needs to update," said Kosinski. "One of the major problems in social sciences is the pressure to come up with novel, amazing, newsworthy theories. This is how you get published, hired, and tenured. As a result, the field is filled with concepts and theories that are unreplicated, over-hyped, or not validated properly."

Kosinski praised Pin Pin for taking on the project, as he said many scientists were reluctant to "rock the boat" and reveal potential flaws in other researchers' work. "Cleaning up the field might be the most important challenge faced by social scientists today, yet she is surely not going to get as many citations or as much recognition for her work as she would get if she came up with something new and flashy," he said.

One of the researchers' next projects is to investigate claims that people's names can be predicted with any accuracy from their faces alone. "We're sceptical," Kosinski said.

Excerpted from The Guardian, Science, "Researchers crack question of whether couples start looking alike", October 2020

Pin Pin's research involves computational psychology, focusing on using facial recognition systems to study interpersonal relationships. Pin Pin is EE's graduate student advisor.
