EE Student Information

The Department of Electrical Engineering supports Black Lives Matter.


EE Student Information, Spring Quarter through Academic Year 2020-2021: FAQs and Updated EE Course List.

Updates will be posted on this page, as well as emailed to the EE student mail list.

Please see Stanford University Health Alerts for course and travel updates.

As always, use your best judgement and consider your own and others' well-being at all times.

Faculty

image of prof emeritus Martin E. Hellman
January 2021

Congratulations to Professor Emeritus Martin Hellman. He has been selected as a 2020 Association for Computing Machinery (ACM) Fellow.

The ACM Fellows program recognizes the top 1% of ACM Members for their outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community. Fellows are nominated by their peers, with nominations reviewed by a distinguished selection committee.

"This year our task in selecting the 2020 Fellows was a little more challenging, as we had a record number of nominations from around the world," explained ACM President Gabriele Kotsis. "The 2020 ACM Fellows have demonstrated excellence across many disciplines of computing. These men and women have made pivotal contributions to technologies that are transforming whole industries, as well as our personal lives. We fully expect that these new ACM Fellows will continue in the vanguard in their respective fields."

 

Excerpted from ACM.org's "2020 ACM Fellows Recognized for Work that Underpins Today's Computing Innovations".

 

Please join us in congratulating Marty for this well-deserved recognition.

 


 

image of prof H. Tom Soh
January 2021

EE Professor Tom Soh, in collaboration with Professor Eric Appel and colleagues, has developed a technology that can provide real-time diagnostic information. Their device, which they've dubbed the "Real-time ELISA," is able to perform many blood tests very quickly and then stitch the individual results together to enable continuous, real-time monitoring of a patient's blood chemistry. Instead of a snapshot, the researchers end up with something more like a movie.

"A blood test is great, but it can't tell you, for example, whether insulin or glucose levels are increasing or decreasing in a patient," said Professor Tom Soh. "Knowing the direction of change is important."

In their recent study, "A fluorescence sandwich immunoassay for the real-time continuous detection of glucose and insulin in live animals", published in the journal Nature Biomedical Engineering, the researchers used the device to simultaneously detect insulin and glucose levels in living diabetic laboratory rats. But the researchers say their tool is capable of so much more because it can be easily modified to monitor virtually any protein or disease biomarker of interest.

Authors are PhD candidates Mahla Poudineh, Caitlin L. Maikawa, Eric Yue Ma, Jing Pan, Dan Mamerow, Yan Hang, Sam W. Baker, Ahmad Beirami, Alex Yoshikawa, researcher Michael Eisenstein, Professor Seung Kim, and Professor Jelena Vučković.

The system relies on an existing technology called the enzyme-linked immunosorbent assay, or ELISA ("ee-LYZ-ah") for short. ELISA has been the "gold standard" of biomolecular detection since the early 1970s and can identify virtually any peptide, protein, antibody or hormone in the blood. ELISA is good at identifying allergies, for instance. It is also used to spot viruses like HIV, West Nile and the SARS-CoV-2 coronavirus that causes COVID-19.

The Real-time ELISA is essentially an entire lab within a chip, with tiny pipes and valves no wider than a human hair. An intravenous needle directs blood from the patient into the device's tiny circuits, where ELISA is performed over and over.

 Excerpted from "Stanford researchers develop lab-on-a-chip that turns blood test snapshots into continuous movies", December 21, 2020.


image of prof Stephen P. Boyd
January 2021

The Boyd group's CVXGEN software has been used in all SpaceX Falcon 9 first-stage landings.

From spacex.com: Falcon 9 is a reusable, two-stage rocket designed and manufactured by SpaceX for the reliable and safe transport of people and payloads into Earth orbit and beyond. Falcon 9 is the world's first orbital class reusable rocket. Reusability allows SpaceX to refly the most expensive parts of the rocket, which in turn drives down the cost of space access.

On December 9, Starship serial number 8 (SN8) lifted off from a Cameron County launch pad and successfully ascended, transitioned propellant, and performed its landing flip maneuver with precise flap control to reach its landing point. Low pressure in the fuel header tank during the landing burn led to high touchdown velocity resulting in a hard (and exciting!) landing.

 

Although Stephen doesn't plan to travel to Mars, he's thrilled that one day, some of his and his students' work will.

image of profs Wetzstein, Fan, Miller
December 2020

Professors Gordon Wetzstein, Shanhui Fan, and David A. B. Miller collaborated with faculty at several other institutions to publish "Inference in artificial intelligence with deep optics and photonics".

Abstract: Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.

Additional authors are Aydogan Ozcan, Sylvain Gigan, Dirk Englund, Marin Soljačić, Cornelia Denz, and Demetri Psaltis.

 


image of prof Amin Arbabian
December 2020

Professor Amin Arbabian, Aidan Fitzpatrick (PhD candidate), and Ajay Singhvi (PhD candidate) have developed an airborne method for imaging underwater objects by combining light and sound to break through the seemingly impassable barrier at the interface of air and water.

The researchers envision their hybrid optical-acoustic system one day being used to conduct drone-based biological marine surveys from the air, carry out large-scale aerial searches of sunken ships and planes, and map the ocean depths with a similar speed and level of detail as Earth's landscapes. Their "Photoacoustic Airborne Sonar System" is detailed in a recent study published in the journal IEEE Access.

"Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth's landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water," reports Amin. "Our goal is to develop a more robust system which can image even through murky water."

 

Excerpted from "Stanford engineers combine light and sound to see underwater", Stanford News, November 30, 2020

 


image of prof Nick McKeown
December 2020

Professor Nick McKeown will receive the 2021 IEEE Alexander Graham Bell Medal for exceptional contributions to communications and networking sciences and engineering. The IEEE Alexander Graham Bell Medal was established in 1976, in commemoration of the centennial of the telephone's invention, to provide recognition for outstanding contributions to telecommunications.
 
The award will be presented to Nick at a future IEEE Honors Ceremony.
 
Nick researches techniques to improve the Internet. Most of this work has focused on the architecture, design, analysis, and implementation of high-performance Internet switches and routers. More recently, his interests have broadened to include network architecture, backbone network design, congestion control, and how the Internet might be redesigned if we were to start with a clean slate.
 
Please join us in congratulating Nick on this well-deserved honor!
  


image of prof Kwabena Boahen
November 2020

Professor Kwabena Boahen builds highly efficient "neuromorphic" supercomputers modeled on the human brain.

He hopes they will drive the artificial intelligence future. He uses an analogy when describing the goal of his work: "It's LA versus Manhattan."

Kwabena means structurally. Today's chips are two dimensional — flat and spread out, like LA. Tomorrow's chips will be stacked, like the floors of the skyscrapers on a New York block. In this analogy, the electrons shuffling data back and forth play the part of the human commuters. The shorter the distances they have to travel to work, and the more they can accomplish before traveling home, the greater the leaps in energy efficiency. The consequences could not be greater. Kwabena says that the lean chips he imagines could prove tens of thousands of times less expensive to operate than today's power hogs.

To learn how it works, listen in as Kwabena Boahen describes neuromorphic computing to fellow bioengineer Russ Altman in a recent episode of Stanford Engineering's The Future of Everything podcast.

 

Excerpted from Stanford Engineering's Research & Ideas

image of prof James Zou
November 2020

Professor James Zou says that as algorithms compete for clicks and the associated user data, they become more specialized for subpopulations that gravitate to their sites. This can have serious implications for both companies and consumers.

This is described in the paper "Competing AI: How does competition feedback affect machine learning?", written by Antonio Ginart (EE PhD candidate), Eva Zhang, and Professor James Zou.

James' team recognized that there's a feedback dynamic at play if companies' machine learning algorithms are competing for users or customers and at the same time using customer data to train their models. "By winning customers, they're getting a new set of data from those customers, and then by updating their models on this new set of data, they're actually then changing the model and biasing it toward the new customers they've won over," says Antonio Ginart.

In terms of next steps, the team is looking at the effect that buying datasets (rather than collecting data only from customers) might have on algorithmic competition. James is also interested in identifying some prescriptive solutions that his team can recommend to policymakers or individual companies. "What do we do to reduce these kinds of biases now that we have identified the problem?" he says.

"This is still very new and quite cutting-edge work," James says. "I hope this paper sparks researchers to study competition between AI algorithms, as well as the social impact of that competition."


 

Excerpted from "When Algorithms Compete, Who Wins?"

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.
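
To make the dynamic Ginart describes concrete, here is a minimal toy simulation of two competing predictors. It is an illustrative sketch, not the model from the paper: the customer segments, the least-squares "models", and the winner-takes-the-data rule are all simplifying assumptions. Each round, customers choose whichever firm predicts their outcome better, and each firm retrains only on the customers it has won.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical customer segments with different true relationships y = w * x.
SEGMENT_W = {"A": 2.0, "B": -1.0}

def sample_customers(n):
    """Each customer has a feature x, a hidden segment, and an outcome y."""
    segments = rng.choice(["A", "B"], size=n)
    x = rng.normal(size=n)
    y = np.array([SEGMENT_W[s] for s in segments]) * x + 0.1 * rng.normal(size=n)
    return x, y

def fit(x, y):
    """Least-squares slope: the 'model' each firm serves to its customers."""
    return float(x @ y / (x @ x + 1e-9))

# Both firms start from the same pooled data, with a tiny initial difference.
x0, y0 = sample_customers(200)
data = {"firm1": (list(x0), list(y0)), "firm2": (list(x0), list(y0))}
models = {"firm1": fit(x0, y0), "firm2": fit(x0, y0) + 0.01}

for round_ in range(10):
    x, y = sample_customers(200)
    err1 = np.abs(y - models["firm1"] * x)   # each firm's prediction error per customer
    err2 = np.abs(y - models["firm2"] * x)
    won_by_1 = err1 <= err2                  # customers pick the more accurate firm
    for firm, mask in (("firm1", won_by_1), ("firm2", ~won_by_1)):
        xs, ys = data[firm]
        xs.extend(x[mask])                   # the winner absorbs its new customers' data...
        ys.extend(y[mask])
        models[firm] = fit(np.array(xs), np.array(ys))  # ...and retrains, drifting toward them
    print(round_, {f: round(m, 2) for f, m in models.items()})
```

Run over a few rounds, each firm's slope migrates toward one segment's true relationship, reproducing the specialization described above; the paper's analysis is, of course, far richer than this toy.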

image of prof. Chelsea Finn
November 2020

Congratulations to Professor Chelsea Finn. She has been awarded an inaugural Samsung AI Researcher of the Year award, presented at Samsung AI Forum 2020 to five AI researchers from around the world.

At the event, Chelsea gave a lecture titled "From Few-Shot Adaptation to Uncovering Symmetries." She introduced meta-learning techniques that allow AI to adapt swiftly to data it has not been trained on, even as the data changes, and shared success stories of applying these techniques to robotics and to the design of new drug candidate materials.

Chelsea's research interests lie in enabling robots and other agents to develop broadly intelligent behavior through learning and interaction. Her work sits at the intersection of machine learning and robotic control, including topics such as end-to-end learning of visual perception and robotic manipulation skills, deep reinforcement learning of general skills from autonomously collected experience, and meta-learning algorithms that enable fast learning of new concepts and behaviors.

Please join us in congratulating Chelsea on this well-deserved distinction! Additional awards went to Prof. Kyunghyun Cho (New York University), Prof. Seth Flaxman (Imperial College London), Prof. Jiajun Wu (Stanford), and Prof. Cho-Jui Hsieh (UCLA).

Excerpted from Samsung Newsroom, "[Samsung AI Forum 2020] Day 1: How AI Can Make a Meaningful Impact on Real World Issues"

 


image of prof. Dorsa Sadigh
November 2020

Professor Dorsa Sadigh and her team have integrated algorithms in a novel way that makes controlling assistive robotic arms faster and easier. The team hopes their research will enable people with disabilities to conduct everyday tasks, such as cooking and eating, on their own.

Dorsa's team, which included engineering graduate student Hong Jun Jeon and computer science postdoctoral scholar Dylan P. Losey, developed a controller that blends two artificial intelligence algorithms. The first, which was developed by Dorsa's group, enables control in two dimensions on a joystick without the need to switch between modes. It uses contextual cues to determine whether a user is reaching for a doorknob or a drinking cup, for example. Then, as the robot arm nears its destination, the second algorithm kicks in to allow more precise movements, with control shared between the human and the robot.

In shared autonomy, the robot begins with a set of "beliefs" about what the controller is telling it to do and gains confidence about the goal as additional instructions are given. Since robots aren't actually sentient, these beliefs are really just probabilities. For example, faced with two cups of water, a robot might begin with a belief that there's an even chance it should pick up either one. But as the joystick directs it toward one cup and away from the other, the robot gains confidence about the goal and can begin to take over – sharing autonomy with the user to more precisely control the robot arm. The amount of control the robot takes on is probabilistic as well: If the robot has 80 percent confidence that it's going to cup A rather than cup B, it will take 80 percent of the control while the human still has 20 percent, explains Professor Dorsa Sadigh.

 

Excerpted from HAI (Human-Centered Artificial Intelligence), "Assistive Feeding: AI Improves Control of Robot Arms"

Video, "Shared Autonomy with Learned Latent Actions"
