Faculty

EE Prof. H.-S. Philip Wong
October 2019

Professor H.-S. Philip Wong has been awarded the IEEE Electron Devices Society J.J. Ebers Award. This is the society's highest honor, recognizing outstanding technical contributions to the field of electron devices that have made a lasting impact.

The award will be presented to Philip at the 2019 International Electron Devices Meeting in December. The Jewell James Ebers Award was established in 1971 with the intention to foster progress in electron devices and to commemorate the life activities of Jewell James Ebers, whose distinguished contributions, particularly in the transistor art, shaped the understanding and technology of electron devices.

Philip is the Willard R. and Inez Kerr Bell Professor in the School of Engineering. He is a professor of Electrical Engineering and an affiliate faculty member of Bio-X, the Precourt Institute for Energy, and the Wu Tsai Neurosciences Institute. Philip's present research covers a broad range of topics, including carbon electronics, 2D layered materials, wireless implantable biosensors, directed self-assembly, nanoelectromechanical relays, device modeling, brain-inspired computing, and non-volatile memory devices such as phase-change memory and metal-oxide resistance-change memory.

Please join us in congratulating Philip on this well-deserved honor!


Related News

Photo of Emeritus Professor Stephen E. Harris
September 2019

Emeritus Professor Stephen E. Harris, the Kenneth and Barbara Oshman Professor in the School of Engineering, has been awarded the 2020 Willis E. Lamb Award for Laser Science and Quantum Optics. He will receive the award at the 2020 Physics of Quantum Electronics (PQE) Golden Jubilee, the 50th year of the annual meeting.

Stephen joined our faculty after completing his MS and PhD in Electrical Engineering at Stanford. He is known for his contributions to electromagnetically induced transparency (EIT), a technique for eliminating the effect of a medium on a propagating beam of electromagnetic radiation. He is also known for his collaborations with others, which have produced results in many areas, including lasers, quantum electronics, atomic physics, and nonlinear optics.

Stephen E. Harris is an emeritus professor of Electrical Engineering and Applied Physics and is part of Stanford's Ginzton Lab and Q-FARM.


Please join us in recognizing Stephen for his tremendous contributions to a variety of scientific fields!

Photo of Professor Stephen E. Harris, date unknown. Source: SALLIE, Stanford's Image Exchange.


Photo of EE Professor Eric Pop
August 2019

EE Professor Eric Pop's research was recently published in Science Advances.

Research in the Pop Lab has shown that a few layers of 2D materials can provide the same insulation as a sheet of glass 100 times thicker. "Thinner heat shields will enable engineers to make electronic devices even more compact than those we have today. We're looking at the heat in electronic devices in an entirely new way," reports Pop.

Detecting thermal vibrations
Thinking about heat as a form of sound inspired the Pop Lab researchers to borrow some principles from the physical world. "We adapted that idea by creating an insulator that used several layers of atomically thin materials instead of a thick mass of glass," said lead author Sam Vaziri, an Electrical Engineering postdoc.

The team used up to four different materials: graphene, which is a single atom thick, and molybdenum diselenide, molybdenum disulfide and tungsten diselenide, each three atoms thick, to create a four-layered insulator just 10 atoms deep. Despite its thinness, the insulator is effective because the atomic heat vibrations are dampened and lose much of their energy as they pass through each layer.
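
To see why so thin a stack can match much thicker glass, note that if each boundary between layers saps energy from the heat-carrying vibrations, those per-interface resistances add up. The short calculation below is only a back-of-the-envelope sketch with assumed round numbers; the interface resistance, glass conductivity and stack thickness are illustrative values, not data from the paper.

```python
# Back-of-the-envelope sketch with illustrative numbers (not the paper's data):
# if each interface in an atomically thin stack adds a thermal boundary
# resistance, a few-nanometer stack can match a far thicker sheet of glass.
N_INTERFACES = 4        # assumed number of interfaces in the 4-layer stack
R_INTERFACE = 5e-8      # assumed boundary resistance per interface, m^2*K/W
K_GLASS = 1.4           # approximate thermal conductivity of SiO2 glass, W/(m*K)
T_STACK = 3e-9          # approximate stack thickness (~10 atoms), m

r_stack = N_INTERFACES * R_INTERFACE       # total areal thermal resistance
t_equivalent_glass = K_GLASS * r_stack     # glass thickness with the same resistance

print(f"equivalent glass thickness: {t_equivalent_glass * 1e9:.0f} nm "
      f"(~{t_equivalent_glass / T_STACK:.0f}x thicker than the stack)")
```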

"As engineers, we know quite a lot about how to control electricity, and we're getting better with light, but we're just starting to understand how to manipulate the high-frequency sound that manifests itself as heat at the atomic scale," Pop said.



This research was supported by the Stanford Nanofabrication Facility, the Stanford Nano Shared Facilities, the National Science Foundation, the Semiconductor Research Corporation, the Defense Advanced Research Projects Agency, the Air Force Office of Scientific Research, the Stanford SystemX Alliance, the Knut and Alice Wallenberg Foundation, the Stanford Graduate Fellowship program and the National Institute of Standards and Technology.

Related Links:

Photo of Professor Emeritus Martin Hellman. Photo credit: Michael Steven Walker
July 2019

Martin E. Hellman was the Heidelberg Lecturer at the 69th Lindau Nobel Laureate Meeting (#LINO19). The week-long event takes place each summer on Germany's Lindau Island. Nobel Laureates are invited to the meeting, along with select young scientists. The Heidelberg Lecture is given by one of the Heidelberg Laureates, the winners of the top prizes in mathematics and computer science. Professor Hellman became a Heidelberg Laureate when he received the 2015 ACM Turing Award, jointly with Whitfield Diffie, for critical contributions to modern cryptography.

Martin's lecture, "The Technological Imperative for Ethical Evolution," called for scientists and laureates to accelerate the trend toward more ethical behavior. Hellman drew parallels between global and personal relationships as a foundation for building trust and security, regardless of past adversarial history. He shared eight lessons from his own personal and professional evolution.

Martin encouraged #LINO19 attendees to revisit the Mainau Declaration of 1955 and the Mainau Declaration of 2015, thereby underscoring the efforts of prior attendees – and the responsibilities of today's attendees – to consider global and future consequences when making decisions and to appeal to decision-makers to do the same.

Hellman's Heidelberg Lecture is available online.

The 69th Lindau Nobel Laureate Meeting hosted 39 laureates and 600 young scientists from 89 countries, the highest number to date. This year's meeting was dedicated to physics. The key topics were dark matter and cosmology, laser physics and gravitational waves.


Martin E. Hellman is Professor Emeritus of Electrical Engineering at Stanford University and is affiliated with the university's Center for International Security and Cooperation (CISAC). His recent technical work has focused on rethinking national security, including bringing a risk-informed framework to a potential failure of nuclear deterrence and then using that approach to find surprising ways to reduce the risk. His earlier work included co-inventing public key cryptography, the technology that underlies the secure portion of the Internet. His many honors include election to the National Academy of Engineering and receiving (jointly with his colleague Whit Diffie) the million-dollar ACM Turing Award, the top prize in computer science. One of his recent projects is a book, jointly written with his wife of fifty years, "A New Map for Relationships: Creating True Love at Home & Peace on the Planet," that one reviewer said provides a "unified field theory" of peace by illuminating the connections between nuclear war, conventional war, interpersonal war, and war within our own psyches.


Martin Hellman speaking at the Lindau Nobel Laureate Meetings. Photo credit: Julia Nimke/Lindau Nobel Laureate Meetings

Photo of Professor Gordon Wetzstein
July 2019

Gordon Wetzstein was awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). This is the highest honor bestowed by the United States Government on science and engineering professionals in the early stages of their independent research careers.

Gordon is an assistant professor of Electrical Engineering and, by courtesy, of Computer Science. He is the leader of the Stanford Computational Imaging Lab, an interdisciplinary research group focused on advancing imaging, microscopy, and display systems.

Eleven other Stanford faculty members also received PECASE awards. A link to the article is below.


Please join us in congratulating Gordon for this recognition.



Related news:

Photo of Professor Subhasish Mitra
July 2019

In a recent Q&A with Stanford Engineering, EE professor Subhasish Mitra and Computer Science professor Clark Barrett describe their recent work to secure chips before they are manufactured.

What's new when it comes to finding bugs in chips?

Designers have always tried to find logic flaws, or bugs as they are called, before chips went into manufacturing. Otherwise, hackers might exploit these flaws to hijack computers or cause malfunctions. This has been called debugging and it has never been easy. Yet we are now starting to discover a new type of chip vulnerability that is different from so-called bugs. These new weaknesses do not arise from logic flaws. Instead, hackers can figure out how to misuse a feature that has been purposely designed into a chip. There is not a flaw in the logic. But hackers might be able to pervert the logic to steal sensitive data or take over the chip.

How do your algorithms deal with traditional bugs and these new unintended weaknesses?

Let's start with the traditional bugs. We developed a technique called Symbolic Quick Error Detection — or Symbolic QED. Essentially, we use new algorithms to examine chip designs for potential logic flaws or bugs. We recently tested our algorithms on 16 processors that were already being used to help control critical automotive systems like braking and steering. Before these chips went into cars, the designers had already spent five years debugging their own processors using state-of-the-art techniques and fixing all the bugs they found. After using Symbolic QED for one month, we found every bug they'd found in 60 months — and then we found some bugs that were still in the chips. This was a validation of our approach. We think that by using Symbolic QED before a chip goes into manufacturing we'll be able to find and fix more logic flaws in less time.

Does Symbolic QED find all vulnerabilities?

Not in its current incarnation. Through collaboration with other research groups, we have modified Symbolic QED to detect new types of attacks that can come from potential misuse of seemingly innocuous features.

This is just the beginning. The processors we tested were relatively simple. Yet, as we saw, they could be perverted. Over time we will develop more sophisticated algorithms to detect and fix vulnerabilities in the most sophisticated chips, like the ones responsible for controlling navigation systems on autonomous cars. Our message is simple: As we develop more chips for more critical tasks, we'll need automated systems to find and fix all potential vulnerabilities — traditional bugs and unintended consequences — before chips go into manufacturing. Otherwise we'll always be playing catch-up, trying to patch chips after hackers find the vulnerabilities.

Excerpted from "Q&A: What's new in the effort to prevent hackers from hijacking chips?"



Related 


Photo of Professor Krishna Shenoy
July 2019


Professor Krishna Shenoy's research team has found that using statistical theory to analyze neural activity in the aggregate is faster than, and just as accurate as, the traditional process of tracking individual neurons.

Krishna's team has circumvented today's painstaking process of tracking the activity of individual neurons in favor of decoding neural activity in the aggregate. Each time a neuron fires it sends an electrical signal — known as a "spike" — to the next neuron down the line. It's the sort of intercellular communication that turns a notion in the mind into muscle contraction elsewhere in the body. "Each neuron has its own electrical fingerprint and no two are identical," says Eric Trautmann, a postdoctoral researcher in Krishna's lab and first author of the paper. "We spend a lot of time isolating and studying the activity of individual neurons."

The team believes their work will ultimately lead to neural implants that use simpler electronics to track more neurons than ever before, and also do so more accurately. The key is to combine their sophisticated new sampling algorithms with small electrodes. So far, such small electrodes have only been employed to control simple devices like a computer mouse. But combining this hardware for recording brain signals with the sampling algorithms creates new possibilities. Researchers might be able to deploy a network of small electrodes through larger sections of the brain, and use the algorithms to sample a great many neurons. This could deliver enough accurate brain signal information to control a prosthetic hand capable of fast and precise motions like pitching a baseball or playing the violin.
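
The sketch below illustrates the general idea of working with aggregate activity; it is not the team's code or data. Instead of spike-sorting single neurons, one can bin the unsorted threshold crossings recorded on each electrode and extract low-dimensional population trajectories directly. The bin size, array dimensions and simulated counts are all assumptions for illustration.

```python
# Minimal sketch (illustrative, not the paper's pipeline): estimate low-dimensional
# population dynamics from unsorted "multiunit" threshold-crossing counts,
# skipping the spike-sorting step entirely.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: trials x time bins x electrodes, where each entry is the
# number of threshold crossings on one electrode in one 20 ms bin (unsorted).
n_trials, n_bins, n_electrodes = 200, 50, 96
multiunit_counts = rng.poisson(lam=2.0, size=(n_trials, n_bins, n_electrodes))

# Trial-average the counts and remove each electrode's mean across time.
psth = multiunit_counts.mean(axis=0)                     # (n_bins, n_electrodes)
psth_centered = psth - psth.mean(axis=0, keepdims=True)

# Project onto the top principal components: these low-dimensional "neural
# trajectories" capture the population-level structure the approach relies on.
k = 8
_, _, vt = np.linalg.svd(psth_centered, full_matrices=False)
trajectories = psth_centered @ vt[:k].T                  # (n_bins, k)
print(trajectories.shape)
```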

Better yet, Trautmann said, the new electrodes, coupled with the sampling algorithms, should eventually be able to record brain activity without the many wires needed today to carry signals from the brain to whatever computer controls the prosthesis. Wireless functionality would completely untether users from bulky computers needed to decode neuronal activity today.

Krishna reports, "This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected."

The paper, "Accurate Estimation of Neural Population Dynamics without Spike Sorting," was published in the June issue of Neuron.

Excerpted from Stanford Engineering news

Related Links

July 2019

Professor Gordon Wetzstein and his team recently published their findings in Science Advances.

The researchers have created a pair of smart glasses that can automatically focus on what you're looking at. Using eye-trackers and autofocus lenses, the prototype works much like the lens of the eye, with fluid-filled lenses that bulge and thin as the field of vision changes. It also includes eye-tracking sensors that triangulate where a person is looking and determine the precise distance to the object of interest. The team did not invent these lenses or eye-trackers, but they did develop the software system that harnesses this eye-tracking data to keep the fluid-filled lenses in constant and perfect focus.
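
As a rough illustration of the geometry involved (a simplified model, not the published system's software), the fixation distance can be triangulated from the two eyes' gaze angles and then converted into the focusing power an adjustable lens would need to supply:

```python
# Hypothetical vergence-based calculation: triangulate the distance to the point
# both eyes are looking at, then convert it to a required lens power in diopters.
# The interpupillary distance and gaze angles below are illustrative values.
import math

def fixation_distance_m(ipd_m, left_angle_rad, right_angle_rad):
    """Distance to the fixated point, from each eye's inward gaze angle
    (measured from straight ahead; positive = rotated toward the nose)."""
    vergence = left_angle_rad + right_angle_rad          # total convergence angle
    if vergence <= 0:
        return float("inf")                              # eyes parallel: far away
    return ipd_m / (2.0 * math.tan(vergence / 2.0))

def required_lens_power_diopters(distance_m):
    """Thin-lens approximation of the power needed to focus at this distance."""
    return 0.0 if math.isinf(distance_m) else 1.0 / distance_m

d = fixation_distance_m(ipd_m=0.063,
                        left_angle_rad=math.radians(3.0),
                        right_angle_rad=math.radians(3.0))
print(f"fixation at {d:.2f} m -> {required_lens_power_diopters(d):.2f} diopters")
```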

EE PhD candidate Nitish Padmanaban said other teams had previously tried to apply autofocus lenses to presbyopia. But without guidance from the eye-tracking hardware and system software, those earlier efforts were no better than wearing traditional progressive lenses.

Gordon's team tested the prototype on 56 people with presbyopia. Test subjects said the autofocus lenses performed better and faster at reading and other tasks. Wearers also tended to prefer the autofocal glasses to the experience of progressive lenses – bulk and weight aside.

Gordon's Computational Imaging Lab is at the forefront of vision systems for VR and AR (virtual and augmented reality). It was in the course of such work that the researchers became aware of the new autofocus lenses and eye-trackers and had the insight to combine these elements to create a potentially transformative product.

Excerpted from Stanford News.


Related Links


Photo of EE and CS Professor Dorsa Sadigh
July 2019

Professor Dorsa Sadigh and her lab have combined two different ways of setting goals for robots into a single process, which performed better than either of its parts alone in both simulations and real-world experiments. The researchers presented their findings at the 2019 Robotics: Science & Systems (RSS) Conference.

The team has named their approach "DemPref": it uses both demonstrations and preference queries to learn a reward function. Specifically, the method works by "(1) using the demonstrations to learn a coarse prior over the space of reward functions, to reduce the effective size of the space from which queries are generated; and (2) using the demonstrations to ground the (active) query generation process, to improve the quality of the generated queries. Our method alleviates the efficiency issues faced by standard preference-based learning methods and does not exclusively depend on (possibly low-quality) demonstrations," as described in the team's abstract.
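
A minimal sketch of that two-step recipe is shown below. It assumes a linear reward over trajectory features and a softmax choice model for the preference queries; the featurizer, noise model and sampling scheme are illustrative stand-ins rather than the authors' implementation.

```python
# Illustrative sketch of combining demonstrations (to form a coarse prior over
# reward weights) with pairwise preference queries (to refine that belief).
import numpy as np

rng = np.random.default_rng(0)
D = 4                                        # assumed reward-feature dimension

def features(traj):
    """Hypothetical featurizer: here a trajectory is just a length-D vector."""
    return np.asarray(traj, dtype=float)

# Step 1: demonstrations give a coarse prior. Weight hypotheses that score the
# demonstrated trajectories highly get more prior log-probability.
demos = [rng.normal(size=D) for _ in range(3)]
w_samples = rng.normal(size=(5000, D))
w_samples /= np.linalg.norm(w_samples, axis=1, keepdims=True)
prior_logp = sum(w_samples @ features(d) for d in demos)

# Step 2: preference queries. The user is shown two trajectories and picks one;
# a softmax (Bradley-Terry style) likelihood on the reward difference updates
# the belief over reward weights.
def preference_update(logp, traj_a, traj_b, user_prefers_a):
    diff = w_samples @ (features(traj_a) - features(traj_b))
    p_a = 1.0 / (1.0 + np.exp(-diff))        # P(user prefers A | weights)
    return logp + np.log(p_a if user_prefers_a else 1.0 - p_a)

posterior_logp = preference_update(prior_logp,
                                   rng.normal(size=D), rng.normal(size=D),
                                   user_prefers_a=True)
w_hat = w_samples[np.argmax(posterior_logp)]
print("estimated reward weights:", w_hat)
```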

The new combination system begins with a person demonstrating a behavior to the robot. That can give autonomous robots a lot of information, but the robot often struggles to determine what parts of the demonstration are important. People also don't always want a robot to behave just like the human that trained it.

"We can't always give demonstrations, and even when we can, we often can't rely on the information people give," said Erdem Biyik, EE PhD candidate, who led the work developing the multiple-question surveys. "For example, previous studies have shown people want autonomous cars to drive less aggressively than they do themselves."

That's where the surveys come in, giving the robot a way of asking, for example, whether the user prefers it move its arm low to the ground or up toward the ceiling. For this study, the group used the slower single question method, but they plan to integrate multiple-question surveys in later work.

In tests, the team found that combining demonstrations and surveys was faster than just specifying preferences and, when compared with demonstrations alone, about 80 percent of people preferred how the robot behaved when trained with the combined system.


"This is a step in better understanding what people want or expect from a robot," reports Dorsa. "Our work is making it easier and more efficient for humans to interact and teach robots, and I am excited about taking this work further, particularly in studying how robots and humans might learn from each other."

Excerpted from Stanford News article (link below).


Related Links

Dr. Irena Fischer-Hwang, EE PhD 2019
June 2019

Excerpted from "Stanford grad trades STEM for storytelling," June 2019.


EE graduate Irena Fischer-Hwang, PhD '19, said she realized early in her graduate studies that it was important to communicate science to wider audiences.

"There's an unexpected side to science that is really fun to communicate to people," she said. "I think if we can make science more approachable, it could really help people understand why scientists do what they do."

Little did she realize that her earlier interest in science communication would lead her to a new career path.

Irena's first foray into storytelling was through Goggles Optional, a humorous science podcast written, produced and hosted by Stanford graduate students. For the past two years, she has written and hosted dozens of episodes of the show, including one about lucid dreaming and another about how sound can hack smartphones.

"In academic science, not a lot of people will be able to understand what you are working on," she said. "But the whole goal of journalism is to take difficult concepts and explain them to the public in interesting ways."

Irena's doctoral adviser, Professor Tsachy Weissman, was so impressed by her journalistic leanings that he asked her to help him launch a podcast about his own field of expertise, information theory. Irena produced the pilot episode of the series and then trained 14 students in the new freshman seminar EE25N: The Science of Information to write scripts and edit audio so that they could continue producing the series.

Turning data analysis into storytelling

In Communication Professor James Hamilton's class, Irena discovered how the work of journalists and computer scientists increasingly overlaps, as many reporters now turn to big data analysis to help with their reporting.

Irena found that the skills she honed during her graduate studies – sorting and evaluating data, managing large amounts of information and running statistical analysis, for example – are as relevant in the newsroom as they are in the lab.

"Now, finally, I feel like I've found a great way to combine my love of human stories with my rigorous training in STEM through journalism. I've had this creative streak for as long as I can remember, but until recently I didn't know what to do with it."


Please join us in congratulating Irena, and we look forward to seeing her on campus in the fall quarter!


Related News and Links:

