Faculty

image of prof Kwabena Boahen
November 2020

Professor Kwabena Boahen builds highly efficient "neuromorphic" supercomputers modeled on the human brain.

He hopes they will power the future of artificial intelligence. When describing the goal of his work, he reaches for an analogy: "It's LA versus Manhattan."

Kwabena means structurally. Today's chips are two-dimensional, flat and spread out like LA. Tomorrow's chips will be stacked, like the floors of the skyscrapers on a New York block. In this analogy, the electrons shuffling data back and forth are the commuters: the shorter their trip to work, and the more they can accomplish before heading home, the greater the gains in energy efficiency. The consequences could not be greater. Kwabena says that the lean chips he imagines could prove tens of thousands of times less expensive to operate than today's power hogs.

To learn how it works, listen in as Kwabena Boahen describes neuromorphic computing to fellow bioengineer Russ Altman in a recent episode of Stanford Engineering's The Future of Everything podcast.

 

Excerpted from Stanford Engineering's Research & Ideas

image of prof James Zou
November 2020

Professor James Zou says that as algorithms compete for clicks and the associated user data, they become increasingly specialized for the subpopulations that gravitate to their sites. This can have serious implications for both companies and consumers.

This dynamic is described in the paper "Competing AI: How does competition feedback affect machine learning?", written by Antonio Ginart (EE PhD candidate), Eva Zhang, and Professor James Zou.

James' team recognized that there's a feedback dynamic at play when companies' machine learning algorithms compete for users or customers while also using customer data to train their models. "By winning customers, they're getting a new set of data from those customers, and then by updating their models on this new set of data, they're actually then changing the model and biasing it toward the new customers they've won over," says Antonio Ginart.
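To see the feedback loop in action, consider a toy simulation (ours, for illustration; not the model from the paper): two competitors begin with identical models, each arriving user picks whichever competitor serves them better, and the winner immediately retrains on the data of the user it just won.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two subpopulations of users with different "true" preference vectors.
centers = {"A": np.array([1.0, 0.0]), "B": np.array([0.0, 1.0])}

# Both competitors start from the same neutral model.
models = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]

def quality(user, model):
    # Quality of service: similarity between the user and the model.
    return float(user @ model)

for _ in range(5000):
    # A new user arrives from subpopulation A or B.
    group = "A" if rng.random() < 0.5 else "B"
    user = centers[group] + 0.1 * rng.standard_normal(2)

    # The user chooses whichever competitor serves them better.
    winner = int(quality(user, models[1]) > quality(user, models[0]))

    # Competition feedback: the winner retrains on the user it just won,
    # nudging its model toward that user's subpopulation.
    models[winner] += 0.01 * (user - models[winner])

print("model 0:", np.round(models[0], 2))  # drifts toward one subpopulation
print("model 1:", np.round(models[1], 2))  # drifts toward the other
```

Run long enough, each model specializes to whichever subpopulation it happens to win early on, which is exactly the kind of bias the paper studies.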

In terms of next steps, the team is looking at the effect that buying datasets (rather than collecting data only from customers) might have on algorithmic competition. James is also interested in identifying some prescriptive solutions that his team can recommend to policymakers or individual companies. "What do we do to reduce these kinds of biases now that we have identified the problem?" he says.

"This is still very new and quite cutting-edge work," James says. "I hope this paper sparks researchers to study competition between AI algorithms, as well as the social impact of that competition."


 

Excerpted from "When Algorithms Compete, Who Wins?"


image of prof. Chelsea Finn
November 2020

Congratulations to Professor Chelsea Finn, who has been named an inaugural Samsung AI Researcher of the Year. The award, presented at the Samsung AI Forum 2020, went to five AI researchers from around the world.

At the event, Chelsea delivered a lecture titled "From Few-Shot Adaptation to Uncovering Symmetries". In it, she introduced meta-learning techniques that allow AI systems to adapt swiftly to data they were never trained on, and shared success stories of applying these techniques to robotics and the design of new drug candidate materials.

Chelsea's research focuses on enabling robots and other agents to develop broadly intelligent behavior through learning and interaction. Her work lies at the intersection of machine learning and robotic control, including end-to-end learning of visual perception and robotic manipulation skills, deep reinforcement learning of general skills from autonomously collected experience, and meta-learning algorithms that enable fast learning of new concepts and behaviors.
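For the curious, the core of one well-known meta-learning method, model-agnostic meta-learning (MAML), fits in a few lines. The sketch below is a schematic first-order variant on toy one-dimensional regression tasks, with made-up hyperparameters; Chelsea's actual systems use deep networks and far richer tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is 1-D linear regression y = a * x with a random slope a."""
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=40)
    return x[:20], a * x[:20], x[20:], a * x[20:]   # support / query split

def grad(w, x, y):
    # Gradient of mean squared error for the model y_hat = w * x.
    return 2.0 * np.mean(x * (w * x - y))

w = 0.0                      # meta-initialization (a single weight here)
alpha, beta = 0.5, 0.05      # inner- and outer-loop step sizes (illustrative)

for _ in range(2000):
    meta_grad = 0.0
    for _ in range(5):                                # a batch of tasks
        xs, ys, xq, yq = sample_task()
        w_adapted = w - alpha * grad(w, xs, ys)       # inner loop: adapt
        meta_grad += grad(w_adapted, xq, yq)          # first-order outer gradient
    w -= beta * meta_grad / 5                         # outer loop: meta-update

# After meta-training, one gradient step from w should fit a new task well.
xs, ys, xq, yq = sample_task()
w_new = w - alpha * grad(w, xs, ys)
print("query loss after one adaptation step:", np.mean((w_new * xq - yq) ** 2))
```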

Please join us in congratulating Chelsea on this well-deserved distinction! Additional awards went to Prof. Kyunghyun Cho (New York University), Prof. Seth Flaxman (Imperial College London), Prof. Jiajun Wu (Stanford), and Prof. Cho-Jui Hsieh (UCLA).

Excerpted from Samsung Newsroom, "[Samsung AI Forum 2020] Day 1: How AI Can Make a Meaningful Impact on Real World Issues"

 


image of prof. Dorsa Sadigh
November 2020

Professor Dorsa Sadigh and her team have integrated algorithms in a novel way that makes controlling assistive robotic arms faster and easier. The team hopes their research will enable people with disabilities to perform everyday tasks, such as cooking and eating, on their own.

Dorsa's team, which included engineering graduate student Hong Jun Jeon and computer science postdoctoral scholar Dylan P. Losey, developed a controller that blends two artificial intelligence algorithms. The first, which was developed by Dorsa's group, enables control in two dimensions on a joystick without the need to switch between modes. It uses contextual cues to determine whether a user is reaching for a doorknob or a drinking cup, for example. Then, as the robot arm nears its destination, the second algorithm kicks in to allow more precise movements, with control shared between the human and the robot.

In shared autonomy, the robot begins with a set of "beliefs" about what the controller is telling it to do and gains confidence about the goal as additional instructions are given. Since robots aren't actually sentient, these beliefs are really just probabilities. For example, faced with two cups of water, a robot might begin with a belief that there's an even chance it should pick up either one. But as the joystick directs it toward one cup and away from the other, the robot gains confidence about the goal and can begin to take over – sharing autonomy with the user to more precisely control the robot arm. The amount of control the robot takes on is probabilistic as well: If the robot has 80 percent confidence that it's going to cup A rather than cup B, it will take 80 percent of the control while the human still has 20 percent, explains Professor Dorsa Sadigh.
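A minimal sketch of that confidence-weighted arbitration might look like the following; the belief update and gains here are illustrative assumptions, not the team's actual controller.

```python
import numpy as np

# Two candidate goals (cups) in a 2-D workspace.
goals = {"cup_A": np.array([0.6, 0.2]), "cup_B": np.array([0.6, -0.2])}
belief = {g: 0.5 for g in goals}   # even prior: either cup is equally likely

def unit(v):
    return v / (np.linalg.norm(v) + 1e-8)

def update_belief(belief, pos, user_cmd, temp=5.0):
    """Raise the probability of goals the joystick is steering toward."""
    scores = {g: belief[g] * np.exp(temp * float(user_cmd @ unit(t - pos)))
              for g, t in goals.items()}
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

def blended_command(belief, pos, user_cmd):
    """Arbitration: the robot takes control in proportion to its confidence."""
    best = max(belief, key=belief.get)
    conf = belief[best]
    robot_cmd = unit(goals[best] - pos)   # autonomous motion toward the goal
    # E.g., at 80% confidence the robot contributes 80% of the command.
    return conf * robot_cmd + (1.0 - conf) * user_cmd

pos = np.array([0.0, 0.0])
user_cmd = unit(np.array([1.0, 0.3]))    # joystick pointing toward cup_A
for _ in range(20):
    belief = update_belief(belief, pos, user_cmd)
    pos = pos + 0.05 * blended_command(belief, pos, user_cmd)

print({g: round(p, 2) for g, p in belief.items()})  # confidence in cup_A grows
```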

 

Excerpted from HAI (Human-Centered Artificial Intelligence), "Assistive Feeding: AI Improves Control of Robot Arms"

Video, "Shared Autonomy with Learned Latent Actions"

image of prof. Andrea Montanari
October 2020

Professor Andrea Montanari, along with researchers from other institutions, has launched the Collaboration on the Theoretical Foundations of Deep Learning. The project is led by UC Berkeley researchers and has received five years of funding from the NSF and the Simons Foundation.

The project aims to gain a theoretical understanding of deep learning, which is making significant impacts across industry, commerce, science, and society.

Although deep learning is a widely used artificial intelligence approach for teaching computers to learn from data, its theoretical foundations are poorly understood, a challenge that the project will address. Understanding the mechanisms that underpin the practical success of deep learning will allow researchers to address its limitations, including its sensitivity to data manipulation.

The other institutions include UC Berkeley, the Massachusetts Institute of Technology, UC Irvine, UC San Diego, Toyota Technological Institute at Chicago, EPFL in Lausanne, Switzerland, and the Hebrew University in Jerusalem.

Professor Andrea Montanari's research spans several disciplines including statistics, computer science, information theory, and machine learning.

 

Excerpted from "UC Berkeley to lead $10M NSF/Simons Foundation program to investigate theoretical underpinnings of deep learning", August 2020

 


 

image of prof Gordon Wetzstein and EE PhD candidate David Lindell
September 2020

Professor Gordon Wetzstein and EE PhD candidate David Lindell have created a system that reconstructs the shapes of objects obscured by 1-inch-thick foam. Their tests are detailed in "Three-dimensional imaging through scattering media based on confocal diffuse tomography", published in Nature Communications.

Gordon Wetzstein reports, "A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible. This is really pushing the frontier of what may be possible with any kind of sensing system. It's like superhuman vision."

"We were interested in being able to image through scattering media without these assumptions and to collect all the photons that have been scattered to reconstruct the image," said David Lindell, EE PhD candidate and lead author of the paper. "This makes our system especially useful for large-scale applications, where there would be very few ballistic photons."

In order to make their algorithm amenable to the complexities of scattering, the researchers had to closely co-design their hardware and software, although the hardware components they used are only slightly more advanced than what is currently found in autonomous cars. Depending on the brightness of the hidden objects, scanning in their tests took anywhere from one minute to one hour, but the algorithm reconstructed the obscured scene in real time and could be run on a laptop.

"You couldn't see through the foam with your own eyes, and even just looking at the photon measurements from the detector, you really don't see anything," said David. "But, with just a handful of photons, the reconstruction algorithm can expose these objects – and you can see not only what they look like, but where they are in 3D space."

Excerpted from Stanford News, "Stanford researchers devise way to see through clouds and fog", September 2020.



image of Prof Dan Boneh
September 2020

Professor Dan Boneh's Hidden Number Problem helped academic researchers identify and resolve a vulnerability in TLS. Dan leads the Applied Cryptography Group.

The attack, known as Raccoon, affects TLS 1.2 and earlier versions, which specify that any leading zero bytes in the premaster secret are stripped out. The premaster secret is the shared key used by the client and server to compute the subsequent TLS keys for each session.

"Since the resulting premaster secret is used as an input into the key derivation function, which is based on hash functions with different timing profiles, precise timing measurements may enable an attacker to construct an oracle from a TLS server. This oracle tells the attacker whether a computed premaster secret starts with zero or not," the description of the attack says.

"Based on the server timing behavior, the attacker can find values leading to premaster secrets starting with zero. In the end, this helps the attacker to construct a set of equations and use a solver for the Hidden Number Problem (HNP) to compute the original premaster secret established between the client and the server."

Excerpted from "Raccoon Attack can Compromise Some TLS Connections", by Dennis Fisher


In addition to leading the Applied Cryptography Group, Dan co-directs the Computer Security Lab. His research focuses on applications of cryptography to computer security. His work includes cryptosystems with novel properties, web security, security for mobile devices, and cryptanalysis.


 

 

image of prof. Shanhui Fan
August 2020

Professor Shanhui Fan's rooftop system, which generates electricity through radiative cooling, could eventually help meet the need for nighttime lighting in urban areas or provide lighting in developing countries.

Using commercially available technology, the research team has designed an off-grid, low-cost modular energy source that can efficiently produce power at night.

Although solar power brings many benefits, its use depends heavily on the distribution of sunlight, which can be limited in many locations and is completely unavailable at night. Systems that store energy produced during the day are typically expensive, thus driving up the cost of using solar power.

To find a less-expensive alternative, researchers led by professor Shanhui Fan looked to radiative cooling. Their approach uses the temperature difference resulting from heat absorbed from the surrounding air and the radiant cooling effect of cold space to generate electricity.

In The Optical Society (OSA) journal Optics Express, the researchers theoretically demonstrate an optimized radiative cooling approach that can generate 2.2 watts per square meter with a rooftop device that doesn't require a battery or any external energy. This is about 120 times the amount of energy that has been experimentally demonstrated and enough to power modular sensors such as those used in security or environmental applications.
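As a back-of-envelope check on what that figure means (the panel area, night length, and LED load below are our illustrative assumptions, not from the paper):

```python
# Back-of-envelope: what 2.2 W per square meter buys overnight.
power_density_w_per_m2 = 2.2   # from the Optics Express result
area_m2 = 1.0                  # rooftop device area (assumed)
night_hours = 12.0             # length of night (assumed)

energy_wh = power_density_w_per_m2 * area_m2 * night_hours
print(f"energy harvested: {energy_wh:.1f} Wh per night")   # about 26 Wh

led_load_w = 2.0               # a small LED lamp (assumed load)
print(f"runtime for a {led_load_w:.0f} W LED: {energy_wh / led_load_w:.0f} hours")
```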

"We are working to develop high-performance, sustainable lighting generation that can provide everyone–including those in developing and rural areas–access to reliable and sustainable low cost lighting energy sources," said Lingling Fan, EE PhD candidate and first author of the paper. "A modular energy source could also power off-grid sensors used in a variety of applications and be used to convert waste heat from automobiles into usable power."

Additional authors include Wei Li (EE PhD candidate), postdoctoral researcher Weiliang Jin, and Meir Orenstein (Technion-Israel Institute of Technology).

 

 

Excerpted from Science Daily, "Efficient low-cost system for producing power at night".

 

image of prof. Gordon Wetzstein
August 2020

Professor Gordon Wetzstein and team use AI to revolutionize real-time holography.

"The big challenge has been that we don't have algorithms that are good enough to model all the physical aspects of how light propagates in a complex optical system such as AR eyeglasses," reports Gordon. "The algorithms we have at the moment are limited in two ways. They're computationally inefficient, so it takes too long to constantly update the images. And in practice, the images don't look that good."


Gordon says the new approach makes big advances in both real-time image generation and image quality. In head-to-head comparisons, he says, their "Holonet" neural network generated clearer and more accurate 3-D images, on the spot, than traditional holographic software.
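For a sense of what that traditional baseline looks like: computer-generated holograms are classically computed with iterative phase-retrieval algorithms such as Gerchberg-Saxton. Here is a minimal sketch of that classical approach (not Holonet, whose internals the excerpt doesn't cover):

```python
import numpy as np

rng = np.random.default_rng(0)

def gerchberg_saxton(target_amplitude, iters=50):
    """Classical iterative phase retrieval for a far-field hologram.

    Finds a phase-only pattern whose Fourier transform approximates the
    target image. This is the style of traditional algorithm that learned
    approaches aim to beat on both speed and image quality.
    """
    random_phase = np.exp(2j * np.pi * rng.random(target_amplitude.shape))
    field = target_amplitude * random_phase
    for _ in range(iters):
        slm = np.fft.ifft2(field)                  # back to the hologram plane
        slm = np.exp(1j * np.angle(slm))           # enforce phase-only (SLM) constraint
        field = np.fft.fft2(slm)                   # propagate to the image plane
        field = target_amplitude * np.exp(1j * np.angle(field))  # enforce target image
    return np.angle(slm)

# Usage: compute a phase hologram for a bright square.
target = np.zeros((128, 128))
target[48:80, 48:80] = 1.0
phase = gerchberg_saxton(target)
print(phase.shape)   # pattern a phase-only spatial light modulator would display
```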

That has big practical applications for virtual and augmented reality, well beyond the obvious arenas of gaming and virtual meetings. Real-time holography has tremendous potential for education, training, and remote work. An aircraft mechanic, for example, could learn by exploring the inside of a jet engine thousands of miles away, or a cardiac surgeon could practice a particularly challenging procedure.

The system was created by Professor Gordon Wetzstein together with Yifan Peng, a postdoctoral fellow in computer science; Suyeon Choi, an EE PhD candidate; Nitish Padmanaban, EE PhD '20; and Jonghyun Kim, a senior research scientist at Nvidia Corp.



Excerpted from "Using AI to Revolutionize Real-Time Holography", August 17, 2020

image of professor emeritus James F. Gibbons
August 2020

James Gibbons has always been ahead of the times.

 

In a Q&A conversation with Stanford Engineering, Professor Emeritus James Gibbons shares lessons in remote learning experiments from the 1970s.

At the time of the research, James was asked to join President Nixon's Science Advisory Council, which was studying the effectiveness of televised education, dubbed "Tutored Video Instruction", or TVI.

A subset of the Science Advisory Council started by reviewing a very large study comparing televised classes with live classes. The study covered every subject from math to the arts, and every level from kindergarten to a baccalaureate degree. It was a huge study, spanning 363 different experiments.

The overall answer was: There is no significant difference in student learning between TV and live instruction.

As the technology evolved, James and his colleagues began working with Sun Microsystems to create what was called distributed tutored video instruction, or DTVI. He reports:

 

"We imagined the students to be remote from each other. We provided each of them with a microphone and a video camera to support remote communication within the group. We did an experiment at two campuses of the California State University system where we had 700 students at the two universities. We ran a regular lecture, a TVI group and a DTVI group for every class. The DTVI students were in their own rooms, connected to each other through our early version of the internet."

image of Gibbons' TVI research

"Sound familiar? Well, it should. This is exactly what Zoom does, right? In fact, it looked just like Zoom in the gallery view, with everyone wearing headsets and so forth. The results showed about the same performance academically, between TVI and DTVI, with each of them being superior to the live lecture class over a range of subjects."

 


In these days of COVID-19, everyone from parents to teachers to school administrators, not to mention the students themselves, is worried about how this nationwide experiment in online learning is going to work out.

If James' research findings are any guide, there should be no significant difference between online and in-person learning.

 

To read Stanford Engineering's Q&A article in its entirety, see "Lessons in remote learning from the 1970s: A Q&A with James Gibbons".

 
