research

Nikhil Garg, EE PhD '20: interdisciplinary research using machine learning
April 2018

Lead author Nikhil Garg (PhD candidate '20) demonstrates that word embeddings can be used as a powerful tool to quantify historical trends and social change. His research team developed metrics based on word embeddings to characterize how gender stereotypes and attitudes toward ethnic minorities in the United States evolved over the 20th and early 21st centuries, starting in 1910. Their framework opens up a fruitful intersection between machine learning and quantitative social science.

Nikhil co-authored the paper with history Professor Londa Schiebinger, linguistics and computer science Professor Dan Jurafsky and biomedical data science Professor James Zou.

Their research shows that, over the past century, linguistic changes in gender and ethnic stereotypes correlated with major social movements and demographic changes in the U.S. Census data.

The researchers used word embeddings – an algorithmic technique that can map relationships and associations between words – to measure changes in gender and ethnic stereotypes over the past century in the United States. They analyzed large databases of American books, newspapers and other texts and looked at how those linguistic changes correlated with actual U.S. Census demographic data and major social shifts such as the women's movement in the 1960s and the increase in Asian immigration, according to the research.
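To make the approach concrete, here is a minimal sketch of an embedding-based association score in Python, assuming a dictionary of word vectors (for example, decade-specific embeddings trained on historical text). The word lists, the hypothetical `load_decade_embeddings` loader and the cosine-centroid score are illustrative simplifications, not the exact relative-norm metric used in the paper.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def group_bias(embeddings, target_words, group_a, group_b):
    """Average association of target_words (e.g., occupations or adjectives)
    with group_a words minus their association with group_b words.
    A positive score means the targets sit closer to group_a in the embedding.
    embeddings: dict mapping word -> numpy vector."""
    def centroid(words):
        return np.mean([embeddings[w] for w in words if w in embeddings], axis=0)

    a_c, b_c = centroid(group_a), centroid(group_b)
    scores = [cosine(embeddings[w], a_c) - cosine(embeddings[w], b_c)
              for w in target_words if w in embeddings]
    return float(np.mean(scores))

# Illustrative usage: track how occupation words shift toward female- or
# male-associated words across decades (load_decade_embeddings is hypothetical).
# for decade in range(1910, 2000, 10):
#     emb = load_decade_embeddings(decade)
#     print(decade, group_bias(emb, ["engineer", "nurse", "teacher"],
#                              ["she", "her", "woman"], ["he", "him", "man"]))
```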

"Word embeddings can be used as a microscope to study historical changes in stereotypes in our society," said James Zou, a courtesy professor of electrical engineering. "Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic and sociological analyses of biases."

"This type of research opens all kinds of doors to us," Schiebinger said. "It provides a new level of evidence that allow humanities scholars to go after questions about the evolution of stereotypes and biases at a scale that has never been done before."

"The starkness of the change in stereotypes stood out to me," Garg said. "When you study history, you learn about propaganda campaigns and these outdated views of foreign groups. But how much the literature produced at the time reflected those stereotypes was hard to appreciate." 

The new research illuminates the value of interdisciplinary teamwork between humanities and the sciences, researchers said.

"This led to a very interesting and fruitful collaboration," Schiebinger said, adding that members of the group are working on further research together. "It underscores the importance of humanists and computer scientists working together. There is a power to these new machine-learning methods in humanities research that is just being understood." 


 

Proceedings of the National Academy of Sciences, "Word embeddings quantify 100 years of gender and ethnic stereotypes," April 3, 2018.

Excerpted from Stanford News, "Stanford researchers use machine-learning algorithm to measure changes in gender, ethnic bias in U.S." April 3, 2018.

 

 

 

Graduate student David Lindell and Matt O’Toole, a post-doctoral scholar, work in the lab. (Image credit: L.A. Cicero)
March 2018

A driverless car is making its way through a winding neighborhood street, about to make a sharp turn onto a road where a child’s ball has just rolled. Although no person in the car can see that ball, the car stops to avoid it. This is because the car is outfitted with extremely sensitive laser technology that reflects off nearby objects to see around corners.

“It sounds like magic but the idea of non-line-of-sight imaging is actually feasible,” said Gordon Wetzstein, assistant professor of electrical engineering and senior author of the paper describing this work, published March 5 in Nature.


Image credit: L. Cicero
February 2018


Krishna Shenoy and his team have been researching the use of brain-machine interfaces (BMIs) to assist people with paralysis. Recently, one of the researchers changed the task so that physical movement had to follow from a change in thought, and realized that the BMI would allow the team to study the mental rehearsal that occurs before physical expression.

Although there are some important caveats, the results could point the way toward a deeper understanding of what mental rehearsal is and, the researchers believe, to a future where brain-machine interfaces, usually thought of as prosthetics for people with paralysis, are also tools for understanding the brain.

"Mental rehearsal is tantalizing, but difficult to study," said Saurabh Vyas, a graduate student in bioengineering and the paper's lead author. That's because there's no easy way to peer into a person's brain as he imagines himself racing to a win or practicing a performance. "This is where we thought brain-machine interfaces could be that lens, because they give you the ability to see what the brain is doing even when they're not actually moving," he said.

"We can't prove the connection beyond a shadow of a doubt," Krishna said, but "this is a major step in understanding what mental rehearsal may well be in all of us." The next steps, he and Vyas said, are to figure out how mental rehearsal relates to practice with a brain-machine interface – and how mental preparation, the key ingredient in transferring that practice to physical movements, relates to movement.

Meanwhile, Krishna said, the results demonstrate the potential of an entirely new tool for studying the mind. "It's like building a new tool and using it for something," he said. "We used a brain-machine interface to probe and advance basic science, and that's just super exciting."

Additional Stanford authors are Nir Even-Chen, a graduate student in electrical engineering, Sergey Stavisky, a postdoctoral fellow in neurosurgery, Stephen Ryu, an adjunct professor of electrical engineering, and Paul Nuyujukian, an assistant professor of bioengineering and of neurosurgery and a member of Stanford Bio-X and the Stanford Neurosciences Institute.

Funding for the study came from the National Institutes of Health, the National Science Foundation, a Ric Weiland Stanford Graduate Fellowship, a Bio-X Bowes Fellowship, the ALS Association, the Defense Advanced Research Projects Agency, the Simons Foundation and the Howard Hughes Medical Institute.

Excerpted from Stanford News, "Mental rehearsal prepares our minds for real-world action, Stanford researchers find," February 16, 2018.

 

Related News:

Research by PhD candidate and team detects errors from Neural Activity, November 2017.

Krishna Shenoy's translation device; turning thought into movement, March 2017.

Brain-Sensing Tech Developed by Krishna Shenoy and Team, September 2016.

Krishna Shenoy receives Inaugural Professorship, February 2017.

 

February 2018

Angad Rekhi (PhD candidate) and Amin Arbabian have developed a wake-up receiver that turns on a device in response to incoming ultrasonic signals – signals outside the range that humans can hear. By switching from radio waves to ultrasound, which has a significantly smaller wavelength, this receiver is much smaller than similar wake-up receivers that respond to radio signals, while operating at extremely low power and with extended range.
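To get a rough sense of why ultrasound allows a smaller receiver, here is a back-of-the-envelope wavelength comparison; the frequencies below are illustrative assumptions, since the article does not state the receiver's operating frequency. Because transducer and antenna dimensions scale with wavelength, a millimeter-scale ultrasonic wavelength permits a far smaller front end than a centimeter-scale radio wavelength.

```python
# Back-of-the-envelope wavelength comparison (illustrative frequencies only).
SPEED_OF_SOUND_AIR = 343.0   # m/s, approximate in air at room temperature
SPEED_OF_LIGHT = 3.0e8       # m/s

def wavelength(speed_m_per_s, freq_hz):
    """Wavelength = propagation speed / frequency."""
    return speed_m_per_s / freq_hz

ultrasound_mm = wavelength(SPEED_OF_SOUND_AIR, 40e3) * 1e3   # ~8.6 mm at 40 kHz
radio_mm = wavelength(SPEED_OF_LIGHT, 2.4e9) * 1e3           # ~125 mm at 2.4 GHz

print(f"40 kHz ultrasound wavelength: {ultrasound_mm:.1f} mm")
print(f"2.4 GHz radio wavelength:     {radio_mm:.1f} mm")
```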

This wake-up receiver has many potential applications, particularly in designing the next generation of networked devices, including so-called "smart" devices that can communicate directly with one another without human intervention.

"As technology advances, people use it for applications that you could never have thought of. The internet and the cellphone are two great examples of that," said Rekhi. "I'm excited to see how people will use wake-up receivers to enable the next generation of the Internet of Things."

Excerpted from Stanford News, "Stanford researchers develop new method for waking up small electronic devices", February 12, 2018

 

Related news:

Amin's Research Team Powers Tiny Implantable Devices, December 2017.

Stanford Team led by Amin Arbabian receives DOE ARPA-E Award, January 2017.

Amin Arbabian receives Tau Beta Pi Undergrad Teaching Award, June 2016.

PhD candidate Nir Even-Chen
November 2017

PhD candidate Nir Even-Chen, his advisor Professor Krishna Shenoy, and colleagues share recent strides in brain-machine interface (BMI) innovation. BMIs are devices that record neural activity from the user's brain and translate it into movement of prosthetic devices. They enable people with motor impairments, such as a spinal cord injury, to control prosthetic devices with their minds: a robotic arm to improve their independence, or a computer cursor for typing and browsing the web. The team's recently published paper, "Augmenting intracortical brain-machine interface with neurally driven error detectors," describes a new system that reads the user's mind, detects when the user perceives a mistake, and intervenes with a corrective action. The new system allows users to control BMIs more easily, smoothly and efficiently.

While most BMI studies focus on designing better techniques to infer the user's movement intention, Even-Chen and colleagues improved BMI performance by taking a very different approach: detecting and undoing mistakes. Their work presents both novel fundamental science and a working implementation of the idea. They showed for the first time that it is possible to detect key-selection errors from the motor cortex, a brain area mainly involved in movement control, and then used that signal in real time to undo, or even prevent, mistakes.

The need for real-time error correction

In our daily lives, we all make mistakes, from typos while texting and clicking the wrong link on a web page to knocking over a cup of coffee while reaching for the cake. Correcting these mistakes can be time-consuming and annoying, especially when they occur frequently during challenging tasks. Imagine a system that could detect, or even predict, your mistakes (e.g., typos) and automatically undo or prevent them. This would save the time spent manually correcting the mistake, especially when errors are frequent and the corrective actions slow you down. Error detection is not always trivial; in some cases only the person who made the mistake knows what she intended, so such a system needs to infer that intention, i.e., read her mind. An automatic error detection system is most effective when the task is challenging or our skill is limited and errors are common. A BMI system is a good candidate for testing this approach: first, BMIs enable a readout of the user's mind, and second, error detection can be highly beneficial for BMI users, since BMI control is challenging and prone to errors.

Intracortical BMIs, which record neural activity directly from the brain, have shown promising results in pilot clinical trials and are the highest-performing BMI systems to date. This makes them prime candidates for serving as an assistive technology for people with paralysis. Although the performance of intracortical BMI systems has markedly improved over the last two decades, errors, such as selecting the wrong key during typing, still occur, and performance remains far below that of able-bodied movement.

Previously, it was unknown whether errors could be detected from the same brain region traditionally used for decoding a BMI user's movement intention, the motor cortex. In their work, Even-Chen and colleagues found that when errors occur, a characteristic pattern of brain activity can be observed. That pattern enabled them to detect mistakes with high accuracy shortly after, and sometimes even before, they occurred.

This finding encouraged them to develop and implement a first-of-its-kind error "detect-and-act" system. The system reads the user's mind, detects when the user thinks an error has occurred, and can automatically undo or prevent it. The detect-and-act system works independently of, and in parallel with, a traditional movement BMI that estimates the user's movement intention. In a challenging BMI task that produced substantial errors, this approach improved the performance of the BMI: hard tasks had fewer errors and became easier, and use of the BMI became smoother and less frustrating.
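The parallel structure of the detect-and-act idea can be sketched in a few lines of Python. The decoder and detector objects, the interface methods and the 0.9 threshold are all hypothetical placeholders for illustration, not the authors' implementation, which decodes intracortical activity in real time.

```python
def bmi_control_step(neural_activity, movement_decoder, error_detector, interface):
    """One control cycle of a detect-and-act BMI (illustrative sketch).
    Two decoders run in parallel on the same neural activity:
      * movement_decoder estimates the intended cursor or arm movement
      * error_detector estimates whether the user just perceived a mistake"""
    intended_move = movement_decoder.decode(neural_activity)
    p_error = error_detector.decode(neural_activity)

    if p_error > 0.9:                      # threshold chosen for illustration
        interface.undo_last_selection()    # e.g., delete the last typed key
    else:
        interface.apply(intended_move)     # the ordinary BMI control path
```

The point of the sketch is that the error detector does not replace the movement decoder; it runs alongside it and vetoes or undoes its output.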

A detect-and-act system could potentially improve how quickly people with paralysis type or control a robotic arm with a BMI, for example by automatically correcting a typo or stopping a robotic arm that is about to knock over a cup of coffee. While this work was done in a preclinical study with monkeys, Even-Chen and colleagues have also presented encouraging preliminary results from a clinical trial (BrainGate2) at a conference, showing the potential for translation to humans.

 

Read more: Journal of Neural Engineering, "Augmenting intracortical brain-machine interface with neurally driven error detectors."
Additional authors include Sergey Stavisky, Jonathan Kao, Stephen Ryu, and Krishna Shenoy. 

 

July 2017

One day soon we may live in smart houses that cater to our habits and needs, or ride in autonomous cars that rely on embedded sensors to provide safety and convenience. But today's electronic devices may not be able to handle the deluge of data such applications portend because of limitations in their materials and design, according to the authors of a Stanford-led experiment recently published in Nature.

To begin with, silicon transistors are no longer improving at their historic rate, which threatens to end the promise of smaller, faster computing known as Moore's Law. A second and related reason is computer design, say senior authors and Stanford EE professors Subhasish Mitra and H.-S. Philip Wong. Today's computers rely on separate logic and memory chips. These are laid out in two dimensions, like houses in a suburb, and connected by tiny wires, or interconnects, that become bottlenecked with data traffic.

Now, the Stanford team has created a chip that breaks this bottleneck in two ways: first, by using nanomaterials not based on silicon for both logic and memory, and second, by stacking these computation and storage layers vertically, like floors in a high-rise, with a plethora of elevator-like interconnects between the "floors" to eliminate delays. "This is the largest and most complex nanoelectronic system that has so far been made using the materials and nanotechnologies that are emerging to leapfrog silicon," said Mitra.

The team, whose other Stanford members include EE professors Roger Howe and Krishna Saraswat, integrated over 2 million non-silicon transistors and 1 million memory cells, in addition to on-chip sensors for detecting gases – a proof of principle for other tasks yet to be devised. "Electronic devices of these materials and three-dimensional design could ultimately give us computational systems 1,000 times more energy-efficient than anything we can build of silicon," Wong said.

First author Max Shulaker (PhD '16), who performed this work while a PhD candidate, is now an assistant professor at MIT and core member of its Microsystems Technology Laboratories. He explained in a single word why the team had to use emerging nanotechnologies and not conventional silicon technologies to achieve the high-rise design: heat. "Building silicon transistors involves temperatures of over 1,000 degrees Celsius," Shulaker said. "If you try to build a second layer on top of the first, you'll damage the bottom layer. This is why chips today have a single layer of circuitry."

The magic of the materials

The new prototype chip is a radical change from today's chips because it uses multiple nanotechnologies that can be fabricated at relatively low heat, Shulaker explained. Instead of relying on silicon-based transistors, the new chip uses carbon nanotubes, or CNTs, to perform computations. CNTs are sheets of 2-D carbon formed into nanocylinders. The new Nature paper incorporates prior ground-breaking work by this team in developing the world's first all-CNT computer.

The memory component of the new chip also relied on new processes and materials improved upon by this team. Called resistive random-access memory (RRAM), this is a type of nonvolatile memory — meaning that it doesn't lose data when the power is turned off – that operates by changing the resistance of a solid dielectric material.

The key in this work is that CNT circuits and RRAM memory can be fabricated at temperatures below 200 Celsius. "This means they can be built up in layers without harming the circuits beneath," Shulaker says. "This truly is a remarkable feat of engineering," says Barbara De Salvo, scientific director at CEA-LETI, France, an international expert not connected with this project.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting a plethora of wires between these layers, this 3-D architecture promises to address the communication bottleneck. "In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips," Saraswat said.

To demonstrate the potential of the technology, the researchers placed over a million carbon nanotube-based sensors on the surface of the chip, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage and computing, the chip was able to measure each of the sensors in parallel and then write directly into its memory, generating huge bandwidth without risk of hitting a bottleneck, because the 3-D design made it unnecessary to move data between chips. In fact, even though Shulaker built the chip using the limited capabilities of an academic fabrication facility, the peak bandwidth between vertical layers of the chip could potentially approach and exceed the peak memory bandwidth of the most sophisticated silicon-based technologies available today.

System benefits

This provides several simultaneous benefits for future computing systems.

"As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information," Mitra says.

Energy efficiency is another benefit. "Logic made from carbon nanotubes will be ten times more energy efficient than today's logic made from silicon," Wong said. "RRAM can also be denser, faster and more energy-efficient than the memory we use today."

Thanks to the ground-breaking approach embodied by the Nature paper, the work is getting attention from leading scientists who are not directly connected with the research. Jan Rabaey, a professor of electrical engineering and computer sciences at the University of California, Berkeley, said 3-D chip architecture is such a fundamentally different approach that it may have other, more futuristic benefits to the advance of computing. "These [3-D] structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets," Rabaey said, adding, "The approach presented by the authors is definitely a great first step in that direction."

 

This work was funded by the Defense Advanced Research Projects Agency, National Science Foundation, Semiconductor Research Corporation, STARnet SONIC and member companies of the Stanford SystemX Alliance.


 

This story is a revised version of a press release by MIT News correspondent Helen Knight.

August 2017

The next generation of feature-filled and energy-efficient electronics will require computer chips just a few atoms thick. For all its positive attributes, trusty silicon can't take us to these ultrathin extremes.

Now, electrical engineers at Stanford have identified two semiconductors – hafnium diselenide and zirconium diselenide – that share or even exceed some of silicon's desirable traits, starting with the fact that all three materials can "rust."

"It's a bit like rust, but a very desirable rust," said Eric Pop, an associate professor of electrical engineering, who co-authored with post-doctoral scholar Michal Mleczko a paper that appears in the journal Science Advances.

The new materials can also be shrunk to functional circuits just three atoms thick and they require less energy than silicon circuits. Although still experimental, the researchers said the materials could be a step toward the kinds of thinner, more energy-efficient chips demanded by devices of the future.

Silicon's strengths
Silicon has several qualities that have led it to become the bedrock of electronics, Pop explained. One is that it is blessed with a very good "native" insulator, silicon dioxide or, in plain English, silicon rust.

Exposing silicon to oxygen during manufacturing gives chip-makers an easy way to isolate their circuitry. Other semiconductors do not "rust" into good insulators when exposed to oxygen, so they must be layered with additional insulators, a step that introduces engineering challenges. Both of the diselenides the Stanford group tested formed this elusive, yet high-quality insulating rust layer when exposed to oxygen.

Not only do both ultrathin semiconductors rust, they do so in a way that is even more desirable than silicon. They form what are called "high-K" insulators, which enable lower power operation than is possible with silicon and its silicon oxide insulator.

As the Stanford researchers started shrinking the diselenides to atomic thinness, they realized that these ultrathin semiconductors share another of silicon's secret advantages: the energy needed to switch transistors on – a critical step in computing, called the band gap – is in a just-right range. Too low and the circuits leak and become unreliable. Too high and the chip takes too much energy to operate and becomes inefficient. Both materials were in the same optimal range as silicon.

All this and the diselenides can also be fashioned into circuits just three atoms thick, or about two-thirds of a nanometer, something silicon cannot do.

The combination of thinner circuits and desirable high-K insulation means that these ultrathin semiconductors could be made into transistors 10 times smaller than anything possible with silicon today.

"Silicon won't go away. But for consumers this could mean much longer battery life and much more complex functionality if these semiconductors can be integrated with silicon," Pop said.

More work to do
There is much work ahead. First, Mleczko and Pop must refine the electrical contacts between transistors on their ultrathin diselenide circuits. "These connections have always proved a challenge for any new semiconductor, and the difficulty becomes greater as we shrink circuits to the atomic scale," Mleczko said.

They are also working to better control the oxidized insulators to ensure they remain as thin and stable as possible. Last, but not least, only when these things are in order will they begin to integrate with other materials and then to scale up to working wafers, complex circuits and, eventually, complete systems.

"There's more research to do, but a new path to thinner, smaller circuits – and more energy-efficient electronics – is within reach," Pop said.


 

 

Reprinted from Stanford Magazine, "New semiconductor materials exceed some of silicon's 'secret' powers," August 14, 2017.

October 2017

It looks like a regular roof, but the top of the Packard Electrical Engineering Building at Stanford University has been the setting of many milestones in the development of an innovative cooling technology that could someday be part of our everyday lives.

Since 2013, Shanhui Fan, professor of electrical engineering, and his students and research associates have employed this roof as a testbed for a high-tech mirror-like optical surface that could be the future of lower-energy air conditioning and refrigeration.

Research published in 2014 first showed the cooling capabilities of the optical surface on its own. Now, Fan and former research associates Aaswath Raman and Eli Goldstein have shown that a system involving these surfaces can cool flowing water to a temperature below that of the surrounding air. The entire cooling process is done without electricity.

"This research builds on our previous work with radiative sky cooling but takes it to the next level. It provides for the first time a high-fidelity technology demonstration of how you can use radiative sky cooling to passively cool a fluid and, in doing so, connect it with cooling systems to save electricity," said Raman, who is co-lead author of the paper detailing this research, published in Nature Energy Sept. 4, 2017.

Together, Fan, Goldstein and Raman have founded the company SkyCool Systems, which is working on further testing and commercializing this technology.

Radiative sky cooling is a natural process that everyone and everything does, resulting from the movements of molecules releasing heat. You can witness it for yourself in the heat that comes off a road as it cools after sunset. This phenomenon is particularly noticeable on a cloudless night because, without clouds, the heat we and everything around us radiate can more easily make it through Earth's atmosphere, all the way to the vast, cold reaches of space.

"If you have something that is very cold – like space – and you can dissipate heat into it, then you can do cooling without any electricity or work. The heat just flows," explained Fan, who is senior author of the paper. "For this reason, the amount of heat flow off the Earth that goes to the universe is enormous."

Although our own bodies release heat through radiative cooling to both the sky and our surroundings, we all know that on a hot, sunny day, radiative sky cooling isn't going to live up to its name. This is because the sunlight will warm you more than radiative sky cooling will cool you. To overcome this problem, the team's surface uses a multilayer optical film that reflects about 97 percent of the sunlight while simultaneously being able to emit the surface's thermal energy through the atmosphere. Without heat from sunlight, the radiative sky cooling effect can enable cooling below the air temperature even on a sunny day.
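The reasoning in the preceding paragraph amounts to a simple energy budget, sketched below with illustrative numbers rather than values from the paper: with roughly 97 percent of sunlight reflected, the surface absorbs only a few tens of watts per square meter of solar heat, while a surface near room temperature radiates several hundred watts per square meter, part of which escapes through the atmospheric window to space.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

# Illustrative assumptions, not measurements from the Nature Energy paper:
solar_irradiance = 1000.0    # W/m^2, typical peak sunlight
reflectance = 0.97           # the film reflects about 97% of sunlight
surface_temp_k = 300.0       # surface near 27 degrees C

absorbed_sunlight = (1 - reflectance) * solar_irradiance   # ~30 W/m^2
thermal_emission = SIGMA * surface_temp_k ** 4             # ~459 W/m^2 (ideal blackbody upper bound)

print(f"Solar heating absorbed:             {absorbed_sunlight:.0f} W/m^2")
print(f"Thermal emission (ideal blackbody): {thermal_emission:.0f} W/m^2")
# Because only tens of W/m^2 of sunlight get absorbed while hundreds of W/m^2
# can be radiated away, the panel can shed more heat than it gains in full sun.
```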

A fluid-cooling panel designed at Stanford being tested on the roof of the Packard Electrical Engineering Building
Photo credit: Aaswath Raman

The experiments published in 2014 were performed using small wafers of a multilayer optical surface, about 8 inches in diameter, and only showed how the surface itself cooled. Naturally, the next step was to scale up the technology and see how it works as part of a larger cooling system.

Putting radiative sky cooling to work
For their latest paper, the researchers created a system where panels covered in the specialized optical surfaces sat atop pipes of running water and tested it on the roof of the Packard Building in September 2015. These panels were slightly more than 2 feet in length on each side and the researchers ran as many as four at a time. With the water moving at a relatively fast rate, they found the panels were able to consistently reduce the temperature of the water 3 to 5 degrees Celsius below ambient air temperature over a period of three days.

The researchers also applied data from this experiment to a simulation where their panels covered the roof of a two-story commercial office building in Las Vegas – a hot, dry location where their panels would work best – and contributed to its cooling system. They calculated how much electricity they could save if, in place of a conventional air-cooled chiller, they used a vapor-compression system with a condenser cooled by their panels. They found that, in the summer months, the panel-cooled system would save 14.3 megawatt-hours of electricity, a 21 percent reduction in the electricity used to cool the building. Over the entire period, the daily electricity savings fluctuated from 18 percent to 50 percent.

Broad applicability in the years to come
Right now, SkyCool Systems is measuring the energy saved when panels are integrated with traditional air conditioning and refrigeration systems at a test facility, and Fan, Goldstein and Raman are optimistic that this technology will find broad applicability in the years to come.

The researchers are focused on making their panels integrate easily with standard air conditioning and refrigeration systems and they are particularly excited at the prospect of applying their technology to the serious task of cooling data centers.

Fan has also carried out research on various other aspects of radiative cooling technology. He and Raman have applied the concept of radiative sky cooling to the creation of an efficiency-boosting coating for solar cells. With Yi Cui, a professor of materials science and engineering at Stanford and of photon science at SLAC National Accelerator Laboratory, Fan developed a cooling fabric.

"It's very intriguing to think about the universe as such an immense resource for cooling and all the many interesting, creative ideas that one could come up with to take advantage of this," he said. 


 

Reprinted from Stanford Engineering Magazine, "How a new cooling system works without using any electricity," September 8, 2017.

September 2017

Our phones and devices simply tell us where to go — and how long it will take to get there. But what are the risks? In the Future of Everything radio show, Professor Per Enge, Aeronautics and Astronautics, EE by Courtesy, discusses the accuracy of the system, how to keep the signals safe, and how systems will continue to improve.

 

In partnership with SiriusXM, Stanford University launched Stanford Radio, a new university-based pair of radio programs. The programs are produced in collaboration with the School of Engineering and the Graduate School of Education.

"The Future of Everything" is from the School of Engineering and "School's In" is from the Graduate School of Education.

September 2017


In the Future of Everything radio show, Kwabena Boahen discusses the evolution of computers and how the next big step forward will be to design chips that behave more like the human brain.

Boahen is a professor of bioengineering and electrical engineering, exploring in his lab how these chips can interface with drones or with the human brain. "It's really early days," he says.



 
