research

July 2017

One day soon we may live in smart houses that cater to our habits and needs, or ride in autonomous cars that rely on embedded sensors to provide safety and convenience. But today's electronic devices may not be able to handle the deluge of data such applications portend because of limitations in their materials and design, according to the authors of a Stanford-led experiment recently published in Nature.

To begin with, silicon transistors are no longer improving at their historic rate, which threatens to end the promise of smaller, faster computing known as Moore's Law. A second and related reason is computer design, say senior authors and Stanford EE professors Subhasish Mitra and H.-S. Philip Wong. Today's computers rely on separate logic and memory chips. These are laid out in two dimensions, like houses in a suburb, and connected by tiny wires, or interconnects, that become bottlenecked with data traffic.

Now, the Stanford team has created a chip that breaks this bottleneck in two ways: first, by using nanomaterials not based on silicon for both logic and memory, and second, by stacking these computation and storage layers vertically, like floors in a high-rise, with a plethora of elevator-like interconnects between the "floors" to eliminate delays. "This is the largest and most complex nanoelectronic system that has so far been made using the materials and nanotechnologies that are emerging to leapfrog silicon," said Mitra.

The team, whose other Stanford members include EE professors Roger Howe and Krishna Saraswat, integrated over 2 million non-silicon transistors and 1 million memory cells, in addition to on-chip sensors for detecting gases – a proof of principle for other tasks yet to be devised. "Electronic devices of these materials and three-dimensional design could ultimately give us computational systems 1,000 times more energy-efficient than anything we can build of silicon," Wong said.

First author Max Shulaker (PhD '16), who performed this work while a PhD candidate, is now an assistant professor at MIT and core member of its Microsystems Technology Laboratories. He explained in a single word why the team had to use emerging nanotechnologies and not conventional silicon technologies to achieve the high-rise design: heat. "Building silicon transistors involves temperatures of over 1,000 degrees Celsius," Shulaker said. "If you try to build a second layer on top of the first, you'll damage the bottom layer. This is why chips today have a single layer of circuitry."

The magic of the materials

The new prototype chip is a radical change from today's chips because it uses multiple nanotechnologies that can be fabricated at relatively low heat, Shulaker explained. Instead of relying on silicon-based transistors, the new chip uses carbon nanotubes, or CNTs, to perform computations. CNTs are sheets of 2-D carbon formed into nanocylinders. The new Nature paper incorporates prior ground-breaking work by this team in developing the world's first all-CNT computer.

The memory component of the new chip also relied on new processes and materials improved upon by this team. Called resistive random-access memory (RRAM), it is a type of nonvolatile memory, meaning it retains data when the power is turned off, and it operates by changing the resistance of a solid dielectric material.

The key to this work is that CNT circuits and RRAM memory can be fabricated at temperatures below 200 degrees Celsius. "This means they can be built up in layers without harming the circuits beneath," Shulaker says. "This truly is a remarkable feat of engineering," says Barbara De Salvo, scientific director at CEA-LETI, France, an international expert not connected with this project.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting a plethora of wires between these layers, this 3-D architecture promises to address the communication bottleneck. "In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips," Saraswat said.

To demonstrate the potential of the technology, the researchers placed over a million carbon nanotube-based sensors on the surface of the chip, which they used to detect and classify ambient gases.

Because sensing, data storage and computing are layered on top of one another, the chip could measure each of its sensors in parallel and write the results directly into its memory. The 3-D design eliminates the need to shuttle data between chips, generating huge bandwidth without risk of hitting a bottleneck. In fact, even though Shulaker built the chip using the limited capabilities of an academic fabrication facility, the peak bandwidth between its vertical layers could potentially approach and even exceed the peak memory bandwidth of the most sophisticated silicon-based technologies available today.

System benefits

This layered design provides several simultaneous benefits for future computing systems.

"As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information," Mitra says.

Energy efficiency is another benefit. "Logic made from carbon nanotubes will be ten times more energy efficient than today's logic made from silicon," Wong said. "RRAM can also be denser, faster and more energy-efficient than the memory we use today."

Thanks to the ground-breaking approach embodied by the Nature paper, the work is getting attention from leading scientists who are not directly connected with the research. Jan Rabaey, a professor of electrical engineering and computer sciences at the University of California, Berkeley, said 3-D chip architecture is such a fundamentally different approach that it may have other, more futuristic benefits to the advance of computing. "These [3-D] structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets," Rabaey said, adding, "The approach presented by the authors is definitely a great first step in that direction."

 

This work was funded by the Defense Advanced Research Projects Agency, National Science Foundation, Semiconductor Research Corporation, STARnet SONIC and member companies of the Stanford SystemX Alliance.


 

This story is a revised version of a press release by MIT News correspondent Helen Knight.

August 2017

The next generation of feature-filled and energy-efficient electronics will require computer chips just a few atoms thick. For all its positive attributes, trusty silicon can't take us to these ultrathin extremes.

Now, electrical engineers at Stanford have identified two semiconductors – hafnium diselenide and zirconium diselenide – that share or even exceed some of silicon's desirable traits, starting with the fact that all three materials can "rust."

"It's a bit like rust, but a very desirable rust," said Eric Pop, an associate professor of electrical engineering, who co-authored with post-doctoral scholar Michal Mleczko a paper that appears in the journal Science Advances.

The new materials can also be shrunk to functional circuits just three atoms thick and they require less energy than silicon circuits. Although still experimental, the researchers said the materials could be a step toward the kinds of thinner, more energy-efficient chips demanded by devices of the future.

Silicon's strengths
Silicon has several qualities that have led it to become the bedrock of electronics, Pop explained. One is that it is blessed with a very good "native" insulator, silicon dioxide or, in plain English, silicon rust.

Exposing silicon to oxygen during manufacturing gives chip-makers an easy way to isolate their circuitry. Other semiconductors do not "rust" into good insulators when exposed to oxygen, so they must be layered with additional insulators, a step that introduces engineering challenges. Both of the diselenides the Stanford group tested formed this elusive, yet high-quality insulating rust layer when exposed to oxygen.

Not only do both ultrathin semiconductors rust, they do so in a way that is even more desirable than silicon. They form what are called "high-K" insulators, which enable lower power operation than is possible with silicon and its silicon oxide insulator.
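
The link between a high dielectric constant and lower-power operation can be seen from the standard parallel-plate picture of a transistor gate (textbook background, not a result from the Science Advances paper):

\[
C_{\text{ox}} = \frac{\kappa\,\varepsilon_0}{t_{\text{ox}}}
\]

Here \(C_{\text{ox}}\) is the gate capacitance per unit area, \(\kappa\) the dielectric constant of the insulator, \(\varepsilon_0\) the permittivity of free space and \(t_{\text{ox}}\) the insulator thickness. A larger \(\kappa\) delivers the capacitance needed to switch the transistor firmly while allowing a physically thicker insulating layer, which suppresses gate leakage and supports lower-voltage, lower-power operation.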

As the Stanford researchers started shrinking the diselenides to atomic thinness, they realized that these ultrathin semiconductors share another of silicon's secret advantages: the energy needed to switch transistors on, a critical step in computing, known as the band gap, falls in a just-right range. Too low and the circuits leak and become unreliable. Too high and the chip takes too much energy to operate and becomes inefficient. Both materials sit in the same optimal range as silicon.

On top of that, the diselenides can be fashioned into circuits just three atoms thick, or about two-thirds of a nanometer, something silicon cannot do.

The combination of thinner circuits and desirable high-K insulation means that these ultrathin semiconductors could be made into transistors 10 times smaller than anything possible with silicon today.

"Silicon won't go away. But for consumers this could mean much longer battery life and much more complex functionality if these semiconductors can be integrated with silicon," Pop said.

More work to do
There is much work ahead. First, Mleczko and Pop must refine the electrical contacts between transistors on their ultrathin diselenide circuits. "These connections have always proved a challenge for any new semiconductor, and the difficulty becomes greater as we shrink circuits to the atomic scale," Mleczko said.

They are also working to better control the oxidized insulators to ensure they remain as thin and stable as possible. Last, but not least, only when these issues are in order will they begin to integrate the diselenides with other materials and then scale up to working wafers, complex circuits and, eventually, complete systems.

"There's more research to do, but a new path to thinner, smaller circuits – and more energy-efficient electronics – is within reach," Pop said.


 

 

Reprinted from Stanford Magazine, "New semiconductor materials exceed some of silicon's 'secret' powers," August 14, 2017.

October 2017

It looks like a regular roof, but the top of the Packard Electrical Engineering Building at Stanford University has been the setting of many milestones in the development of an innovative cooling technology that could someday be part of our everyday lives.

Since 2013, Shanhui Fan, professor of electrical engineering, and his students and research associates have employed this roof as a testbed for a high-tech mirror-like optical surface that could be the future of lower-energy air conditioning and refrigeration.

Research published in 2014 first showed the cooling capabilities of the optical surface on its own. Now, Fan and former research associates Aaswath Raman and Eli Goldstein have shown that a system involving these surfaces can cool flowing water to a temperature below that of the surrounding air. The entire cooling process is done without electricity.

"This research builds on our previous work with radiative sky cooling but takes it to the next level. It provides for the first time a high-fidelity technology demonstration of how you can use radiative sky cooling to passively cool a fluid and, in doing so, connect it with cooling systems to save electricity," said Raman, who is co-lead author of the paper detailing this research, published in Nature Energy Sept. 4, 2017.

Together, Fan, Goldstein and Raman have founded the company SkyCool Systems, which is working on further testing and commercializing this technology.

Radiative sky cooling is a natural process that everyone and everything does: the movement of molecules releases heat as thermal radiation. You can witness it for yourself in the heat that comes off a road as it cools after sunset. This phenomenon is particularly noticeable on a cloudless night because, without clouds, the heat we and everything around us radiate can more easily make it through Earth's atmosphere, all the way to the vast, cold reaches of space.

"If you have something that is very cold – like space – and you can dissipate heat into it, then you can do cooling without any electricity or work. The heat just flows," explained Fan, who is senior author of the paper. "For this reason, the amount of heat flow off the Earth that goes to the universe is enormous."

Although our own bodies release heat through radiative cooling to both the sky and our surroundings, we all know that on a hot, sunny day, radiative sky cooling isn't going to live up to its name. This is because the sunlight will warm you more than radiative sky cooling will cool you. To overcome this problem, the team's surface uses a multilayer optical film that reflects about 97 percent of the sunlight while simultaneously being able to emit the surface's thermal energy through the atmosphere. Without heat from sunlight, the radiative sky cooling effect can enable cooling below the air temperature even on a sunny day.
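
For readers who want the underlying physics, the cooling power of such a surface is usually described with a simple energy balance. The notation below is a schematic sketch, not taken verbatim from the group's papers:

\[
P_{\text{cool}}(T) = P_{\text{rad}}(T) - P_{\text{atm}}(T_{\text{amb}}) - P_{\text{sun}} - P_{\text{cond+conv}}
\]

Here \(P_{\text{rad}}\) is the thermal power the surface radiates at its own temperature \(T\), \(P_{\text{atm}}\) is the downward atmospheric radiation it absorbs at the ambient temperature \(T_{\text{amb}}\), \(P_{\text{sun}}\) is the absorbed sunlight (kept small by the roughly 97 percent reflectance) and \(P_{\text{cond+conv}}\) accounts for heat gained from the surrounding air by conduction and convection. The surface keeps dropping below the air temperature as long as \(P_{\text{cool}}\) remains positive.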

A fluid-cooling panel designed at Stanford being tested on the roof of the Packard Electrical Engineering Building
Photo credit: Aaswath Raman

The experiments published in 2014 were performed using small wafers of a multilayer optical surface, about 8 inches in diameter, and only showed how the surface itself cooled. Naturally, the next step was to scale up the technology and see how it works as part of a larger cooling system.

Putting radiative sky cooling to work
For their latest paper, the researchers created a system where panels covered in the specialized optical surfaces sat atop pipes of running water and tested it on the roof of the Packard Building in September 2015. These panels were slightly more than 2 feet in length on each side and the researchers ran as many as four at a time. With the water moving at a relatively fast rate, they found the panels were able to consistently reduce the temperature of the water 3 to 5 degrees Celsius below ambient air temperature over a period of three days.

The researchers also applied data from this experiment to a simulation where their panels covered the roof of a two-story commercial office building in Las Vegas, a hot, dry location where their panels would work best, and contributed to its cooling system. They calculated how much electricity they could save if, in place of a conventional air-cooled chiller, they used a vapor-compression system with a condenser cooled by their panels. They found that, in the summer months, the panel-cooled system would save 14.3 megawatt-hours of electricity, a 21 percent reduction in the electricity used to cool the building. Over that period, the daily electricity savings fluctuated from 18 percent to 50 percent.
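
To put those numbers in context, a back-of-envelope check using only the figures quoted above (not the paper's full model): if 14.3 megawatt-hours corresponds to a 21 percent reduction, the building's baseline cooling electricity over those summer months is roughly

\[
\frac{14.3\ \text{MWh}}{0.21} \approx 68\ \text{MWh},
\]

of which the panel-cooled condenser trims about a fifth.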

Broad applicability in the years to come
Right now, SkyCool Systems is measuring the energy saved when panels are integrated with traditional air conditioning and refrigeration systems at a test facility, and Fan, Goldstein and Raman are optimistic that this technology will find broad applicability in the years to come.

The researchers are focused on making their panels integrate easily with standard air conditioning and refrigeration systems and they are particularly excited at the prospect of applying their technology to the serious task of cooling data centers.

Fan has also carried out research on various other aspects of radiative cooling technology. He and Raman have applied the concept of radiative sky cooling to the creation of an efficiency-boosting coating for solar cells. With Yi Cui, a professor of materials science and engineering at Stanford and of photon science at SLAC National Accelerator Laboratory, Fan developed a cooling fabric.

"It's very intriguing to think about the universe as such an immense resource for cooling and all the many interesting, creative ideas that one could come up with to take advantage of this," he said. 


 

Reprinted from Stanford Engineering Magazine, "How a new cooling system works without using any electricity," September 8, 2017.

September 2017

Our phones and devices simply tell us where to go, and how long it will take to get there. But what are the risks? In the Future of Everything radio show, Per Enge, professor of aeronautics and astronautics and of electrical engineering by courtesy, discusses how accurate these navigation systems are, how to keep their signals safe, and how they will continue to improve.

 

In partnership with SiriusXM, Stanford University launched Stanford Radio, a new university-based pair of radio programs. The programs are produced in collaboration with the School of Engineering and the Graduate School of Education.

"The Future of Everything" is from the School of Engineering and "School's In" is from the Graduate School of Education.

September 2017


In the Future of Everything radio show, Kwabena Boahen discusses the evolution of computers and how the next big step forward will be to design chips that behave more like the human brain.

Boahen is a professor of bioengineering and electrical engineering, exploring in his lab how these chips can interface with drones or with the human brain. "It's really early days," he says.



 

Kai Zang (PhD '17)
October 2017

Kai Zang's (PhD '17) paper, published in Nature Communications, describes how nanotextured silicon can absorb more photons, furthering the effectiveness of solar cells. The research also led to a second application: improving collision-avoidance technology in vehicles.

Professor Jim Harris said he always thought Zang's texturing technique was a good way to improve solar cells. "But the huge ramp up in autonomous vehicles and LIDAR suddenly made this 100 times more important," he says.

The researchers figured out how to create a very thin layer of silicon that could absorb as many photons as a much thicker layer of the costly material. Specifically, rather than laying the silicon flat, they nanotextured the surface of the silicon in a way that created more opportunities for light particles to be absorbed. Their technique increased photon absorption rates for the nanotextured solar cells compared to traditional thin silicon cells, making more cost-effective use of the material.

After the researchers shared these efficiency figures, engineers working on autonomous vehicles began asking whether this texturing technique could help them get more accurate results from a collision-avoidance technology called LIDAR, which is conceptually like sonar except that it uses light rather than sound waves to detect objects in the car's travel path.
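
As general background rather than a result of this paper, LIDAR estimates distance from the round-trip time of each light pulse:

\[
d = \frac{c\,\Delta t}{2}
\]

where \(c\) is the speed of light and \(\Delta t\) is the time between emitting a pulse and detecting its reflection; the factor of two accounts for the out-and-back path. Capturing more of the faint returning photons, which is what the textured silicon promises, makes that timing measurement feasible for dimmer and more distant targets.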

In their Nature Communications paper, the team reports that their textured silicon can capture as many as three to six times more of the returning photons than today's LIDAR receivers. They believe this will enable self-driving car engineers to design high-performance, next-generation LIDAR systems that would continuously send out laser pulses in all directions. The reflected photons would be captured by an array of textured silicon detectors, creating moment-to-moment maps of pedestrian-filled city crosswalks.

Harris said the texturing technology could also help to solve two other LIDAR snags unique to self-driving cars: potential distortions caused by heat and the machine equivalent of peripheral vision. Read more at the Harris Group research website.

 

 

Excerpted from "A new way to improve solar cells can also benefit self-driving cars," Stanford Engineering, October 2, 2017.

Orly Liba (PhD candidate ’18)
July 2017

Orly Liba (PhD candidate '18) is the lead author of a study published in Nature Communications. Her advisor, Professor Adam de la Zerda, and fellow researchers have devised a way to improve the quality of images obtained through optical coherence tomography (OCT).

The relatively simple, low-cost fix — entailing a pair of lenses, a piece of ground glass and some software tweaks — erases blemishes that have bedeviled images obtained via OCT since its invention in 1991. This improvement, combined with the technology's ability to optically penetrate up to 2 millimeters into tissue, could enable physicians to perform "virtual biopsies," visualizing tissue in three dimensions at microscope-quality resolution without excising any tissue from patients.

Their study describes how the researchers tested the enhancement in two different commercially available OCT devices. They were able to view cell-scale features in intact tissues, including in a mouse's ear, retina and cornea, as well as Meissner's corpuscle, found in the skin of a human fingertip.

"We saw sebaceous glands, hair follicles, blood vessels, lymph vessels and more," Liba said.

Other Stanford co-authors of the study are former postdoctoral scholars Matthew Lew, PhD, and Debasish Sen, PhD; graduate student Elliott SoRelle; research assistant Rebecca Dutta; professor of ophthalmology Darius Moshfeghi, MD; and professor of physics and of molecular and cellular physiology Steven Chu, PhD.

 

 

Excerpted from "Scientists turbocharge high-resolution, 3-D imaging," published on Stanford Medicine's News Center, June 20, 2017

Professor Gordon Wetzstein, left; postdoctoral research fellow Donald Dansereau (Image credit: L.A. Cicero)
July 2017

A new 4D camera designed by Professor Gordon Wetzstein and postdoc Dr. Donald Dansereau captures light field information over a 138° field of view.

The difference between looking through a normal camera and the new design is like the difference between looking through a peephole and a window, the scientists said.

"A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering," Dansereau said. "Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."

That additional information comes from a type of photography called light field photography, first described in 1996 by EE Professors Marc Levoy and Pat Hanrahan. Light field photography captures the same image as a conventional 2D camera plus information about the direction and distance of the light hitting the lens, creating what's known as a 4D image. A well-known feature of light field photography is that it allows users to refocus images after they are taken because the images include information about the light position and direction. Robots might use this to see through rain and other things that could obscure their vision.
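
To make the "refocus after the fact" idea concrete for technically inclined readers, here is a minimal sketch of the classic shift-and-sum refocusing trick, written in Python with NumPy. It illustrates the general principle only; the function name, array layout and use of np.roll are illustrative assumptions, not the processing pipeline of the Stanford/UC San Diego camera.

import numpy as np

def refocus(light_field, shift_per_view):
    """Synthetic refocusing of a light field by shift-and-sum.

    light_field: 4-D array (u, v, y, x) of sub-aperture images, i.e. the
        same scene seen from a grid of slightly different viewpoints --
        the directional information a light field adds to a 2-D photo.
    shift_per_view: pixels of shift per unit of viewpoint offset; each
        value of this parameter focuses the result on a different depth.
    """
    n_u, n_v, height, width = light_field.shape
    out = np.zeros((height, width), dtype=np.float64)
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    for u in range(n_u):
        for v in range(n_v):
            # Shift each view in proportion to its offset from the central
            # viewpoint, then average. Scene points at the chosen depth line
            # up across views and stay sharp; everything else blurs.
            dy = int(round((u - cu) * shift_per_view))
            dx = int(round((v - cv) * shift_per_view))
            # np.roll wraps pixels around the edges -- a shortcut a real
            # implementation would replace with proper padding.
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (n_u * n_v)

With shift_per_view set to zero the result stays focused where the individual views already were; sweeping it through positive and negative values moves the synthetic focal plane nearer or farther, which is exactly the "refocus after capture" behavior described above.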

The extremely wide field of view, which encompasses nearly a third of the circle around the camera, comes from a specially designed spherical lens. However, this lens also posed a significant hurdle: how to translate a spherical image onto a flat sensor. Previous approaches to solving this problem had been heavy and error-prone, but combining the optics and fabrication expertise of UC San Diego with the signal processing and algorithmic expertise of Wetzstein's lab produced a digital solution that not only creates these extra-wide images but also enhances them.

This camera system's wide field of view, detailed depth information and potential compact size are all desirable features for imaging systems incorporated in wearables, robotics, autonomous vehicles and augmented and virtual reality.

"Many research groups are looking at what we can do with light fields but no one has great cameras. We have off-the-shelf cameras that are designed for consumer photography," said Dansereau. "This is the first example I know of a light field camera built specifically for robotics and augmented reality. I'm stoked to put it into peoples' hands and to see what they can do with it."

 

Two 138° light field panoramas and a depth estimate of the second panorama. (Image credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego)

 


Read more at Professor Wetzstein's research site, the Stanford Computational Imaging Lab.

Excerpted from Stanford News, "New camera designed by Stanford researchers could improve robot vision and virtual reality," July 21, 2017.

June 2017

If electric cars could recharge while driving down a highway, it would virtually eliminate concerns about their range and lower their cost, perhaps making electricity the standard fuel for vehicles.

Now Stanford University scientists have overcome a major hurdle to such a future by wirelessly transmitting electricity to a nearby moving object. Their results are published in the June 15 edition of Nature.

"In addition to advancing the wireless charging of vehicles and personal devices like cellphones, our new technology may untether robotics in manufacturing, which also are on the move," said Shanhui Fan, a professor of electrical engineering and senior author of the study. "We still need to significantly increase the amount of electricity being transferred to charge electric cars, but we may not need to push the distance too much more."

The group built on existing technology developed in 2007 at MIT for transmitting electricity wirelessly over a distance of a few feet to a stationary object. In the new work, the team transmitted electricity wirelessly to a moving LED lightbulb. That demonstration only involved a 1-milliwatt charge, whereas electric cars often require tens of kilowatts to operate. The team is now working on greatly increasing the amount of electricity that can be transferred, and tweaking the system to extend the transfer distance and improve efficiency.
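
For a rough sense of the gap, using only the figures quoted above: tens of kilowatts versus one milliwatt is a ratio of about

\[
\frac{10\ \text{kW}}{1\ \text{mW}} = \frac{10^{4}\ \text{W}}{10^{-3}\ \text{W}} = 10^{7},
\]

so the transferred power would need to grow by roughly seven orders of magnitude before the scheme could drive a car, which is why increasing the transferred power is a central next step for the team.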

"We can rethink how to deliver electricity not only to our cars, but to smaller devices on or in our bodies," Fan said. "For anything that could benefit from dynamic, wireless charging, this is potentially very important."

The study was also co-authored by former Stanford research associate Xiaofang Yu. Part of the work was supported by the TomKat Center for Sustainable Energy at Stanford.

 

Excerpted from Stanford News, "Wireless charging of moving electric vehicles overcomes major hurdle in new Stanford research," June 14, 2017.

May 2017

A research team led by EE professor Jelena Vuckovic has spent the past several years working toward the development of nanoscale lasers and quantum technologies that might someday enable conventional computers to communicate faster and more securely using light instead of electricity. Vuckovic and her team, including doctoral candidate Kevin Fischer, the lead author of a paper describing the project, believe that a modified nanoscale laser can be used to efficiently generate quantum light for fully protected quantum communication. "Quantum networks have the potential for secure end-to-end communication wherein the information channel is secured by the laws of quantum physics," Fischer said.

Signal processing is helping the Internet of Things and other network technologies operate faster, more efficiently and more reliably. Advanced research also promises to open new opportunities in key areas, such as highly secure communication and various types of wireless networks.

The biggest challenge the researchers have faced so far is that the quantum light is far weaker than the rest of the light emitted by the modified laser, making it difficult to detect. To address this obstacle, the team developed a method to filter out the unwanted light so that the quantum signal can be read much more clearly. "Some of the light coming back from the modified laser is like noise, preventing us from seeing the quantum light," Fischer says. "We canceled it out to reveal and emphasize the quantum signal hidden beneath."

Although this is a promising demonstration of how to reveal the quantum light, the technique is not yet ready for large-scale deployment. The Vuckovic group is working to scale it up for reliable use in a quantum network.

 

Excerpted from "A Networking Revolution Powered by Signal Processing," IEEE Signal Processing Magazine, January 2017.
