Research

[Image: Anastasios Angelopoulos, EE BS '19]
August 2019

Anastasios Angelopoulos (BS '19) et al. recently published a paper titled "Enhanced Depth Navigation Through Augmented Reality Depth Mapping in Patients with Low Vision." It appeared in the Nature Research journal Scientific Reports on August 2, 2019. The paper describes the use of augmented reality (AR) to assist people diagnosed with retinitis pigmentosa (RP).

After his freshman year, Anastasios began working with USC Professor Mark Humayun, initially focusing on artificial retina technology. Over the following two and a half years, their research expanded to explore the possibility of using augmented reality to help people with low vision navigate safely through complex environments.

They combined special glasses with software that scans the environment and projects the obstacles it detects onto the wearer's retina. The team found that their AR visual aid reduced collisions by 50% in mobility testing and by 70% in grasp testing. This striking result is the first clinical evidence that augmented reality can help people with low vision live more independent lives.
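
As a rough sketch of the idea (an illustrative Python reconstruction, not the team's HoloLens implementation; the band edges and colors below are assumptions), a depth map can be quantized into a few distance bands, each rendered in a bright, high-contrast color:

```python
import numpy as np

# Illustrative only: quantize a depth map (in meters) into bands and paint
# each band a bright, high-contrast pseudocolor. The band edges and colors
# are assumed values, not those used in the published system.
BAND_EDGES = [0.5, 1.0, 1.5, 2.0]            # distances separating bands (m)
BAND_COLORS = np.array([
    [255,   0,   0],   # nearest obstacles: red
    [255, 165,   0],   # orange
    [255, 255,   0],   # yellow
    [  0, 255,   0],   # green
    [  0,   0,   0],   # beyond the last edge: left dark (no overlay)
], dtype=np.uint8)

def depth_to_pseudocolor(depth_m: np.ndarray) -> np.ndarray:
    """Map an HxW depth image to an HxWx3 high-contrast color overlay."""
    bands = np.digitize(depth_m, BAND_EDGES)   # band index 0..4 per pixel
    return BAND_COLORS[bands]

# Example: one row of pixels at increasing depth
depth = np.array([[0.4, 0.8, 1.2, 3.0]])
print(depth_to_pseudocolor(depth)[0])   # red, orange, yellow, dark
```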

Anastasios and the team hope that work like this can help people with low vision increase their independence through better mobility. They plan to extend the research to other modalities, such as audio and haptics.

Please join us in congratulating Anastasios and team on the publication of their research work!
This year, Anastasios received the Terman Scholastic Achievement Award and completed his BS in Electrical Engineering on an accelerated timeline.

Additional Authors:
Prof. Hossein Ameri, USC Ophthalmology
Prof. Mark Humayun, USC Institute for Biomedical Therapeutics (IBT)
Prof. Debbie Mitra, USC Institute for Biomedical Therapeutics (IBT)

Paper Abstract:
Patients diagnosed with Retinitis Pigmentosa (RP) show, in the advanced stage of the disease, severely restricted peripheral vision causing poor mobility and decline in quality of life. This vision loss causes difficulty identifying obstacles and their relative distances. Thus, RP patients use mobility aids such as canes to navigate, especially in dark environments. A number of high-tech visual aids using virtual reality (VR) and sensory substitution have been developed to support or supplant traditional visual aids. These have not achieved widespread use because they are difficult to use or block off residual vision. This paper presents a unique depth to high-contrast pseudocolor mapping overlay developed and tested on a Microsoft Hololens 1 as a low vision aid for RP patients. A single-masked and randomized trial of the AR pseudocolor low vision aid to evaluate real world mobility and near obstacle avoidance was conducted consisting of 10 RP subjects. An FDA-validated functional obstacle course and a custom-made grasping setup were used. The use of the AR visual aid reduced collisions by 50% in mobility testing (p = 0.02), and by 70% in grasp testing (p = 0.03). This paper introduces a new technique, the pseudocolor wireframe, and reports the first significant statistics showing improvements for the population of RP patients with mobility and grasp.

[Image: Professor Subhasish Mitra]
July 2019

In a recent Q&A with Stanford Engineering, EE professor Subhasish Mitra and computer science professor Clark Barrett describe their recent work to secure chips before they are manufactured.

What's new when it comes to finding bugs in chips?

Designers have always tried to find logic flaws, or bugs as they are called, before chips go into manufacturing. Otherwise, hackers might exploit these flaws to hijack computers or cause malfunctions. This is called debugging, and it has never been easy. Yet we are now starting to discover a new type of chip vulnerability that is different from so-called bugs. These new weaknesses do not arise from logic flaws. Instead, hackers can figure out how to misuse a feature that has been purposely designed into a chip. There is no flaw in the logic. But hackers might be able to pervert the logic to steal sensitive data or take over the chip.

How do your algorithms deal with traditional bugs and these new unintended weaknesses?

Let's start with the traditional bugs. We developed a technique called Symbolic Quick Error Detection — or Symbolic QED. Essentially, we use new algorithms to examine chip designs for potential logic flaws or bugs. We recently tested our algorithms on 16 processors that were already being used to help control critical automotive systems like braking and steering. Before these chips went into cars, the designers had already spent five years debugging their own processors using state-of-the-art techniques and fixing all the bugs they found. After using Symbolic QED for one month, we found every bug they'd found in 60 months — and then we found some bugs that were still in the chips. This was a validation of our approach. We think that by using Symbolic QED before a chip goes into manufacturing we'll be able to find and fix more logic flaws in less time.
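
The inner workings of Symbolic QED are beyond a short Q&A, but the QED idea it builds on (run every operation twice on duplicated state and flag any divergence immediately) can be sketched as a software analogy; the real technique transforms hardware tests and checks the design with formal tools:

```python
# Software analogy of QED-style self-checking duplication (illustrative).
# Every operation runs on two independent copies of the state; a divergence
# is reported at the step where it occurs, not millions of cycles later.

def qed_run(ops, initial_state):
    """Apply each operation to duplicated state, comparing after every step."""
    a, b = dict(initial_state), dict(initial_state)   # two register-file copies
    for i, op in enumerate(ops):
        op(a)
        op(b)
        if a != b:   # the duplicated computations disagree: a bug fired here
            raise RuntimeError(f"QED mismatch after operation {i}: {a} != {b}")
    return a

# Example program: each op mutates the state dictionary in place.
ops = [
    lambda s: s.update(x=s["x"] + 1),
    lambda s: s.update(y=s["x"] * 2),
]
print(qed_run(ops, {"x": 1, "y": 0}))   # {'x': 2, 'y': 4}
```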

Does Symbolic QED find all vulnerabilities?

Not in its current incarnation. Through collaboration with other research groups, we have modified Symbolic QED to detect new types of attacks that can come from potential misuse of seemingly innocuous features.

This is just the beginning. The processors we tested were relatively simple. Yet, as we saw, they could be perverted. Over time we will develop more sophisticated algorithms to find and fix vulnerabilities in the most sophisticated chips, like the ones responsible for controlling navigation systems on autonomous cars. Our message is simple: As we develop more chips for more critical tasks, we'll need automated systems to find and fix all potential vulnerabilities — traditional bugs and unintended consequences — before chips go into manufacturing. Otherwise we'll always be playing catch-up, trying to patch chips after hackers find the vulnerabilities.

Excerpted from "Q&A: What's new in the effort to prevent hackers from hijacking chips?"

[Image: Professor Krishna Shenoy]
July 2019


Professor Krishna Shenoy's research team has found that analyzing neural activity in the aggregate, using statistical theory, is faster than tracking individual neurons and just as accurate.

Krishna's team has circumvented today's painstaking process of tracking the activity of individual neurons in favor of decoding neural activity in the aggregate. Each time a neuron fires it sends an electrical signal — known as a "spike" — to the next neuron down the line. It's the sort of intercellular communication that turns a notion in the mind into muscle contraction elsewhere in the body. "Each neuron has its own electrical fingerprint and no two are identical," says Eric Trautmann, a postdoctoral researcher in Krishna's lab and first author of the paper. "We spend a lot of time isolating and studying the activity of individual neurons."

The team believes their work will ultimately lead to neural implants that use simpler electronics to track more neurons than ever before, and also do so more accurately. The key is to combine their sophisticated new sampling algorithms with small electrodes. So far, such small electrodes have only been employed to control simple devices like a computer mouse. But combining this hardware for recording brain signals with the sampling algorithms creates new possibilities. Researchers might be able to deploy a network of small electrodes through larger sections of the brain, and use the algorithms to sample a great many neurons. This could deliver enough accurate brain signal information to control a prosthetic hand capable of fast and precise motions like pitching a baseball or playing the violin.
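
A toy version of the statistical idea, recovering shared population structure from unsorted multiunit activity instead of painstakingly isolated single neurons, might look like this (simulated data and plain PCA as a stand-in for the paper's methods; every number here is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one latent population signal driving many electrodes. Each
# electrode reports unsorted threshold crossings: pooled spikes from a few
# nearby neurons, with Poisson noise, and no spike sorting anywhere.
T, n_electrodes = 500, 40
latent = np.sin(np.linspace(0, 8 * np.pi, T))       # true population state
gain = rng.normal(size=n_electrodes)                # per-electrode coupling
rates = np.exp(0.5 + 0.5 * gain[None, :] * latent[:, None])
counts = rng.poisson(rates)                         # T x n_electrodes

# PCA on the raw multiunit counts: the top component tracks the latent
# state even though no individual neuron was ever isolated.
X = counts - counts.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
estimate = X @ vt[0]

corr = np.corrcoef(estimate, latent)[0, 1]
print(f"|correlation| between PCA estimate and true latent: {abs(corr):.2f}")
```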

Better yet, Trautmann said, the new electrodes, coupled with the sampling algorithms, should eventually be able to record brain activity without the many wires needed today to carry signals from the brain to whatever computer controls the prosthesis. Wireless functionality would completely untether users from bulky computers needed to decode neuronal activity today.

Krishna reports, "This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected."

The paper, "Accurate Estimation of Neural Population Dynamics without Spike Sorting" was published in June's issue of Neuron.

Excerpted from Stanford Engineering News.


July 2019

Professor Gordon Wetzstein and team recently published their findings in Science Advances.

The researchers have created a pair of smart glasses that can automatically focus on what you're looking at. Using eye-trackers and autofocus lenses, the prototype works much like the lens of the eye, with fluid-filled lenses that bulge and thin as the field of vision changes. It also includes eye-tracking sensors that triangulate where a person is looking and determine the precise distance to the object of interest. The team did not invent these lenses or eye-trackers, but they did develop the software system that harnesses this eye-tracking data to keep the fluid-filled lenses in constant and perfect focus.
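
The geometric core of that triangulation is simple. A minimal sketch (not the team's code; the interpupillary distance and gaze angle below are assumed values) converts the two eyes' inward gaze angles into a fixation distance, and from there into the lens power the glasses must supply:

```python
import math

# Illustrative vergence triangulation (not the published system). Two eyes
# separated by the interpupillary distance fixate the same point; how far
# each eye rotates inward determines the depth of that point.
IPD_M = 0.063   # assumed interpupillary distance, in meters

def fixation_distance(theta_left_rad, theta_right_rad):
    """Depth of the fixation point from the two inward gaze angles.

    For a symmetric fixation, tan(theta) = (IPD / 2) / d, so
    d = (IPD / 2) / tan(theta).
    """
    theta = 0.5 * (theta_left_rad + theta_right_rad)   # average the two eyes
    return (IPD_M / 2) / math.tan(theta)

theta = math.radians(1.8)            # assumed reading from the eye trackers
d = fixation_distance(theta, theta)
print(f"fixation distance: {d:.2f} m -> lens power needed: {1 / d:.2f} diopters")
```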

EE PhD candidate Nitish Padmanaban said other teams had previously tried to apply autofocus lenses to presbyopia. But without guidance from the eye-tracking hardware and system software, those earlier efforts were no better than wearing traditional progressive lenses.

Gordon's team tested the prototype on 56 people with presbyopia. Test subjects said the autofocus lenses performed better and faster at reading and other tasks. Wearers also tended to prefer the autofocal glasses to the experience of progressive lenses – bulk and weight aside.

Gordon's Computational Imaging Lab is at the forefront of vision systems for VR and AR (virtual and augmented reality). It was in the course of such work that the researchers became aware of the new autofocus lenses and eye-trackers and had the insight to combine these elements to create a potentially transformative product.

Excerpted from Stanford News.

[Image: EE and CS Professor Dorsa Sadigh]
July 2019

Professor Dorsa Sadigh and her lab have combined two different ways of setting goals for robots into a single process, which performed better than either of its parts alone in both simulations and real-world experiments. The researchers presented their findings at the 2019 Robotics: Science & Systems (RSS) Conference.

The team has named their approach "DemPref": it uses both demonstrations and preference queries to learn a reward function. Specifically, as described in the team's abstract, DemPref works by "(1) using the demonstrations to learn a coarse prior over the space of reward functions, to reduce the effective size of the space from which queries are generated; and (2) using the demonstrations to ground the (active) query generation process, to improve the quality of the generated queries. Our method alleviates the efficiency issues faced by standard preference-based learning methods and does not exclusively depend on (possibly low-quality) demonstrations."
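
A heavily simplified sketch of that two-stage recipe (illustrative only: a linear reward model, a Gaussian stand-in for the demonstration-derived prior, and a Bradley-Terry-style preference update; none of these specifics are taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume reward is linear in trajectory features: R(xi) = w . phi(xi).
true_w = np.array([0.8, -0.6])                 # the user's hidden preferences

# Stage 1: a demonstration gives a coarse prior over w (here, Gaussian
# samples centered on the demo's noisy feature direction).
demo_w = true_w + rng.normal(0, 0.4, size=2)
samples = demo_w + rng.normal(0, 0.5, size=(5000, 2))
weights = np.ones(len(samples))                # importance weights on samples

# Stage 2: preference queries ("do you prefer trajectory A or B?") reweight
# the prior. We simulate the user answering according to true_w.
for _ in range(20):
    phi_a, phi_b = rng.normal(size=2), rng.normal(size=2)  # candidate features
    diff = phi_a - phi_b if true_w @ phi_a >= true_w @ phi_b else phi_b - phi_a
    # Bradley-Terry likelihood of the observed answer under each sampled w
    weights *= 1.0 / (1.0 + np.exp(-(samples @ diff)))

w_hat = (weights[:, None] * samples).sum(axis=0) / weights.sum()
print("estimated direction:", w_hat / np.linalg.norm(w_hat))
print("true direction:     ", true_w / np.linalg.norm(true_w))
```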

The new combination system begins with a person demonstrating a behavior to the robot. That can give autonomous robots a lot of information, but the robot often struggles to determine what parts of the demonstration are important. People also don't always want a robot to behave just like the human that trained it.

"We can't always give demonstrations, and even when we can, we often can't rely on the information people give," said Erdem Biyik, EE PhD candidate, who led the work developing the multiple-question surveys. "For example, previous studies have shown people want autonomous cars to drive less aggressively than they do themselves."

That's where the surveys come in, giving the robot a way of asking, for example, whether the user prefers that it move its arm low to the ground or up toward the ceiling. For this study, the group used the slower single-question method, but they plan to integrate multiple-question surveys in later work.

In tests, the team found that combining demonstrations and surveys was faster than just specifying preferences and, when compared with demonstrations alone, about 80 percent of people preferred how the robot behaved when trained with the combined system.


"This is a step in better understanding what people want or expect from a robot," reports Dorsa. "Our work is making it easier and more efficient for humans to interact and teach robots, and I am excited about taking this work further, particularly in studying how robots and humans might learn from each other."

Excerpted from a Stanford News article.

[Image: Professor Shan Wang]
April 2019

[Excerpted from Stanford News]

Colorectal cancer is the second leading cause of cancer deaths in the U.S. and a growing problem around the world, but not because it's a particularly difficult cancer to detect and halt. The problem, doctors and researchers believe, is that not enough people are being screened for early signs of the disease, either because they do not know the recommendations or because they are avoiding getting a colonoscopy, which many perceive as an unpleasant procedure.

The current alternatives, said Professor Shan Wang, aren't exactly more pleasant – most of those involve gathering and testing stool samples.

But Shan, his graduate student Jared Nesvet, and Uri Ladabaum, a professor of medicine, may have a solution: a blood test to detect colorectal cancer, which in principle would be less expensive, less invasive and more convenient than colonoscopies and other current tests, the researchers said. Shan and Nesvet have already developed a test that works in the controlled environment of a materials science lab, and now, with help from a Stanford ChEM-H seed grant, the trio are working to validate their approach in the real world of clinical medicine.

[...]

Shan and Nesvet have tested their idea in the lab, and it works well so far, Nesvet said. Now, with help from Ladabaum and the ChEM-H grant, they'll start testing it on blood samples from real patients. Among the questions they'll address are practical ones about how to identify the right people to study, when to draw blood and how to handle the samples.

"That's where we as clinical researchers can help," Ladabaum said.

Shan cautions that a new screen for colon cancer is still a ways off, and that it could involve hundreds, if not thousands, of blood samples before they can be confident their blood test really works. "I expect this will be a five- to 10-year study to bring this technology to fruition," he said.


Read full story, "Stanford doctors, materials scientists hope a blood test will encourage more colon cancer screenings."


[Image: Professor Eric Pop]
April 2019

Professor Eric Pop was featured in a "People Behind the Science" podcast. People Behind the Science's mission is to inspire current and future scientists, share the different paths to a successful career in science, educate the general population on what scientists do, and show the human side of science. In each episode, a different scientist talks about their journey by sharing their successes, failures, and passions.

Excerpts of Eric's conversation follow.
Please visit People Behind the Science for the full episode.


The Scientific Side (timestamp 3:20)

Research in Eric's laboratory spans electronics, electrical engineering, physics, nanomaterials, and energy. They are interested in applying materials with nanoscale properties to engineer better electronics such as transistors, circuits, and data storage mechanisms. Eric is also investigating ways to better manage the heat that electronics generate.

A Dose of Motivation (timestamp 5:17)

Eric is motivated by curiosity and by making sure that the work his lab does is useful to people.

Advice For Us All (timestamp 53:40)

Clearly communicating your research is critically important. This includes all forms of communication, whether it is verbal, written, or visual. Before you give a presentation or communicate your work, you should really try to understand your audience. Get a sense of who they are, what they care about, and the best way to convey the cool things you are working on to them. Regardless of what career you choose, being able to share your ideas with people and convince them of the importance of your work will define your career.


image of Professor Balaji Prabhakar
April 2019

Professors Balaji Prabhakar and Darrell Duffie (GSB) held a moderated conversation about the next generation of finance and high-speed technologies.

Balaji described the accelerating timeframes that undergird securities trading infrastructure, where the time from "tick to trade" is now measured in tens of nanoseconds. He also highlighted the potential problems posed, and the advantages to be gained, by such lightning-fast speeds. At that nanosecond scale, it can be hard for networks to properly sequence packets of data sent over even the fastest fiber-optic wires. "If you see a price that's favorable to your trading strategy and you cross the gate ahead of me, then your transactions should happen first," he said. "Unfortunately, in the world where these networks have 'jitters,' this is not easy to guarantee."

The speakers also agreed that one way or another, massive disruption is coming for financial institutions. "There is a mantra that is being repeated on Wall Street, 'We are a tech company that happens to be an investment bank,'" said Balaji. Redefining the role of banks from being consumers of technology to creators of technology will mean that "any bank that's not big enough or not nimble enough is going to lose out," said Duffie.


Excerpted from "How is Silicon Valley changing Wall Street?", Stanford Engineering News, April 02, 2019

Watch the conversation in its entirety.


[Image: Professor Tsachy Weissman]
March 2019

The project resulted from a collaboration between researchers led by Professor Tsachy Weissman and three high school students who interned in his lab.

The researchers asked people to compare images produced by a traditional compression algorithm, which shrinks huge images into pixelated blurs, with those created by humans under data-restricted conditions – text-only communication, which could include links to public images. In many cases, the products of human-powered image sharing proved more satisfactory than the algorithm's work. The researchers will present their work at the 2019 Data Compression Conference.

"Almost every image compressor we have today is evaluated using metrics that don't necessarily represent what humans value in an image," said Irena Fischer-Hwang, an EE grad student and co-author of the paper. "It turns out our algorithms have a long way to go and can learn a lot from the way humans share information."

"Honestly, we came into this collaboration aiming to give the students something that wouldn't distract too much from ongoing research," said Weissman. "But they wanted to do more, and that chutzpah led to a paper and a whole new research thrust for the group. This could very well become among the most exciting projects I've ever been involved in."

Weissman stressed the value of the high school students' contribution, even beyond this paper.

"Tens if not hundreds of thousands of human engineering hours went into designing an algorithm that three high schoolers came and kicked its butt," said Weissman. "It's humbling to consider how far we are in our engineering."

Due to the success of this collaboration, Weissman has created a formal summer internship program in his lab for high schoolers. Imagining how an artist or students interested in psychology or neuroscience could contribute to this work, he is particularly keen to bring on students with varied interests and backgrounds.


Lead authors of this paper are Ashutosh Bhown of Palo Alto High School, Soham Mukherjee of Monta Vista High School and Sean Yang of Saint Francis High School. Weissman is also a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute.

This research was funded by the National Science Foundation, the National Institutes of Health, the Stanford Compression Forum and Google.

Excerpted from "Stanford experiment finds humans beat algorithms at image compression", Stanford News, March 25, 2019. 

[Image: Professors Subhasish Mitra and H.-S. Philip Wong]
February 2019

Computers have shrunk to the size of laptops and smartphones, but engineers want to cram most of the features of a computer into a single chip that they could install just about anywhere. A Stanford-led engineering team has developed the prototype for such a computer-on-a-chip.


Electronic computing was born in the form of massive machines in air-conditioned rooms, migrated to desktops and laptops, and lives today in tiny devices like watches and smartphones.

But why stop there, asks an international team of Stanford-led engineers. Why not build an entire computer onto a single chip? It could have processing circuits, memory storage and power supply to perform a given task, such as measuring moisture in a row of crops. Equipped with machine learning algorithms, the chip could make on-the-spot decisions such as when to water. And with wireless technology it could send and receive data over the internet.

Engineers call this vision of ubiquitous computing the Internet of Everything. But to achieve it they'll need to develop a new class of chips to serve as its foundation.

The researchers unveiled the prototype for such a computer-on-a-chip Feb. 19 at the International Solid-State Circuits Conference in San Francisco. The prototype's data processing and memory circuits use less than a tenth as much electricity as any comparable electronic device, yet despite its size it is designed to perform many advanced computing feats.

"This is what engineers do," said Subhasish Mitra. "We create a whole that is greater than the sum of its parts."

EE professors Mitra and H.-S. Philip Wong worked with scientists from the CEA-LETI research institute in Grenoble, France, to design this chip of the future.

New memory is the key

The prototype is built around a new data storage technology called RRAM (resistive random access memory), which has features essential for this new class of chips: storage density to pack more data into less space than other forms of memory; energy efficiency that won't overtax limited power supplies; and the ability to retain data when the chip hibernates, as it is designed to do as an energy-saving tactic.

RRAM has another essential advantage. Engineers can build RRAM directly atop a processing circuit to integrate data storage and computation into a single chip. Stanford researchers have pioneered this concept of uniting memory and processing into one chip because it's faster and more energy efficient than passing data back and forth between separate chips as is the case today. The French team at CEA-LETI was responsible for grafting the RRAM onto a silicon processor.

In order to improve the storage capacity of RRAM, the Stanford group made a number of changes. One was to increase how much information each storage unit, called a cell, can hold. Memory devices typically consist of cells that can store either a zero or a one. The researchers devised a way to pack five values into each memory cell, rather than just the two standard options.
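
Five levels per cell means each cell holds log2(5) ≈ 2.32 bits rather than 1, so 32 bits of data fit in 14 cells instead of 32. Here is a small sketch of the packing arithmetic (illustrative; not the chip's actual encoding scheme):

```python
import math

LEVELS = 5   # values per cell, as in the prototype

print(f"bits per cell: {math.log2(LEVELS):.2f}")   # ~2.32 vs. 1.00 for binary

def to_cells(n: int) -> list[int]:
    """Encode a non-negative integer as base-5 digits, one per cell."""
    digits = []
    while True:
        n, d = divmod(n, LEVELS)
        digits.append(d)
        if n == 0:
            return digits

def from_cells(digits: list[int]) -> int:
    """Decode base-5 cell values back into the original integer."""
    n = 0
    for d in reversed(digits):
        n = n * LEVELS + d
    return n

data = 0xDEADBEEF                    # 32 bits of example data
cells = to_cells(data)
assert from_cells(cells) == data
print(f"{data:#x} stored in {len(cells)} five-level cells")   # 14 cells
```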

A second enhancement improved the endurance of RRAM. Think about data storage from a chip's point of view: As data is continuously written to a chip's memory cells, they can become exhausted, scrambling data and causing errors. The researchers developed an algorithm to prevent such exhaustion. They tested the endurance of their prototype and found that it should have a 10-year lifespan.
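
The article doesn't spell out the endurance algorithm, but the standard idea it points at, spreading writes evenly so no cell is worn out while others sit idle, can be sketched as simple wear leveling (an assumption-laden illustration, not the team's method):

```python
import heapq

# Illustrative wear leveling only; the prototype's actual endurance
# algorithm is not described in this article. Track per-cell write counts
# and always direct the next write to the least-worn cell.
class WearLeveler:
    def __init__(self, n_cells: int):
        self.heap = [(0, i) for i in range(n_cells)]   # (write_count, cell)

    def write(self) -> int:
        """Pick the least-worn physical cell for the next write."""
        count, cell = heapq.heappop(self.heap)
        heapq.heappush(self.heap, (count + 1, cell))
        return cell   # the actual data write to `cell` would happen here

wl = WearLeveler(4)
print([wl.write() for _ in range(8)])   # [0, 1, 2, 3, 0, 1, 2, 3]: even wear
```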

Mitra said the team's computer scientists and electrical engineers worked together to integrate many software and hardware technologies on the prototype, which is currently about the diameter of a pencil eraser. Although that is too large for futuristic, Internet of Everything applications, even now the way that the prototype combines memory and processing could be incorporated into the chips found in smartphones and other mobile devices. Chip manufacturers are already showing interest in this new architecture, which was one of the goals of the Stanford-led team. Mitra said experience gained manufacturing one generation of chips fuels efforts to make the next iteration smaller, faster, cheaper and more capable.

"The SystemX Alliance has allowed a great collaboration between Stanford and CEA-LETI on edge AI application, covering circuit architecture, circuit design, down to advanced technologies," said Emmanuel Sabonnadière, CEO of the French research institute.


Source: Stanford News, "A Stanford-led engineering team unveils the prototype for a computer-on-a-chip", February 19, 2019.

