research

image of professor Tsachy Weissman
September 2019

The Stanford Compression Forum (SCF) recently completed its inaugural summer internship program, named STEM to SHTEM (Science, Humanities, Technology, Engineering and Mathematics). Professor Tsachy Weissman and the Compression Forum hosted 44 high school students for internships that ranged from 5 to 9 weeks this summer.

"The internship is a great opportunity for students to experience engineering research in a new light. Working in groups, students from all kinds of backgrounds had the chance to not only research exciting questions at the intersection of different fields, but also learn from their peers unique ways to approach these questions," reports internship coordinator and graduate student Cindy Nguyen. "This early exposure to research helps break down barriers to entry for a lot of underrepresented students and will, hopefully, trickle down into their decisions in becoming the next generation of engineers, doctors, and scientists."

Although the internship was unpaid, it provided exposure to research that transcends traditional disciplinary boundaries. Students were grouped into eleven projects that spanned nine topic areas: DNA compression, Facial HAAC, Nanopore Technology, Discrete Cube Mathematics, Olfactory in VR, Artificial Olfaction Measurement, Decision Making in Games, Computer Assisted Image Reconstruction, and Audio File Compression.


Additional information about the Stanford Compression Forum is available at compression.stanford.edu/summer-internships-high-school-students. For inquiries about the 2019 projects and groups, contact scf_high_school_internship@stanford.edu.


Excerpts from 2019 interns:

"I applied to this internship with the intent on working on something related to the genetics field (which I love), and I never expected to learn how to use Python in the process. If it weren't for this internship I probably wouldn't have ever put myself in a situation where I would have to learn how [to] code. I'm happy to say that although it can be challenging at times, I'm extremely grateful for having been given this opportunity to learn about Python and how to use it."

"This internship introduced me to some amazing people and mentors. This project taught me things like advanced programming, communication skills, and developed my interest in computer science and electrical engineering."

"I had a wonderful experience with this internship! My mentor is not only amazing at what he does – but he is also very funny. I enjoy spending time with my group because whenever one of us makes a small discovery, we all get excited."

"This internship has allowed me to learn so much from basic compression to coding with python. I am glad I was able to participate."

Photo: 2019 STEM to SHTEM interns, faculty, and graduate students. Professor Tsachy Weissman, second from right, and internship coordinator and graduate student Cindy Nguyen, third from right.

image of EE professor Eric Pop
August 2019

EE Professor Eric Pop's research was recently published in Science Advances.

Research in the Pop Lab has shown that a few layers of 2D materials can provide the same insulation as a sheet of glass 100 times thicker. "Thinner heat shields will enable engineers to make electronic devices even more compact than those we have today. We're looking at the heat in electronic devices in an entirely new way," reports Pop.

Detecting thermal vibrations
Thinking about heat as a form of sound inspired the Pop Lab researchers to borrow some principles from the physical world. "We adapted that idea by creating an insulator that used several layers of atomically thin materials instead of a thick mass of glass," said lead author Sam Vaziri, a postdoctoral scholar in Electrical Engineering.

The team stacked four different materials: a single-atom-thick sheet of graphene plus molybdenum diselenide, molybdenum disulfide and tungsten diselenide, each three atoms thick, creating a four-layered insulator just 10 atoms deep. Despite its thinness, the insulator is effective because the atomic heat vibrations are dampened and lose much of their energy as they pass through each layer.
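As a rough plausibility check on that claim (not a calculation from the Science Advances paper), the sketch below compares the stack's insulation, dominated by thermal boundary resistance at each atomic interface, with bulk glass of equivalent resistance. The resistance value and interface count are assumed order-of-magnitude figures.

```python
# Back-of-the-envelope comparison, with assumed textbook numbers:
# glass insulates in proportion to its thickness (R = t / k), while the
# 2D stack's insulation comes mainly from thermal boundary resistance
# (TBR) at each interface between atomically thin layers.
k_glass = 1.0                 # W/(m*K), typical thermal conductivity of glass
tbr_per_interface = 3e-8      # m^2*K/W per interface (assumed magnitude)
n_interfaces = 4              # hypothetical interface count for the stack

r_stack = n_interfaces * tbr_per_interface        # total resistance, m^2*K/W
equivalent_glass = r_stack * k_glass              # t = R * k
print(f"{equivalent_glass * 1e9:.0f} nm of glass")  # ~120 nm
```

With these assumed numbers, a stack only a couple of nanometers thick insulates like roughly 120 nm of glass, the same order of magnitude as the hundredfold ratio reported above.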

"As engineers, we know quite a lot about how to control electricity, and we're getting better with light, but we're just starting to understand how to manipulate the high-frequency sound that manifests itself as heat at the atomic scale," Pop said.

This research was supported by the Stanford Nanofabrication Facility, the Stanford Nano Shared Facilities, the National Science Foundation, the Semiconductor Research Corporation, the Defense Advanced Research Projects Agency, the Air Force Office of Scientific Research, the Stanford SystemX Alliance, the Knut and Alice Wallenberg Foundation, the Stanford Graduate Fellowship program and the National Institute of Standards and Technology.

image of published researcher Anastasios Angelopoulos, EE BS'19
August 2019

Anastasios Angelopoulos (BS '19) and coauthors recently published "Enhanced Depth Navigation Through Augmented Reality Depth Mapping in Patients with Low Vision" in the Nature Research journal Scientific Reports on August 2, 2019. The paper describes the use of augmented reality (AR) to assist those diagnosed with retinitis pigmentosa (RP).

After his freshman year, Anastasios started working with USC Professor Mark Humayun, initially focusing on artificial retinal technology. However, in the following two and a half years, their research expanded to explore the possibility of using augmented reality as a way to help people with low vision navigate safely through complex environments.

They combined special glasses with software that scans the environment and projects the obstacles it detects onto the wearer's retina. The team found that the use of their unique AR visual aid reduced collisions by 50% in mobility testing, and by 70% in grasp testing. This striking result is the first clinical demonstration that augmented reality can help people with low vision live more independent lives.

Anastasios and team hope that work like this can help people with low vision increase their independence through mobility. They plan to continue their research to include other modalities, such as audio and haptics.

Please join us in congratulating Anastasios and team on the publication of their research work!
This year Anastasios received the Terman Scholastic Achievement Award and completed his BS in Electrical Engineering in an accelerated timeframe.

Additional Authors:
Dr. & Prof. Hossein Ameri, USC Ophthalmology (bio link)
Dr. & Prof. Mark Humayun, USC Institute for Biomedical Therapeutics (IBT) (bio link)
Dr. & Prof. Debbie Mitra, USC Institute for Biomedical Therapeutics (IBT) (bio link)

Paper Abstract:
Patients diagnosed with Retinitis Pigmentosa (RP) show, in the advanced stage of the disease, severely restricted peripheral vision, causing poor mobility and a decline in quality of life. This vision loss makes it difficult to identify obstacles and their relative distances. Thus, RP patients use mobility aids such as canes to navigate, especially in dark environments. A number of high-tech visual aids using virtual reality (VR) and sensory substitution have been developed to support or supplant traditional visual aids, but these have not achieved widespread use because they are difficult to use or block off residual vision. This paper presents a unique depth-to-high-contrast pseudocolor mapping overlay, developed and tested on a Microsoft HoloLens 1, as a low vision aid for RP patients. A single-masked, randomized trial of the AR pseudocolor low vision aid, consisting of 10 RP subjects, was conducted to evaluate real-world mobility and near-obstacle avoidance. An FDA-validated functional obstacle course and a custom-made grasping setup were used. The use of the AR visual aid reduced collisions by 50% in mobility testing (p = 0.02), and by 70% in grasp testing (p = 0.03). This paper introduces a new technique, the pseudocolor wireframe, and reports the first significant statistics showing improvements in mobility and grasp for the population of RP patients.
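To make the core idea concrete, here is a minimal sketch of a depth-to-pseudocolor overlay of the kind the abstract describes. The depth values, band edges, and colors are illustrative assumptions, not the parameters used in the paper.

```python
# Minimal sketch: discretize a depth map into high-contrast pseudocolor
# bands so that the nearest obstacles get the most salient color.
import numpy as np

# Hypothetical depth map from an AR headset's depth sensor, in meters.
depth = np.array([[0.4, 0.9, 1.6, 2.5],
                  [0.5, 1.1, 1.8, 3.0],
                  [0.6, 1.3, 2.1, 3.5],
                  [0.7, 1.5, 2.4, 4.0]])

band_edges = [0.0, 1.0, 2.0, 3.0]   # meters; illustrative band boundaries
colors = {0: "red", 1: "yellow", 2: "green", 3: "transparent"}

bands = np.digitize(depth, band_edges) - 1   # band index for every pixel
overlay = np.vectorize(colors.get)(bands)    # color name for every pixel
print(overlay)   # nearest surfaces render red, far ones stay transparent
```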

image of Professor Subhasish Mitra
July 2019

In a recent Q&A discussion with Stanford Engineering, EE professor Subhasish Mitra and Computer Science professor Clark Barrett describe their recent work to secure chips before they are manufactured.

What's new when it comes to finding bugs in chips?

Designers have always tried to find logic flaws, or bugs as they are called, before chips go into manufacturing. Otherwise, hackers might exploit these flaws to hijack computers or cause malfunctions. This is called debugging, and it has never been easy. Yet we are now starting to discover a new type of chip vulnerability that is different from so-called bugs. These new weaknesses do not arise from logic flaws. Instead, hackers can figure out how to misuse a feature that has been purposely designed into a chip. There is not a flaw in the logic, but hackers might be able to pervert the logic to steal sensitive data or take over the chip.

How do your algorithms deal with traditional bugs and these new unintended weaknesses?

Let's start with the traditional bugs. We developed a technique called Symbolic Quick Error Detection — or Symbolic QED. Essentially, we use new algorithms to examine chip designs for potential logic flaws or bugs. We recently tested our algorithms on 16 processors that were already being used to help control critical automotive systems like braking and steering. Before these chips went into cars, the designers had already spent five years debugging their own processors using state-of-the-art techniques and fixing all the bugs they found. After using Symbolic QED for one month, we found every bug they'd found in 60 months — and then we found some bugs that were still in the chips. This was a validation of our approach. We think that by using Symbolic QED before a chip goes into manufacturing we'll be able to find and fix more logic flaws in less time.
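The Q&A keeps the method at a high level. As a loose illustration of the self-checking intuition behind QED-style approaches (not the Symbolic QED tool itself, which reasons symbolically over a processor design), the toy below runs a duplicated copy of the same instruction through a processor model containing a planted, register-specific bug; any divergence between the copies flags a bug without needing a known-correct reference output. All names and the bug are hypothetical.

```python
# Toy processor model with a planted logic bug: additions that write to
# register 3 incorrectly truncate the result to 4 bits.
def buggy_alu(op, a, b, dest_reg):
    result = a + b if op == "add" else a - b
    if op == "add" and dest_reg == 3:
        result &= 0xF                               # the planted bug
    return result

def qed_style_check(a, b):
    """Run the instruction and a duplicate targeting a different register;
    on a correct design the two results always agree."""
    original = buggy_alu("add", a, b, dest_reg=3)   # original instruction
    duplicate = buggy_alu("add", a, b, dest_reg=5)  # duplicated copy
    return original == duplicate                    # False => bug detected

print(qed_style_check(7, 8))   # True: 15 fits in 4 bits, bug escapes
print(qed_style_check(9, 8))   # False: 17 overflows, mismatch flags the bug
```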

Does Symbolic QED find all vulnerabilities?

Not in its current incarnation. But through collaboration with other research groups, we have modified Symbolic QED to detect new types of attacks that can come from potential misuse of seemingly innocuous features.

This is just the beginning. The processors we tested were relatively simple, yet, as we saw, they could be perverted. Over time we will develop more sophisticated algorithms to detect and fix vulnerabilities in the most sophisticated chips, like the ones responsible for controlling navigation systems on autonomous cars. Our message is simple: As we develop more chips for more critical tasks, we'll need automated systems to find and fix all potential vulnerabilities — traditional bugs and unintended consequences — before chips go into manufacturing. Otherwise we'll always be playing catch-up, trying to patch chips after hackers find the vulnerabilities.

Excerpted from "Q&A: What's new in the effort to prevent hackers from hijacking chips?"

image of professor Krishna Shenoy
July 2019

Professor Krishna Shenoy's research team has found that using statistical theory to analyze neural activity in the aggregate provides a faster and equally accurate alternative to tracking individual neurons.

Krishna's team has circumvented today's painstaking process of tracking the activity of individual neurons in favor of decoding neural activity in the aggregate. Each time a neuron fires it sends an electrical signal — known as a "spike" — to the next neuron down the line. It's the sort of intercellular communication that turns a notion in the mind into muscle contraction elsewhere in the body. "Each neuron has its own electrical fingerprint and no two are identical," says Eric Trautmann, a postdoctoral researcher in Krishna's lab and first author of the paper. "We spend a lot of time isolating and studying the activity of individual neurons."

The team believes their work will ultimately lead to neural implants that use simpler electronics to track more neurons than ever before, and also do so more accurately. The key is to combine their sophisticated new sampling algorithms with small electrodes. So far, such small electrodes have only been employed to control simple devices like a computer mouse. But combining this hardware for recording brain signals with the sampling algorithms creates new possibilities. Researchers might be able to deploy a network of small electrodes through larger sections of the brain, and use the algorithms to sample a great many neurons. This could deliver enough accurate brain signal information to control a prosthetic hand capable of fast and precise motions like pitching a baseball or playing the violin.
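As a toy illustration of decoding in the aggregate (a sketch, not the lab's actual pipeline), the code below simulates unsorted "threshold crossing" counts on many electrodes driven by a few latent signals, then recovers a low-dimensional population trajectory with PCA, no spike sorting required. All dimensions and signals are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_timebins, n_latent = 96, 500, 3

# Simulated low-dimensional latent dynamics driving all electrodes.
latents = np.cumsum(rng.standard_normal((n_timebins, n_latent)), axis=0)
mixing = rng.standard_normal((n_latent, n_electrodes))

# Unsorted activity: each electrode mixes several neurons' spikes plus noise.
counts = latents @ mixing + rng.standard_normal((n_timebins, n_electrodes))

# PCA on the raw counts: the top components span a subspace aligned with
# the latent dynamics, without ever isolating individual neurons.
centered = counts - counts.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
trajectory = centered @ vt[:n_latent].T    # estimated population trajectory
print(trajectory.shape)                    # (500, 3)
```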

Better yet, Trautmann said, the new electrodes, coupled with the sampling algorithms, should eventually be able to record brain activity without the many wires needed today to carry signals from the brain to whatever computer controls the prosthesis. Wireless functionality would completely untether users from bulky computers needed to decode neuronal activity today.

Krishna reports, "This study has a bit of a hopeful message in that observing activity in the brain turns out to be easier than we initially expected."

The paper, "Accurate Estimation of Neural Population Dynamics without Spike Sorting," was published in the June issue of Neuron.

Excerpted from Stanford Engineering news

July 2019

Professor Gordon Wetzstein and team recently published their findings in Science Advances.

The researchers have created a pair of smart glasses that can automatically focus on what you're looking at. Using eye-trackers and autofocus lenses, the prototype works much like the lens of the eye, with fluid-filled lenses that bulge and thin as the field of vision changes. It also includes eye-tracking sensors that triangulate where a person is looking and determine the precise distance to the object of interest. The team did not invent these lenses or eye-trackers, but they did develop the software system that harnesses this eye-tracking data to keep the fluid-filled lenses in constant and perfect focus.
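A minimal sketch of the geometry such a system could use, under stated assumptions: binocular eye trackers report each eye's inward gaze angle, triangulation gives the fixation distance, and the fluid-filled lens is driven at the reciprocal of that distance in diopters. The numbers and function names are illustrative, not the team's published software.

```python
import math

IPD = 0.064   # interpupillary distance in meters (assumed average)

def fixation_distance(left_deg, right_deg):
    """Distance to the gaze intersection point, assuming both eyes
    rotate inward symmetrically toward the fixation target."""
    avg = math.radians((left_deg + right_deg) / 2)
    return (IPD / 2) / math.tan(avg)

def lens_power(distance_m):
    """Focus-tunable lenses are commanded in diopters (1 / meters)."""
    return 1.0 / distance_m

d = fixation_distance(3.0, 3.0)                # 3 degrees of vergence per eye
print(f"{d:.2f} m -> {lens_power(d):.2f} D")   # ~0.61 m -> ~1.64 D
```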

EE PhD candidate Nitish Padmanaban said other teams had previously tried to apply autofocus lenses to presbyopia. But without guidance from the eye-tracking hardware and system software, those earlier efforts were no better than wearing traditional progressive lenses.

Gordon's team tested the prototype on 56 people with presbyopia. Test subjects reported that the autofocal lenses let them complete reading and other tasks better and faster. Wearers also tended to prefer the autofocal glasses to the experience of progressive lenses – bulk and weight aside.

Gordon's Computational Imaging Lab is at the forefront of vision systems for VR and AR (virtual and augmented reality). It was in the course of such work that the researchers became aware of the new autofocus lenses and eye-trackers and had the insight to combine these elements to create a potentially transformative product.

Excerpted from Stanford News.

image of EE and CS professor Dorsa Sadigh
July 2019

Professor Dorsa Sadigh and her lab have combined two different ways of setting goals for robots into a single process, which performed better than either of its parts alone in both simulations and real-world experiments. The researchers presented their findings at the 2019 Robotics: Science & Systems (RSS) Conference.

The team has coined their approach "DemPref": it uses both demonstrations and preference queries to learn a reward function. Specifically, as described in the team's abstract, the method works by "(1) using the demonstrations to learn a coarse prior over the space of reward functions, to reduce the effective size of the space from which queries are generated; and (2) using the demonstrations to ground the (active) query generation process, to improve the quality of the generated queries. Our method alleviates the efficiency issues faced by standard preference-based learning methods and does not exclusively depend on (possibly low-quality) demonstrations."
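A minimal sketch in the spirit of that two-step recipe, with assumed details: the reward is linear in trajectory features, a demonstration centers a coarse prior over the reward weights, and each preference query reweights sampled hypotheses through a logistic (Bradley-Terry) choice model. The feature values and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features = 5000, 4

# Step 1: coarse prior from a demonstration. Instead of sampling reward
# weights uniformly, sample them near the demo's implied weights
# (a hypothetical estimate here).
demo_weights = np.array([0.8, -0.2, 0.4, 0.1])
weights = demo_weights + 0.5 * rng.standard_normal((n_samples, n_features))
posterior = np.ones(n_samples) / n_samples

# Step 2: refine with preference queries ("trajectory A or B?").
def update_with_preference(posterior, feats_a, feats_b, chose_a):
    # Bradley-Terry: P(choose A | w) is logistic in the reward difference.
    diff = weights @ (feats_a - feats_b)
    p_a = 1.0 / (1.0 + np.exp(-diff))
    posterior = posterior * (p_a if chose_a else 1.0 - p_a)
    return posterior / posterior.sum()

posterior = update_with_preference(posterior,
                                   np.array([1.0, 0.0, 0.5, 0.0]),
                                   np.array([0.2, 0.7, 0.1, 0.3]),
                                   chose_a=True)
print(weights[np.argmax(posterior)])   # current best reward-weight estimate
```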

The new combination system begins with a person demonstrating a behavior to the robot. That can give autonomous robots a lot of information, but the robot often struggles to determine what parts of the demonstration are important. People also don't always want a robot to behave just like the human that trained it.

"We can't always give demonstrations, and even when we can, we often can't rely on the information people give," said Erdem Biyik, EE PhD candidate, who led the work developing the multiple-question surveys. "For example, previous studies have shown people want autonomous cars to drive less aggressively than they do themselves."

That's where the surveys come in, giving the robot a way of asking, for example, whether the user prefers it move its arm low to the ground or up toward the ceiling. For this study, the group used the slower single question method, but they plan to integrate multiple-question surveys in later work.

In tests, the team found that combining demonstrations and surveys was faster than just specifying preferences and, when compared with demonstrations alone, about 80 percent of people preferred how the robot behaved when trained with the combined system.

"This is a step in better understanding what people want or expect from a robot," reports Dorsa. "Our work is making it easier and more efficient for humans to interact and teach robots, and I am excited about taking this work further, particularly in studying how robots and humans might learn from each other."

Excerpted from a Stanford News article.

image of Professor Shan Wang
April 2019

[Excerpted from Stanford News]

Colorectal cancer is the second leading cause of cancer deaths in the U.S. and a growing problem around the world, but not because it's a particularly difficult cancer to detect and halt. The problem, doctors and researchers believe, is that not enough people are being screened for early signs of the disease, either because they do not know the recommendations or because they are avoiding getting a colonoscopy, which many perceive as an unpleasant procedure.

The current alternatives, said Professor Shan Wang, aren't exactly more pleasant – most of those involve gathering and testing stool samples.

But Shan, his graduate student Jared Nesvet and Uri Ladabaum, a professor of medicine, may have a possible solution: a blood test to detect colorectal cancer, which in principle would be less expensive, less invasive and more convenient than colonoscopies and other current tests, the researchers said. Shan and Nesvet have already developed a test that works in the controlled environment of a materials science lab, and now, with help from a Stanford ChEM-H seed grant, the trio is working to validate their approach in the real world of clinical medicine.

[...]

Shan and Nesvet have tested their idea in the lab, and it works well so far, Nesvet said. Now, with help from Ladabaum and the ChEM-H grant, they'll start testing it on blood samples from real patients. Among the questions they'll address are practical ones about how to identify the right people to study, when to draw blood and how to handle the samples.

"That's where we as clinical researchers can help," Ladabaum said.

Shan cautions that a new screen for colon cancer is still a ways off, and that it could involve hundreds, if not thousands, of blood samples before they can be confident their blood test really works. "I expect this will be a five- to 10-year study to bring this technology to fruition," he said.

Read the full story, "Stanford doctors, materials scientists hope a blood test will encourage more colon cancer screenings."

image of professor Eric Pop
April 2019

Professor Eric Pop was featured in a "People Behind the Science" podcast. People Behind the Science's mission is to inspire current and future scientists, share the different paths to a successful career in science, educate the general population on what scientists do, and show the human side of science. In each episode, a different scientist talks about their journey by sharing their successes, failures, and passions.

Excerpts of Eric's conversation follow.
Please visit People Behind the Science for the full episode.


The Scientific Side (timestamp 3:20)

Research in Eric's laboratory spans electronics, electrical engineering, physics, nanomaterials, and energy. They are interested in applying materials with nanoscale properties to engineer better electronics such as transistors, circuits, and data storage mechanisms. Eric is also investigating ways to better manage the heat that electronics generate.

A Dose of Motivation (timestamp 5:17)

Eric is motivated by curiosity and by ensuring that the work his lab does is useful to people.

Advice For Us All (timestamp 53:40)

Clearly communicating your research is critically important. This includes all forms of communication, whether it is verbal, written, or visual. Before you give a presentation or communicate your work, you should really try to understand your audience. Get a sense of who they are, what they care about, and the best way to convey the cool things you are working on to them. Regardless of what career you choose, being able to share your ideas with people and convince them of the importance of your work will define your career.

image of Professor Balaji Prabhakar
April 2019

Professors Balaji Prabhakar and Darrell Duffie (GSB) held a moderated conversation about the next generation of finance and high-speed technologies.

Balaji described the accelerating timeframes that underpin securities trading infrastructure, where the time from "tick to trade" is now measured in tens of nanoseconds. He also highlighted both the potential problems and the advantages of exploiting such lightning-fast speeds. At that nanosecond scale, it can be hard for networks to properly sequence packets of data being sent over even the fastest fiber-optic wires. "If you see a price that's favorable to your trading strategy and you cross the gate ahead of me, then your transactions should happen first," he said. "Unfortunately, in the world where these networks have 'jitters,' this is not easy to guarantee."

The speakers also agreed that one way or another, massive disruption is coming for financial institutions. "There is a mantra that is being repeated on Wall Street, 'We are a tech company that happens to be an investment bank,'" said Balaji. Redefining the role of banks from being consumers of technology to creators of technology will mean that "any bank that's not big enough or not nimble enough is going to lose out," said Duffie.

Excerpted from "How is Silicon Valley changing Wall Street?", Stanford Engineering News, April 02, 2019

Watch the conversation in its entirety.
