
October 2017

A dozen teams of EE students came together Friday afternoon to compete in EE's Annual Pumpkin Carving Contest.

This year's event was hosted in the Packard Atrium, with plenty of candy, refreshments, and music.

Judges included student services staff member Rachel Pham, graduate student Jerry Shi, and Professor Juan Rivas-Davila. Judging criteria included completeness, technical skill, creativity, and costumes.

In addition to the judges' scores, ballots were available for attendees to vote for their favorite pumpkin; these votes were added to the judges' totals and counted toward the final results.

Third place went to the "Eevee Evolution" team of David Zeng, Tracey Hong, and Neal Jean.
Second place went to the "EE42" team of Katherine Kowalski, Amit Kohli, Justin Babauta, and Alec Preciado.
First place went to the "Pumpkin Carving Dream Team" of Nicole Grimwood, Nicolo Maganzini, Tori Fujinami, and Tong Mu.

 

Thanks to all of our staff, faculty, and students for your enthusiastic participation!

PhD candidate Nir Even-Chen
November 2017

PhD candidate Nir Even-Chen, his advisor Professor Krishna Shenoy, and their colleagues share recent strides in brain-machine interface (BMI) innovation. BMIs are devices that record neural activity from the user's brain and translate it into movement of prosthetic devices. They enable people with motor impairments, such as a spinal cord injury, to control and move prosthetic devices with their minds. Users can control robotic arms to improve their independence, or a computer cursor for typing and browsing the web. Even-Chen et al.'s recently published paper, "Augmenting intracortical brain-machine interface with neurally driven error detectors," describes a new system that reads the user's mind, detects when the user perceives a mistake, and intervenes with a corrective action. The new system allows users to control BMIs more easily, smoothly, and efficiently.

While most BMI studies focus on designing better techniques to infer the user's movement intention, Even-Chen and colleagues improved BMI performance by taking a very different approach: detecting and undoing mistakes. Their work presents both novel fundamental science and an implementation of their idea. They showed for the first time that it is possible to detect key-selection errors from the motor cortex, a brain area mainly involved in movement control. They then used this signal in real time to undo, or even prevent, mistakes.

The need for real-time error correction

In our daily lives, we all make mistakes, from typos while texting and clicking the wrong link on a web page to knocking over a cup of coffee while reaching for the cake. Correcting these mistakes can be time-consuming and annoying, especially when they occur frequently during challenging tasks. Imagine a system that could detect, or predict, your mistakes (e.g., typos) and automatically undo them, or even prevent them from happening. This would save the time spent manually correcting the mistake, especially when errors are frequent and the corrective actions slow you down. Error detection is not always trivial; in some cases, only the person who made the mistake knows what she intended. Thus, such an error detection system needs to infer one's intention, i.e., read her mind. An automatic error detection system is most effective when the task is challenging or our skill is limited and errors are common. A BMI system is a good candidate for testing such an error detection approach. First, BMIs enable a readout of the user's mind. Second, error detection can be highly beneficial for BMI users, since BMI control is challenging and prone to errors.

Intracortical BMIs, which record neural activity directly from the brain, have shown promising results in pilot clinical trials and are the highest-performing BMI systems to date. This makes them prime candidates to serve as an assistive technology for people with paralysis. Although the performance of intracortical BMI systems has improved markedly over the last two decades, errors, such as selecting the wrong key during typing, still occur, and their performance remains far from able-bodied performance.

Previously, it was unknown whether errors could be detected from the same brain region traditionally used to decode the BMI user's movement intention: the motor cortex. In their work, Even-Chen and colleagues found that when errors occur, a characteristic pattern of brain activity can be observed. That pattern enabled them to detect mistakes with high accuracy shortly after, and in some cases even before, they occurred.

This finding encouraged them to develop and implement a first-of-its-kind "detect-and-act" error system. The system reads the user's mind, detects when the user thinks an error has occurred, and can automatically "undo" or "prevent" it. The detect-and-act system works independently of, and in parallel with, a traditional movement BMI that estimates the user's movement intention (see figure). In a challenging BMI task that produced substantial errors, this approach improved BMI performance. With the detect-and-act system, hard tasks have fewer errors and become easier, and the use of a BMI becomes smoother and less frustrating.
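Conceptually, the detect-and-act idea amounts to two decoders running side by side on the same neural data: one drives the cursor, the other watches for the neural error signature and rolls back the last action. The Python sketch below is illustrative only and is not the authors' implementation; the decoder objects, the interface methods, and the error threshold are hypothetical placeholders.

```python
# Illustrative sketch of a detect-and-act control loop running alongside a
# movement decoder. This is NOT the authors' implementation; the decoder
# objects, the interface methods, and the threshold are hypothetical.

ERROR_THRESHOLD = 0.9  # assumed confidence required before acting on a detected error

def bmi_control_loop(neural_stream, movement_decoder, error_decoder, interface):
    """For each bin of neural activity, decode intended movement as usual and,
    in parallel, estimate whether the user perceived an error; if so, undo the
    last selection."""
    for features in neural_stream:  # e.g., binned firing rates from motor cortex
        # Traditional path: map neural activity to intended cursor movement.
        velocity = movement_decoder.decode(features)
        interface.move_cursor(velocity)

        # Parallel path: probability that the user just perceived an error.
        p_error = error_decoder.decode(features)
        if p_error > ERROR_THRESHOLD:
            interface.undo_last_selection()  # "undo" (or prevent) the erroneous action
```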

A detect-and-act system could potentially be used to improve how quickly people with paralysis can type or control a robotic arm using a BMI, for example by automatically correcting a mistake as they type or by stopping a robotic arm that is about to knock over their coffee. While this work was done in a pre-clinical trial with monkeys, Even-Chen and colleagues have also presented encouraging preliminary results from a clinical trial (BrainGate2) at a conference, showing the potential for translation to humans.

 

Read more: Journal of Neural Engineering, "Augmenting intracortical brain-machine interface with neurally driven error detectors."
Additional authors include Sergey Stavisky, Jonathan Kao, Stephen Ryu, and Krishna Shenoy. 

 

October 2017

Ruishan Liu (PhD candidate) received the Best Poster Award at the Bay Area Machine Learning Symposium, held October 19, 2017. Ruishan is a member of the Stanford Laboratory for Machine Learning group, advised by Professor James Zou. She develops algorithms and theory in machine learning and reinforcement learning, and is also interested in applications in genomics and healthcare.

 

Poster Title:
"The Effects of Memory Replay in Reinforcement Learning"

Poster Abstract:
Experience replay is a key technique behind many recent advances in deep reinforcement learning. Despite its widespread application, very little is understood about the properties of experience replay. How does the amount of memory kept affect learning dynamics? Does it help to prioritize certain experiences?

In our work, we address these questions by formulating a dynamical systems ODE model of Q-learning with experience replay. We derive analytic solutions of the ODE for a simple setting. We show that even in this very simple setting, the amount of memory kept can substantially affect the agent's performance. Too much or too little memory both slow down learning.

We also propose a simple algorithm for adaptively changing the memory buffer size, which achieves consistently good empirical performance.
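The abstract's central quantity, the amount of memory kept, corresponds to the capacity of the replay buffer. As a point of reference, here is a minimal sketch of tabular Q-learning with a fixed-capacity replay buffer; it is illustrative only and is not the authors' ODE model or their adaptive buffer-sizing algorithm. It assumes a Gym-style environment with discrete, hashable states, where env.reset() returns the state and env.step() returns (next_state, reward, done, info); the function name and default parameters are likewise placeholders.

```python
import random
from collections import deque, defaultdict

# Minimal tabular Q-learning with a bounded experience replay buffer.
# `capacity` is the "amount of memory kept"; this sketch is illustrative and
# is not the authors' ODE model or adaptive buffer-sizing algorithm.

def q_learning_with_replay(env, episodes=500, capacity=1000, batch_size=32,
                           alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)            # Q[(state, action)] -> estimated value
    buffer = deque(maxlen=capacity)   # oldest experiences are evicted when full
    actions = list(range(env.action_space.n))

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])

            next_state, reward, done, _ = env.step(action)
            buffer.append((state, action, reward, next_state, done))
            state = next_state

            # Replay: update Q on a random minibatch of stored transitions.
            batch = random.sample(buffer, min(batch_size, len(buffer)))
            for s, a, r, s2, d in batch:
                target = r if d else r + gamma * max(q[(s2, a2)] for a2 in actions)
                q[(s, a)] += alpha * (target - q[(s, a)])
    return q
```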

 

Congratulations to Ruishan!

July 2017

Kirby Smithe (PhD candidate) received first place for his presentation, "High-field transport and velocity saturation in CVD monolayer MoS2" at the EDISON 20 Conference in July.

All student presenters were ranked by a committee composed of members of the International Advisory Committee, which evaluated more than 25 presentations and posters. Kirby's award is accompanied by $300 and a commemorative glass trophy.

 

Kirby's research involves growth and material characterization of 2D semiconductors and engineering 2D electronic devices for circuit-level applications. He is the recipient of the Stanford Graduate Fellowship as well as the NSF Graduate Fellowship. Kirby is part of the Pop Lab research group, advised by Professor Eric Pop.

 

Congratulations to Kirby!

July 2017

PhD candidates Alex Gabourie and Saurabh Suryavanshi received the Best Paper Award at the 17th IEEE International Conference on Nanotechnology (IEEE NANO 2017). Their paper is titled "Thermal Boundary Conductance of the MoS2-SiO2 Interface."

Award candidates were nominated by the program committee together with the award committee based on abstract ratings. Winners were then selected by the award committee based on track chairs' recommendations of excellent final papers and on ratings of the overall quality of the final paper and presentation provided by session chairs and invited speakers.

Saurabh and Alex are part of the Pop Lab.

Congratulations Alex & Saurabh!

The paper's authors are Saurabh Vinayak Suryavanshi, Alexander Joseph Gabourie, Amir Barati Farimani, Eilam Yalon and Eric Pop.

Conference website: 2017.ieeenano.org

October 2017

Congratulations to PhD candidates Connor McClellan and Fiona Ching-Hua Wang. Each received a Best in Session award at TechCon 2017, held in Austin, Texas.

  • Connor's paper, "Effective n-type Doping of Monolayer MoS2 by AlO(x)," was presented in the 2-D and TMD Materials and Devices: I session. Professor Eric Pop is Connor's advisor.

  • Fiona's paper, "N-type Black Phosphorus Transistor with Low Work Function Contacts," was presented in the 2-D and TMD Materials and Devices: III session. Professor H.-S. Philip Wong is Fiona's advisor.

Each was presented with a certificate and medal during the final event of SRC TECHCON 2017.

Kai Zang (PhD '17)
October 2017

Kai Zang's (PhD '17) paper, published in Nature Communications, describes how nanotextured silicon can absorb more photons, improving the effectiveness of solar cells. This research also resulted in a second discovery: a way to improve collision-avoidance technology in vehicles.

Professor Jim Harris said he always thought Zang's texturing technique was a good way to improve solar cells. "But the huge ramp up in autonomous vehicles and LIDAR suddenly made this 100 times more important," he said.

The researchers figured out how to create a very thin layer of silicon that could absorb as many photons as a much thicker layer of the costly material. Specifically, rather than laying the silicon flat, they nanotextured the surface of the silicon in a way that created more opportunities for light particles to be absorbed. Their technique increased photon absorption rates for the nanotextured solar cells compared to traditional thin silicon cells, making more cost-effective use of the material.

After the researchers shared these efficiency figures, engineers working on autonomous vehicles began asking whether this texturing technique could help them get more accurate results from a collision-avoidance technology called LIDAR, which is conceptually like sonar except that it uses light rather than sound waves to detect objects in the car's travel path.

In their Nature Communications paper, the team reports that their textured silicon can capture as many as three to six times more of the returning photons than today's LIDAR receivers. They believe this will enable self-driving car engineers to design high-performance, next-generation LIDAR systems that would continuously send out a single laser pulse in all directions. The reflected photons would be captured by an array of textured silicon detectors, creating moment-to-moment maps of pedestrian-filled city crosswalks.

Harris said the texturing technology could also help solve two other LIDAR snags unique to self-driving cars: potential distortions caused by heat and the machine equivalent of peripheral vision. More information is available on the Harris Group research website.

Excerpted from "A new way to improve solar cells can also benefit self-driving cars," Stanford Engineering, October 2, 2017.

September 2017

During Spring quarter, lab64's expertise and tools were called into action for an unusual objective: the Palo Alto Code:ART Festival 2017! Mateo Garcia, an undergraduate majoring in computer science and art practice, used lab64 to help him realize his embedded systems art installation.

'Feng Shui: Flow of Energy' was created by Mateo Garcia (B.A.S. '18). His installation was on display at 455 Bryant Street, incorporating three levels of the parking garage stairway. Strands of LED lights were programmatically controlled, responding to various inputs and commands. Mateo's installation represented the flow of light energy from the sun to the earth. The Code:ART installations were on display throughout downtown Palo Alto for one weekend. More information is available from the City of Palo Alto Code:ART program.

Maker lab64, housed in the Packard Building, is available 24/7 to Stanford students. Embedded systems projects like Mateo's are encouraged and supported by lab64.

"This project would not have been possible without the dedicated time and energy of Steven Clark, who advised me on the design and engineering of this work," states Mateo. Steven Clark is the Instructional Labs Manager. Information about EE's lab64 can be found under EE's student resources, Maker lab64

 

David Hallac, EE PhD candidate
September 2017

David Hallac, EE PhD candidate, is the lead author of "Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data," which was selected to receive both the KDD 2017 Best Paper Runner-Up Award and the Best Student Paper Runner-Up Award. Co-authors include research assistant Sagar Vare (CS), Professor Stephen Boyd (EE), and Professor Jure Leskovec (CS).

ACM SIGKDD is the Association for Computing Machinery's Special Interest Group on Knowledge Discovery and Data Mining. The awards recognize papers presented at the annual SIGKDD conference, KDD 2017, that advance the fundamental understanding of the field of knowledge discovery and data mining.

Both awards were presented at the KDD 2017 ceremonies, held in Halifax, Canada, in August. The group will receive individual award plaques as well as a check.

 

Congratulations to David, Sagar, Stephen and Jure on this special recognition!

View the abstract of "Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data."

Orly Liba (PhD candidate ’18)
July 2017

Orly Liba (PhD candidate '18) is the lead author of a study published in Nature Communications. Together with her advisor, Professor Adam de la Zerda, and fellow researchers, she has devised a way to improve the quality of images obtained through optical coherence tomography (OCT).

The relatively simple, low-cost fix — entailing a pair of lenses, a piece of ground glass and some software tweaks — erases blemishes that have bedeviled images obtained via OCT since its invention in 1991. This improvement, combined with the technology's ability to optically penetrate up to 2 millimeters into tissue, could enable physicians to perform "virtual biopsies," visualizing tissue in three dimensions at microscope-quality resolution without excising any tissue from patients.

Their study describes how the researchers tested the enhancement in two different commercially available OCT devices. They were able to view cell-scale features in intact tissues, including in a mouse's ear, retina and cornea, as well as Meissner's corpuscle, found in the skin of a human fingertip.

"We saw sebaceous glands, hair follicles, blood vessels, lymph vessels and more," Liba said.

Other Stanford co-authors of the study are former postdoctoral scholars Matthew Lew, PhD, and Debasish Sen, PhD; graduate student Elliott SoRelle; research assistant Rebecca Dutta; professor of ophthalmology Darius Moshfeghi, MD; and professor of physics and of molecular and cellular physiology Steven Chu, PhD.

Excerpted from "Scientists turbocharge high-resolution, 3-D imaging," published on Stanford Medicine's News Center, June 20, 2017.
