
October 2017

A dozen teams of EE students came together Friday afternoon to compete in EE's Annual Pumpkin Carving Contest.

This year's event was hosted in the Packard Atrium, with plenty of candy, refreshments, and music. View photo album.

Judges included student services staff Rachel Pham, graduate student Jerry Shi, and Professor Juan Rivas-Davila. Judging criteria included completeness, technical skill, creativity, and costumes.

In addition to the judges' scores, ballots were available for attendees to vote for their favorite pumpkin; these votes were added to the judges' totals and counted toward the final result.

Third place went to the "Eevee Evolution" team of David Zeng, Tracey Hong, and Neal Jean.
Second place went to the "EE42" team, whose members are Katherine Kowalski, Amit Kohli, Justin Babauta, and Alec Preciado.
First place went to the "Pumpkin Carving Dream Team" of Nicole Grimwood, Nicolo Maganzini, Tori Fujinami, and Tong Mu.

 

Thanks to all of our staff, faculty, and students for your enthusiastic participation!

October 2017

Ruishan Liu (PhD candidate) received the Best Poster Award at the Bay Area Machine Learning Symposium on October 19, 2017. Ruishan is a member of the Stanford Laboratory for Machine Learning group, advised by Professor James Zou. She develops algorithms and theory in machine learning and reinforcement learning, and is also interested in applications in genomics and healthcare.

 

Poster Title:
"The Effects of Memory Replay in Reinforcement Learning"

Poster Abstract:
Experience replay is a key technique behind many recent advances in deep reinforcement learning. Despite its widespread application, very little is understood about the properties of experience replay. How does the amount of memory kept affect learning dynamics? Does it help to prioritize certain experiences?

In our work, we address these questions by formulating a dynamical systems ODE model of Q-learning with experience replay. We derive analytic solutions of the ODE for a simple setting. We show that even in this very simple setting, the amount of memory kept can substantially affect the agent's performance. Too much or too little memory both slow down learning.

We also propose a simple algorithm for adaptively changing the memory buffer size, which achieves consistently good empirical performance.
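The poster's algorithm itself is not reproduced here, but as a purely illustrative sketch of the data structure involved, the snippet below implements a minimal experience replay buffer with uniform sampling and a hypothetical resize() step standing in for an adaptive buffer-size adjustment (the class and method names are assumptions, not the paper's code).

```python
# Illustrative sketch only, not code from the paper.
import random
from collections import deque


class ReplayBuffer:
    """Fixed-capacity experience memory with uniform sampling."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        # A transition is typically (state, action, reward, next_state, done).
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniformly sample a minibatch of stored transitions for a Q-learning update.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def resize(self, new_capacity):
        # Hypothetical adaptive step: change capacity, keeping the most recent experiences.
        self.buffer = deque(self.buffer, maxlen=new_capacity)


if __name__ == "__main__":
    memory = ReplayBuffer(capacity=1000)
    for t in range(50):
        memory.add((t, 0, 0.0, t + 1, False))  # dummy transitions
    batch = memory.sample(8)
    memory.resize(200)  # e.g., shrink the buffer partway through training
    print(len(batch), len(memory.buffer))
```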

 

Congratulations to Ruishan!

July 2017

Kirby Smithe (PhD candidate) received first place for his presentation, "High-field transport and velocity saturation in CVD monolayer MoS2," at the EDISON 20 Conference in July.

All student presenters were ranked by a committee composed of members of the International Advisory Committee, which evaluated more than 25 presentations and posters. Kirby's award is accompanied by $300 and a glass commemorative trophy.

 

Kirby's research involves growth and material characterization of 2D semiconductors and engineering 2D electronic devices for circuit-level applications. He is the recipient of the Stanford Graduate Fellowship as well as the NSF Graduate Fellowship. Kirby is part of the Pop Lab research group, advised by Professor Eric Pop.

 

Congratulations to Kirby!

July 2017

PhD candidates Alex Gabourie and Saurabh Suryavanshi received the Best Paper Award at the 17th IEEE International Conference on Nanotechnology (IEEE NANO 2017). Their paper is titled "Thermal Boundary Conductance of the MoS2-SiO2 Interface."

Award candidates were nominated by the program committee together with the award committee, based on abstract ratings. Winners were then selected by the award committee based on track chairs' recommendations of outstanding final papers and on ratings of the overall quality of the final paper and presentation by session chairs and invited speakers.

Saurabh and Alex are part of the Pop Lab.

Congratulations, Alex & Saurabh!

The paper's authors are Saurabh Vinayak Suryavanshi, Alexander Joseph Gabourie, Amir Barati Farimani, Eilam Yalon and Eric Pop.

 2017.ieeenano.org

October 2017

Congratulations to PhD candidates Connor McClellan and Fiona Ching-Hua Wang. Each received a Best in Session award at TechCon 2017, held in Austin, Texas.

  • Connor's paper, "Effective n-type Doping of Monolayer MoS2 by AlO(x)," was presented in the 2-D and TMD Materials and Devices: I session. Professor Eric Pop is Connor's advisor.

  • Fiona's paper, "N-type Black Phosphorus Transistor with Low Work Function Contacts," was presented in the 2-D and TMD Materials and Devices: III session. Professor H.-S. Philip Wong is Fiona's advisor.

They were presented with a certificate and medal during the final event of SRC TechCon 2017.

Kai Zang (PhD '17)
October 2017

A paper by Kai Zang (PhD '17), published in Nature Communications, describes how nanotextured silicon can absorb more photons, improving the effectiveness of solar cells. The research also led to a second application: improving collision-avoidance technology in vehicles.

Professor Jim Harris said he always thought Zang's texturing technique was a good way to improve solar cells. "But the huge ramp up in autonomous vehicles and LIDAR suddenly made this 100 times more important," he says.

The researchers figured out how to create a very thin layer of silicon that could absorb as many photons as a much thicker layer of the costly material. Specifically, rather than laying the silicon flat, they nanotextured the surface of the silicon in a way that created more opportunities for light particles to be absorbed. Their technique increased photon absorption rates for the nanotextured solar cells compared to traditional thin silicon cells, making more cost-effective use of the material.

After the researchers shared these efficiency figures, engineers working on autonomous vehicles began asking whether this texturing technique could help them get more accurate results from a collision-avoidance technology called LIDAR, which is conceptually like sonar except that it uses light rather than sound waves to detect objects in the car's travel path.

In their Nature Communications paper, the team reports that their textured silicon can capture as many as three to six times more of the returning photons than today's LIDAR receivers. They believe this will enable self-driving car engineers to design high-performance, next-generation LIDAR systems that would continuously send out a single laser pulse in all directions. The reflected photons would be captured by an array of textured silicon detectors, creating moment-to-moment maps of pedestrian-filled city crosswalks.

Harris said the texturing technology could also help solve two other LIDAR snags unique to self-driving cars: potential distortions caused by heat and the machine equivalent of peripheral vision. More information is available on the Harris Group research website.

Excerpted from "A new way to improve solar cells can also benefit self-driving cars," Stanford Engineering, October 2, 2017.

September 2017

During Spring quarter, lab64's expertise and tools were called into action for an unusual objective: the Palo Alto Code:ART Festival 2017! Mateo Garcia, an undergraduate majoring in computer science and art practice, used lab64 to realize his embedded systems art installation.

'Feng Shui: Flow of Energy' was created by Mateo Garcia (B.A.S. '18). His installation was on display at 455 Bryant Street, spanning three levels of the parking garage stairway. Strands of LED lights were programmatically controlled, responding to various inputs and commands, and the installation represented the flow of light energy from the sun to the earth. The Code:ART installations were on display for one weekend throughout downtown Palo Alto. More information is available from the City of Palo Alto Code:ART site.

Maker lab64, housed in the Packard Building, is available 24/7 for Stanford students. Embedded systems projects like Mateo's are encouraged and supported by lab64.

"This project would not have been possible without the dedicated time and energy of Steven Clark, who advised me on the design and engineering of this work," said Mateo. Steven Clark is the Instructional Labs Manager. Information about EE's lab64 can be found under EE's student resources, Maker lab64.

 

David Hallac, EE PhD candidate
September 2017

David Hallac, EE PhD candidate, is the lead author of "Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data," which has been selected to receive both the KDD 2017 Best Paper Runner-Up Award and the Best Student Paper Runner-Up Award. Co-authors include research assistant Sagar Vare (CS), Professor Stephen Boyd (EE), and Professor Jure Leskovec (CS).

ACM SIGKDD is the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining. The awards recognize papers presented at the annual SIGKDD conference, KDD 2017, that advance the fundamental understanding of the field of knowledge discovery in data and data mining.

The paper received both runner-up awards at the KDD 2017 ceremonies held in Halifax, Canada, in August. The group will receive individual award plaques as well as a check.

 

Congratulations to David, Sagar, Stephen and Jure on this special recognition!

View the abstract of "Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data."

Orly Liba (PhD candidate ’18)
July 2017

Orly Liba (PhD candidate '18) is the lead author of a study published in Nature Communications. Together with her advisor, Professor Adam de la Zerda, and fellow researchers, she has devised a way to improve the quality of images obtained through optical coherence tomography (OCT).

The relatively simple, low-cost fix — entailing a pair of lenses, a piece of ground glass and some software tweaks — erases blemishes that have bedeviled images obtained via OCT since its invention in 1991. This improvement, combined with the technology's ability to optically penetrate up to 2 millimeters into tissue, could enable physicians to perform "virtual biopsies," visualizing tissue in three dimensions at microscope-quality resolution without excising any tissue from patients.

Their study describes how the researchers tested the enhancement in two different commercially available OCT devices. They were able to view cell-scale features in intact tissues, including in a mouse's ear, retina and cornea, as well as Meissner's corpuscle, found in the skin of a human fingertip.

"We saw sebaceous glands, hair follicles, blood vessels, lymph vessels and more," Liba said.

Other Stanford co-authors of the study are former postdoctoral scholars Matthew Lew, PhD, and Debasish Sen, PhD; graduate student Elliott SoRelle; research assistant Rebecca Dutta; professor of ophthalmology Darius Moshfeghi, MD; and professor of physics and of molecular and cellular physiology Steven Chu, PhD.

Excerpted from "Scientists turbocharge high-resolution, 3-D imaging," published on Stanford Medicine's News Center, June 20, 2017.

Professor Gordon Wetzstein, left; postdoctoral research fellow Donald Dansereau (Image credit: L.A. Cicero)
July 2017

A new 4D camera designed by Professor Gordon Wetzstein and postdoc Dr. Donald Dansereau captures light field information over a 138° field of view.

The difference between looking through a normal camera and the new design is like the difference between looking through a peephole and a window, the scientists said.

"A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering," Dansereau said. "Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."

That additional information comes from a type of photography called light field photography, first described in 1996 by Stanford Professors Marc Levoy and Pat Hanrahan. Light field photography captures the same image as a conventional 2D camera plus information about the direction and distance of the light hitting the lens, creating what's known as a 4D image. A well-known feature of light field photography is that it allows users to refocus images after they are taken because the images include information about the light position and direction. Robots might use this to see through rain and other things that could obscure their vision.
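As a rough illustration of that refocusing idea (not the researchers' actual pipeline; the array layout and the slope parameter here are assumptions for this sketch), a light field stored as a grid of sub-aperture views can be refocused after capture by shifting each view in proportion to its offset from the central view and averaging:

```python
# Illustrative "shift-and-add" refocusing of a 4D light field (U x V views of H x W pixels).
import numpy as np


def refocus(light_field, slope):
    """Refocus by shifting each sub-aperture view proportionally to its
    offset from the central view, then averaging all views."""
    U, V, H, W = light_field.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(slope * (u - u0)))
            dx = int(round(slope * (v - v0)))
            # np.roll gives a whole-pixel shift; a real pipeline would interpolate sub-pixel shifts.
            out += np.roll(light_field[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)


# Example: a synthetic 5x5 grid of 64x64 views, refocused at two different depths.
lf = np.random.rand(5, 5, 64, 64)
near = refocus(lf, slope=1.0)
far = refocus(lf, slope=-1.0)
```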

The extremely wide field of view, which encompasses nearly a third of the circle around the camera, comes from a specially designed spherical lens. However, this lens also produced a significant hurdle: how to translate a spherical image onto a flat sensor. Previous approaches to solving this problem had been heavy and error prone, but combining the optics and fabrication expertise of UCSD and the signal processing and algorithmic expertise of Wetzstein's lab resulted in a digital solution to this problem that not only leads to the creation of these extra-wide images but enhances them.

This camera system's wide field of view, detailed depth information and potential compact size are all desirable features for imaging systems incorporated in wearables, robotics, autonomous vehicles and augmented and virtual reality.

"Many research groups are looking at what we can do with light fields but no one has great cameras. We have off-the-shelf cameras that are designed for consumer photography," said Dansereau. "This is the first example I know of a light field camera built specifically for robotics and augmented reality. I'm stoked to put it into peoples' hands and to see what they can do with it."

 

Two 138° light field panoramas and a depth estimate of the second panorama. (Image credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego)

 


Read more at Professor Wetzstein's research site, the Stanford Computational Imaging Lab.

Excerpted from Stanford News, "New camera designed by Stanford researchers could improve robot vision and virtual reality," July 21, 2017.
