student

David Hallac, EE PhD candidate
September 2017

David Hallac, EE PhD candidate, is the lead author of "Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data," which has been selected to receive both the KDD 2017 Best Paper Runner-Up Award and the Best Student Paper Runner-Up Award. Co-authors include research assistant Sagar Vare (CS), Professor Stephen Boyd (EE), and Professor Jure Leskovec (CS).

ACM SIGKDD is the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining. The award recognizes papers presented at the annual SIGKDD conference, KDD 2017, that advance the fundamental understanding of the field of knowledge discovery and data mining.

The paper received both the Best Paper Runner-Up Award and the Best Student Paper Runner-Up Award at the KDD 2017 ceremonies, held in Halifax, Canada, in August. The group will receive individual award plaques as well as a check.

 

Congratulations to David, Sagar, Stephen and Jure on this special recognition!

View the abstract of "Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data."

Orly Liba (PhD candidate ’18)
July 2017

Orly Liba (PhD candidate '18) is the lead author of a study published in Nature Communications. Her advisor, Professor Adam de la Zerda, and fellow researchers have devised a way to improve the quality of images obtained through optical coherence tomography (OCT).

The relatively simple, low-cost fix — entailing a pair of lenses, a piece of ground glass and some software tweaks — erases blemishes that have bedeviled images obtained via OCT since its invention in 1991. This improvement, combined with the technology's ability to optically penetrate up to 2 millimeters into tissue, could enable physicians to perform "virtual biopsies," visualizing tissue in three dimensions at microscope-quality resolution without excising any tissue from patients.

Their study describes how the researchers tested the enhancement in two different commercially available OCT devices. They were able to view cell-scale features in intact tissues, including in a mouse's ear, retina and cornea, as well as Meissner's corpuscle, found in the skin of a human fingertip.

"We saw sebaceous glands, hair follicles, blood vessels, lymph vessels and more," Liba said.

Other Stanford co-authors of the study are former postdoctoral scholars Matthew Lew, PhD, and Debasish Sen, PhD; graduate student Elliott SoRelle; research assistant Rebecca Dutta; professor of ophthalmology Darius Moshfeghi, MD; and professor of physics and of molecular and cellular physiology Steven Chu, PhD.

Excerpted from "Scientists turbocharge high-resolution, 3-D imaging," published on Stanford Medicine's News Center, June 20, 2017.

Professor Gordon Wetzstein, left; postdoctoral research fellow Donald Dansereau (Image credit: L.A. Cicero)
July 2017

A new 4D camera designed by Professor Gordon Wetzstein and postdoc Dr. Donald Dansereau captures light field information over a 138° field of view.

The difference between looking through a normal camera and the new design is like the difference between looking through a peephole and a window, the scientists said.

"A 2D photo is like a peephole because you can't move your head around to gain more information about depth, translucency or light scattering," Dansereau said. "Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess."

That additional information comes from a type of photography called light field photography, first described in 1996 by EE Professors Marc Levoy and Pat Hanrahan. Light field photography captures the same image as a conventional 2D camera plus information about the direction and distance of the light hitting the lens, creating what's known as a 4D image. A well-known feature of light field photography is that it allows users to refocus images after they are taken because the images include information about the light position and direction. Robots might use this to see through rain and other things that could obscure their vision.
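The refocus-after-capture idea can be illustrated with a toy "shift-and-sum" sketch in Python. This is only a minimal illustration of the general light field principle, not the pipeline used by this camera; the array shapes, the `refocus` function, and the `alpha` parameter are all hypothetical.

```python
import numpy as np

def refocus(light_field, alpha):
    """Toy shift-and-sum refocusing of a 4D light field.

    light_field: array of shape (U, V, X, Y), i.e. a grid of
        sub-aperture views of the same scene.
    alpha: refocus parameter; each view is shifted in proportion
        to its (u, v) offset from the central view, then all views
        are averaged. Different alphas bring different depths into focus.
    """
    U, V, X, Y = light_field.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            # np.roll applies an integer shift; real systems use
            # sub-pixel interpolation instead of wrap-around shifts.
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Synthetic 5x5 grid of 32x32 views stands in for a captured light field.
lf = np.random.rand(5, 5, 32, 32)
img = refocus(lf, alpha=1.0)
```

With `alpha = 0` the views are simply averaged (focus at infinity for this toy model); varying `alpha` sweeps the synthetic focal plane through the scene.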

The extremely wide field of view, which encompasses nearly a third of the circle around the camera, comes from a specially designed spherical lens. However, this lens also presented a significant hurdle: how to translate a spherical image onto a flat sensor. Previous approaches to this problem had been heavy and error-prone. By combining the optics and fabrication expertise of UC San Diego with the signal processing and algorithmic expertise of Wetzstein's lab, the team arrived at a digital solution that not only enables the creation of these extra-wide images but also enhances them.

This camera system's wide field of view, detailed depth information and potential compact size are all desirable features for imaging systems incorporated in wearables, robotics, autonomous vehicles and augmented and virtual reality.

"Many research groups are looking at what we can do with light fields but no one has great cameras. We have off-the-shelf cameras that are designed for consumer photography," said Dansereau. "This is the first example I know of a light field camera built specifically for robotics and augmented reality. I'm stoked to put it into peoples' hands and to see what they can do with it."

 

Two 138° light field panoramas and a depth estimate of the second panorama. (Image credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego)

Read more at Professor Wetzstein's research site, the Stanford Computational Imaging Lab.

Excerpted from Stanford News, "New camera designed by Stanford researchers could improve robot vision and virtual reality," July 21, 2017.

Yuanfang Li and Dr. Ardavan Pedram: Best Paper Award, IEEE ASAP
July 2017

Co-authors Yuanfang Li (MS candidate) and Dr. Ardavan Pedram received the Best Paper Award at the 28th annual IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP).

The conference covers the theory and practice of application-specific systems, architectures and processors – specifically building upon traditional strengths in areas such as computer arithmetic, cryptography, compression, signal and image processing, network processing, reconfigurable computing, application-specific instruction-set processors, and hardware accelerators.

Yuanfang Li is an M.S. candidate and Dr. Ardavan Pedram is a senior research associate who manages the PRISM Project. The PRISM project enables the design of reconfigurable architectures to accelerate the building blocks of machine learning, high performance computing, and data science routines.

 

Congratulations to Yuanfang and Ardavan for their well-deserved award!

 

Abstract "CATERPILLAR: Coarse Grain Reconfigurable Architecture for Accelerating the Training of Deep Neural Networks":
Accelerating the inference of a trained DNN is a well studied subject. In this paper we switch the focus to the training of DNNs. The training phase is compute intensive, demands complicated data communication, and contains multiple levels of data dependencies and parallelism. This paper presents an algorithm/architecture space exploration of efficient accelerators to achieve better network convergence rates and higher energy efficiency for training DNNs. We further demonstrate that an architecture with hierarchical support for collective communication semantics provides flexibility in training various networks performing both stochastic and batched gradient descent based techniques. Our results suggest that smaller networks favor non-batched techniques while performance for larger networks is higher using batched operations.
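The abstract's closing observation contrasts non-batched (stochastic) and batched gradient descent. As a minimal illustration of the two update styles it compares, and not of the CATERPILLAR architecture itself, the following toy least-squares example in Python shows that the only difference is how many samples feed each update (all names and values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))           # 200 samples, 4 features
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true                          # noiseless targets

def grad(w, Xb, yb):
    """Least-squares gradient averaged over a batch."""
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

# Stochastic: one sample per update (one pass over the data).
w_sgd = np.zeros(4)
for i in range(len(X)):
    w_sgd -= 0.05 * grad(w_sgd, X[i:i + 1], y[i:i + 1])

# Batched: the full dataset per update, same number of updates.
w_batch = np.zeros(4)
for _ in range(200):
    w_batch -= 0.05 * grad(w_batch, X, y)
```

The stochastic variant updates after every sample, which keeps working-set sizes small (favoring small networks on an accelerator), while the batched variant amortizes one gradient computation over many samples, which maps better onto wide parallel hardware for large networks.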

June 2017

Congratulations to Dianmin Lin (PhD '16), who has been awarded the 2017 QEP Doctoral Research Prize jointly with Dr. Jamie Francis-Jones (University of Bath).

The QEP Doctoral Research Prize recognizes students who have conducted work of an exceptional standard in the field of quantum electronics and photonics. The winning student receives an award of £250 and a certificate.

Dr. Dianmin Lin is recognized for the design and demonstration of all-dielectric (silicon) phase-gradient metasurface optical elements, such as axicons, flat lenses and blazed gratings, operating in transmission mode at visible wavelengths. She also demonstrated multifunctional metasurfaces that provide new or combined functions that are difficult, if not impossible, to achieve with conventional optical components. Her research has been published in Advanced Materials, Nano Letters, and Science. Three patent applications have been filed for her work at Stanford; one has been issued and two are pending.

Dianmin is currently a senior optical scientist working on augmented reality.

 

Congratulations to Dianmin on her well-deserved recognition and award!

The Brongersma Group

Pictured: The Brongersma Group, which focuses on the development and understanding of nanophotonic devices. As part of a worldwide research and development effort on 'metamaterials' - manmade media that possess unique properties not found in nature - students in the group aim to nanostructure the layered materials in conventional optoelectronic devices to increase their performance or to achieve entirely new functions. The group has successfully applied this approach to solar energy production, information technology, and optical imaging.


Excerpted from IOP's 'QEP Group Prize.'

June 2017

By Julie Chang, PhD candidate

The seventh annual IEEE International Conference on Computational Photography (ICCP) was hosted at Stanford University on May 12-14, 2017. Over 200 students, post-docs, professors, and entrepreneurs from around the world came together to discuss their research in this area. Professor Gordon Wetzstein from Stanford served as program chair alongside Laura Waller from UC Berkeley and Clem Karl from Boston University.

Wetzstein leads the Computational Imaging group at Stanford, which works on advancing the capabilities of camera and display technology through interdisciplinary research in applied math, optics, human perception, computing, and electronics. Active areas of research include virtual reality displays, advanced imaging systems, and optimization-based image processing. Wetzstein also teaches the popular Virtual Reality course (EE 267) as well as Computational Imaging and Displays (EE 367) and Digital Image Processing (EE 368).

Several members of Wetzstein's lab presented their work at the conference. Isaac Kauvar (co-advised by Karl Deisseroth) and Julie Chang's paper on "Aperture interference and the volumetric resolution of light field fluorescence microscopy" was accepted for a talk. Posters and demos from Wetzstein's group included Nitish Padmanaban's provocatively titled project on "Making Virtual Reality Better Than Reality," Robert Konrad's spinning VR camera nicknamed "Vortex," and Felix Heide's domain-specific language "ProxImaL" for efficient image optimization.

ICCP 2017 consisted of nine presentation sessions, each with several accepted and invited talks, organized around topics such as time-of-flight and computational illumination, image processing and optimization, computational microscopy, and turbulence and coherence. There was a mix of hardware and software projects for a wide variety of applications, ranging from gigapixel videos to seeing in the dark to photographic steganography. One keynote speaker was scheduled for each day. In Friday's keynote, Professor Karl Deisseroth (Stanford) discussed the importance of optical tools, namely optogenetics and advanced fluorescence microscopy, in helping elucidate the inner workings of the brain. The second keynote was given by Paul Debevec (USC/Google), who showed some of his team's work in computational relighting, both in Hollywood to make movies such as 'Gravity' possible and in the White House to construct Barack Obama's presidential bust. The final keynote speaker was Professor Sabine Susstrunk (EPFL), who spoke on the non-depth-measurement uses of near-infrared imaging in computational photography.

The conference this year also included an industry panel on computational photography start-ups, featuring seasoned experts Rajiv Laroia of Light, Ren Ng of Lytro, Jingyi Yu, and Kartik Venkataraman of Pelican Imaging. Kari Pulli of Meta chaired a lively discussion covering the risks and thrills of startups, comparisons with working at large companies, and the future of the computational photography industry.

The best paper award went to Christian Reinbacher, Gottfried Munda, and Thomas Pock for their work on real-time panoramic tracking for event cameras. By popular vote, the best poster award was presented to Katie Bouman et al. for "Turning Corners into Cameras," a method of seeing around corners by analyzing the shadows produced at a wall corner, and the best demo award went to Grace Kuo et al. for "DiffuserCam," which allows imaging with a diffuser in place of a lens.

 

May 2017

A research team led by EE Professor Jelena Vuckovic has spent the past several years working toward the development of nanoscale lasers and quantum technologies that might someday enable conventional computers to communicate faster and more securely using light instead of electricity. Vuckovic and her team, including doctoral candidate Kevin Fischer, lead author of a paper describing the project, believe that a modified nanoscale laser can be used to efficiently generate quantum light for fully protected quantum communication. "Quantum networks have the potential for secure end-to-end communication wherein the information channel is secured by the laws of quantum physics," Fischer says.

Signal processing is helping the IoT and other network technologies operate faster, more efficiently, and more reliably. Advanced research also promises to open new opportunities in key areas such as highly secure communication and various types of wireless networks.

The biggest challenge the researchers have faced so far is that quantum light is far weaker than the rest of the light emitted by a modified laser, making it difficult to detect. To address this obstacle, the team developed a method to filter out the unwanted light, making the quantum signal much easier to read. "Some of the light coming back from the modified laser is like noise, preventing us from seeing the quantum light," Fischer says. "We canceled it out to reveal and emphasize the quantum signal hidden beneath."

Though a promising demonstration of how to reveal quantum light, the technique is not yet ready for large-scale deployment. The Vuckovic group is working on scaling it for reliable application in a quantum network.

 

Excerpted from "A Networking Revolution Powered by Signal Processing," IEEE Signal Processing Magazine, January 2017.
Read full article (opens PDF)

April 2017

The Frederick Emmons Terman Engineering Award for Scholastic Achievement has been awarded to six EE undergraduate seniors. One of the School of Engineering's most selective academic honors, the Terman Award is based on overall academic performance and is presented to the top five percent of each year's graduating seniors.

Terman scholars are invited to attend a celebratory luncheon and encouraged to invite the most influential secondary school or other pre-college teacher who guided them during the formative stages of their academic career.  

Congratulations to all of the Terman Award recipients.

The Electrical Engineering seniors are:

• Darren Hau

• Min Cheol Kim

• Chayakorn Pongsiri

• Peter Franklin Satterthwaite

• Nick John Sovich

• Vivian Wang

The award is named after Fred Terman, the fourth Dean of the School of Engineering at Stanford, who served from 1944 to 1958 and then became Provost of the University. Along with President Wally Sterling, he is generally credited with starting the process that led Stanford to its present position among the world's leading universities.

 

Pictured above are the 2017 Terman Award recipients with their most influential teachers. (Image credit: Stanford School of Engineering)

 

Learn more about Frederick E. Terman on the EE History timeline. 

Terman image credit, Stanford Historical Photograph Collection.

 

Undergrad Vivian Wang (BS '17) is a 2017 Churchill Scholarship winner
March 2017

Congratulations to Vivian Wang (BS '17) on her well-deserved award!

As an undergraduate, Vivian has been involved in numerous activities on campus. She is a former co-director and teacher of Stanford Splash, which brings middle and high school students to campus to learn from Stanford students. Vivian has taught Splash courses since 2014; her most recent course was "Sewable Electronics."

Vivian has also been a teaching assistant for two of the department's most popular courses, "An Intro to Making: What is EE" and "Digital Systems Design." She was selected through a competitive process to be a peer tutor in math and physics. Vivian also participated in EE's REU program, doing research and eventually co-authoring a paper with Professor Jim Harris, and has worked as an undergraduate research assistant for Professor Amin Arbabian.

"I am grateful for the research and other experiences Stanford has provided me thus far and look forward to the scientific and cultural opportunities provided through the Churchill Scholarship," Wang said.

The goal of the Churchill Scholarships program, established at the request of Sir Winston Churchill, is to advance science and technology on both sides of the Atlantic, helping to ensure future prosperity and security.

 

Excerpted from the Stanford News article, "Stanford electrical engineering senior wins Churchill Scholarship."

March 2017

Kristen Lurie (PhD '16) and Audrey Bowden authored a paper, published in Biomedical Optics Express, that presents a computational method to reconstruct and visualize a 3D model of an organ from endoscopic video, capturing the organ's shape and surface appearance.

Although the team developed the technique for the bladder, it could be applied to other hollow organs where doctors routinely perform endoscopy, including the stomach or colon.

"We were the first group to achieve complete 3D bladder models using standard clinical equipment, which makes this research ripe for rapid translation to clinical practice," states Kristen Lurie (EE PhD, '16), lead author on the paper.

"The beauty of this project is that we can take data that doctors are already collecting," states Audrey.

One of the technique's advantages is that doctors don't have to buy new hardware or modify their techniques significantly. Through the use of advanced computer vision algorithms, the team reconstructed the shape and internal appearance of a bladder using the video footage from a routine cystoscopy, which would ordinarily have been discarded or not recorded in the first place.

"In endoscopy, we generate a lot of data, but currently they're just tossed away," said Joseph Liao, professor of Urology and co-author. According to Liao, these three-dimensional images could help doctors prepare for surgery. Lesions, tumors and scars in the bladder are hard to find, both initially and during surgery.

This technique is the first of its kind and still has room for improvement, the researchers said. Primarily, the three-dimensional models tend to flatten out bumps on the bladder wall, including tumors. With the model alone, this may make tumors harder to spot. The team is now working to advance the realism, in shape and detail, of the models.

Future directions, according to the researchers, include using the algorithm for disease and cancer monitoring within the bladder over time to detect subtle changes, as well as combining it with other imaging technologies.

 

Read the paper.

Excerpted from Stanford News, "Stanford scientists create three-dimensional bladder reconstruction."

 
