Graduate

ChEM-H Special Seminar: Inverse Problems and Unsupervised Learning with applications to Cryo-Electron Microscopy

Topic: 
Inverse Problems and Unsupervised Learning with applications to Cryo-Electron Microscopy
Abstract / Description: 

Cryo-Electron Microscopy (cryo-EM) is an imaging technology that is revolutionizing structural biology; the Nobel Prize in Chemistry 2017 was recently awarded to Jacques Dubochet, Joachim Frank and Richard Henderson "for developing cryo-electron microscopy for the high-resolution structure determination of biomolecules in solution".

Cryo-electron microscopes produce a large number of very noisy two-dimensional projection images of individual frozen molecules. Unlike in related methods, such as computed tomography (CT), the viewing direction of each image is unknown. The unknown directions, together with extreme levels of noise and additional technical factors, make determining the structure of the molecules challenging.

While other methods for structure determination, such as X-ray crystallography and nuclear magnetic resonance (NMR), measure ensembles of molecules, cryo-electron microscopes produce images of individual molecules. Therefore, cryo-EM could potentially be used to study mixtures of different conformations of molecules. Indeed, current algorithms have been very successful at analyzing homogeneous samples and can recover some distinct conformations mixed in solution, but the determination of multiple conformations, and in particular of continua of similar conformations (continuous heterogeneity), remains one of the open problems in cryo-EM.

I will discuss a one-dimensional discrete model problem, Heterogeneous Multireference Alignment (MRA), which captures many of the properties of the cryo-EM problem. I will then discuss the components we are introducing to address the problem of continuous heterogeneity in cryo-EM: 1. "hyper-molecules," a mathematical formulation of truly continuously heterogeneous molecules; 2. computational and numerical tools for expressing the associated priors; and 3. Bayesian algorithms for inverse problems, with an unsupervised-learning component, for recovering such hyper-molecules in cryo-EM.
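
For readers unfamiliar with the model problem, the following is a minimal sketch of the heterogeneous MRA generative model in its standard form: each observation is a randomly shifted, noisy copy of one of K unknown signals, with both the shift and the class label hidden. The function and parameter names are illustrative, not taken from the speaker's work.

```python
import numpy as np

def simulate_heterogeneous_mra(signals, n_obs, sigma, rng=None):
    """Generate noisy, randomly shifted copies of K unknown 1-D signals.

    signals : array of shape (K, L) -- the K underlying signals
    n_obs   : number of observations to draw
    sigma   : standard deviation of the additive Gaussian noise
    """
    rng = np.random.default_rng(rng)
    K, L = signals.shape
    labels = rng.integers(K, size=n_obs)   # hidden class of each observation
    shifts = rng.integers(L, size=n_obs)   # hidden cyclic shift of each observation
    obs = np.empty((n_obs, L))
    for i, (k, s) in enumerate(zip(labels, shifts)):
        obs[i] = np.roll(signals[k], s) + sigma * rng.normal(size=L)
    return obs, labels, shifts

# Example: two signals of length 50, observed 10,000 times at high noise.
rng = np.random.default_rng(0)
signals = rng.normal(size=(2, 50))
observations, _, _ = simulate_heterogeneous_mra(signals, n_obs=10_000, sigma=3.0, rng=1)
# The estimation task: recover `signals` (up to shifts and label permutation)
# from `observations` alone, without access to the hidden labels or shifts.
```

The cryo-EM analogue replaces the cyclic shift of a 1-D signal with an unknown 3-D rotation followed by tomographic projection of a 3-D density.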

Date and Time: 
Thursday, January 25, 2018 - 4:00pm
Venue: 
Allen 101X

SystemX Seminar: Coherent Ising machines for combinatorial optimization - Optical neural networks operating at the quantum limit

Topic: 
Coherent Ising machines for combinatorial optimization - Optical neural networks operating at the quantum limit
Abstract / Description: 

Optimization problems with discrete and continuous variables are ubiquitous in numerous important areas, including operations and scheduling, drug discovery, wireless communications, finance, integrated circuit design, compressed sensing, and machine learning. Despite rapid advances in both algorithms and digital computing technology, even modest-sized optimization problems that arise in practice may be very difficult to solve on modern digital computers. One alternative of current interest is adiabatic quantum computing (AQC), or quantum annealing (QA). Sophisticated AQC/QA devices are already under development, but providing dense connectivity between qubits remains a major challenge, with serious implications for the efficiency of AQC/QA approaches. In this talk, we will introduce a novel computing system, the coherent Ising machine, and describe its theoretical and experimental performance. We start with the physics of quantum-to-classical crossover as a computational mechanism and how to construct physical devices that act as quantum neurons and synapses. We show performance comparisons against various classical neural network models implemented as algorithms on CPUs and supercomputers. We end the talk by introducing the portal of the QNNCloud service system, which is based on coherent Ising machines.
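
For context, coherent Ising machines target problems cast in the Ising form, whose objective for spins s_i in {-1, +1} with couplings J_ij and fields h_i is E(s) = -1/2 Σ_ij J_ij s_i s_j - Σ_i h_i s_i. Below is a minimal classical simulated-annealing baseline for this objective, included only to make the problem concrete; it does not model the optical quantum-to-classical dynamics described in the talk, and all names and parameters are illustrative.

```python
import numpy as np

def ising_energy(J, h, s):
    """Ising energy E(s) = -1/2 * s^T J s - h^T s for spins s in {-1, +1}^N."""
    return -0.5 * s @ J @ s - h @ s

def simulated_annealing(J, h, n_sweeps=2000, T0=2.0, rng=None):
    """Classical single-spin-flip annealing baseline for the Ising problem."""
    rng = np.random.default_rng(rng)
    N = len(h)
    s = rng.choice([-1, 1], size=N)
    for t in range(n_sweeps):
        T = T0 * (1 - t / n_sweeps) + 1e-3   # linear cooling schedule
        i = rng.integers(N)
        dE = 2 * s[i] * (J[i] @ s + h[i])    # energy change from flipping spin i
        if dE < 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
    return s, ising_energy(J, h, s)

# Example: a random dense coupling matrix -- the dense-connectivity regime
# where qubit-embedding overhead hurts AQC/QA hardware.
rng = np.random.default_rng(0)
N = 100
J = rng.normal(size=(N, N)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
h = np.zeros(N)
spins, energy = simulated_annealing(J, h, rng=1)
```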

Date and Time: 
Monday, January 29, 2018 - 2:00pm
Venue: 
Packard 204

SCIEN Talk: Drone IoT Networks for Virtual Human Teleportation

Topic: 
Drone IoT Networks for Virtual Human Teleportation
Abstract / Description: 

Cyber-physical/human systems (CPS/CHS) are set to play an increasingly visible role in our lives, advancing research and technology across diverse disciplines. I am exploring novel synergies between three emerging CPS/CHS technologies of prospectively broad societal impact: virtual/augmented reality (VR/AR), the Internet of Things (IoT), and autonomous micro-aerial robots (UAVs). My long-term research objective is UAV-IoT-deployed ubiquitous VR/AR immersive communication that can enable virtual human teleportation to any corner of the world, thereby achieving a broad range of technological and societal advances that will enhance energy conservation, quality of life, and the global economy.
I am investigating fundamental problems at the intersection of signal acquisition and representation, communications and networking, (embedded) sensors and systems, and rigorous machine learning for stochastic control that arise in this context. I envision a future where UAV-IoT-deployed immersive communication systems will break existing barriers in remote sensing, monitoring, localization and mapping, navigation, and scene understanding. The presentation will outline some of my present and envisioned investigations. Interdisciplinary applications will be highlighted.

Date and Time: 
Wednesday, March 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Temporal coding of volumetric imagery

Topic: 
Temporal coding of volumetric imagery
Abstract / Description: 

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned or captured in parallel and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This talk describes systems and methods with which to efficiently detect and visualize image volumes by temporally encoding the extra dimensions' information into 2D measurements or displays. Some highlights of my research include video and 3D recovery from photographs, and true-3D augmented reality image display by time multiplexing. In the talk, I show how temporal optical coding can improve system performance, battery life, and hardware simplicity for a variety of platforms and applications.
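
As one concrete illustration of the general idea of temporal coding, the sketch below shows a coded temporal multiplexing forward model, in which known per-frame masks collapse a time-varying image volume into a single 2-D snapshot. The specific systems in the talk may differ; the function and variable names here are hypothetical.

```python
import numpy as np

def coded_temporal_measurement(volume, masks):
    """Collapse a (T, H, W) image volume into a single 2-D measurement.

    Each temporal slice is modulated by a known binary mask before the sensor
    integrates over time:  y[h, w] = sum_t masks[t, h, w] * volume[t, h, w].
    """
    return np.sum(masks * volume, axis=0)

# Toy example: an 8-frame volume encoded into one coded 2-D snapshot.
rng = np.random.default_rng(0)
T, H, W = 8, 64, 64
volume = rng.random((T, H, W))               # unknown image volume (e.g., video)
masks = rng.integers(0, 2, size=(T, H, W))   # known per-frame binary codes
snapshot = coded_temporal_measurement(volume, masks)

# Recovery is an underdetermined inverse problem: estimate `volume` from
# `snapshot` and `masks`, typically with sparsity or learned priors.
```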

Date and Time: 
Wednesday, March 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism

Topic: 
ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism
Abstract / Description: 

Computer-graphics engineers and vision scientists want to generate images that reproduce realistic depth-dependent blur. Current rendering algorithms take into account scene geometry, aperture size, and focal distance, and they produce photorealistic imagery, as with a high-quality camera. But to create immersive experiences, rendering algorithms should instead aim for perceptual realism, and in doing so they should take into account the significant optical aberrations of the human eye. We developed a method that, by incorporating some of those aberrations, yields displayed images that produce retinal images much closer to the ones that occur in natural viewing. In particular, we create displayed images that take the eye's chromatic aberration into account. This produces different chromatic effects in the retinal image for objects farther or nearer than the current focus. We call the method ChromaBlur. We conducted two experiments that illustrate the benefits of ChromaBlur. The first showed that accommodation (eye focusing) is driven quite effectively when ChromaBlur is used and not driven at all when conventional methods are used. The second showed that perceived depth and realism are greater with imagery created by ChromaBlur than with imagery created conventionally. ChromaBlur can be coupled with focus-adjustable lenses and gaze tracking to reproduce the natural relationship between accommodation and blur in HMDs and other immersive devices. It can thereby minimize the adverse effects of vergence-accommodation conflicts.
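
A rough sketch of the core idea, assuming a simplified per-channel defocus model: because the eye's longitudinal chromatic aberration places the red, green, and blue focal planes at slightly different dioptric distances, rendering each channel with its own defocus blur gives the retina a sign-of-defocus cue like the one it receives in natural viewing. The numbers and function below are illustrative stand-ins, not the calibrated model from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Approximate longitudinal chromatic aberration of the eye, in diopters,
# relative to the green channel (illustrative values, not the paper's model).
CHROMATIC_OFFSET_D = {"r": -0.4, "g": 0.0, "b": +0.7}

def chromatic_defocus_render(rgb, defocus_diopters, blur_per_diopter=2.0):
    """Blur each color channel by an amount that depends on its own defocus.

    rgb              : (H, W, 3) image of a fronto-parallel object
    defocus_diopters : object vergence minus current focus, in diopters
    blur_per_diopter : pixels of Gaussian blur per diopter of defocus (illustrative)
    """
    out = np.empty_like(rgb, dtype=float)
    for c, channel in enumerate("rgb"):
        d = defocus_diopters + CHROMATIC_OFFSET_D[channel]
        sigma = abs(d) * blur_per_diopter
        out[..., c] = gaussian_filter(rgb[..., c].astype(float), sigma)
    return out

# An object defocused by +0.5 D: red ends up sharper than blue; flipping the
# sign of the defocus flips which channel is sharper. That asymmetry is the
# cue that conventional (achromatic) defocus blur lacks.
image = np.random.rand(128, 128, 3)
rendered = chromatic_defocus_render(image, defocus_diopters=0.5)
```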

Date and Time: 
Wednesday, February 28, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Data-driven Computational Imaging

Topic: 
Data-driven Computational Imaging
Abstract / Description: 

Between ever-increasing pixel counts, ever-cheaper sensors, and the ever-expanding World Wide Web, natural image data has become plentiful. These vast quantities of data, be they high-frame-rate videos or huge curated datasets like ImageNet, stand to substantially improve the performance and capabilities of computational imaging systems. However, using this data efficiently presents its own unique set of challenges. In this talk, I will show how data can be used to develop better priors, improve reconstructions, and enable new capabilities for computational imaging systems.
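
As a generic illustration of how a data-driven prior can enter a reconstruction (not a description of the speaker's methods), the sketch below alternates a data-fidelity step for linear measurements with a denoiser that stands in for a prior learned from large image datasets. All names and the toy denoiser are hypothetical.

```python
import numpy as np

def reconstruct_with_learned_prior(y, A, denoiser, n_iters=50, step=0.1):
    """Plug-and-play style reconstruction: alternate a data-fidelity gradient
    step for measurements y = A @ x with a denoiser that encodes an image
    prior learned from data. `denoiser` is any callable x -> x_clean
    (e.g., a network trained on a large natural-image dataset).
    """
    x = A.T @ y                              # simple initialization
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)     # pull x toward the measurements
        x = denoiser(x)                      # pull x toward the learned prior
    return x

# Toy usage with a hand-made "denoiser" standing in for a learned model.
rng = np.random.default_rng(0)
n, m = 64, 32
A = rng.normal(size=(m, n)) / np.sqrt(m)     # compressive measurement matrix
x_true = np.zeros(n); x_true[rng.choice(n, 5, replace=False)] = 1.0
y = A @ x_true
soft_threshold = lambda x: np.sign(x) * np.maximum(np.abs(x) - 0.05, 0.0)
x_hat = reconstruct_with_learned_prior(y, A, denoiser=soft_threshold)
```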

Date and Time: 
Wednesday, February 21, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Accelerated Computing for Light Field and Holographic Displays

Topic: 
Accelerated Computing for Light Field and Holographic Displays
Abstract / Description: 

In this talk, I will present two papers recently published at SIGGRAPH Asia 2017. In the first, we present a 4D light field sampling and rendering system for light field displays that supports both foveation and accommodation, reducing rendering cost while maintaining perceptual quality and comfort. In the second, we present a light-field-based computer-generated holography (CGH) rendering pipeline that reproduces high-definition 3D scenes with continuous depth and supports intra-pupil, view-dependent occlusion. Our rendering pipeline and Fresnel integral accurately account for diffraction and support various types of reference illumination for holograms.
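
For readers unfamiliar with Fresnel-based CGH, the sketch below shows a textbook Fresnel transfer-function propagation step of the kind such pipelines build on. It is a generic illustration with made-up parameters, not the rendering pipeline from the paper.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Propagate a complex optical field by distance z using the Fresnel
    transfer function applied in the Fourier domain.

    field      : (N, N) complex array, sampled on a grid with pitch dx (meters)
    wavelength : wavelength in meters
    z          : propagation distance in meters
    """
    N = field.shape[0]
    fx = np.fft.fftfreq(N, d=dx)             # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy hologram step: propagate a small square aperture 10 cm at 532 nm.
N, dx = 512, 8e-6
x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
aperture = (np.abs(X) < 0.2e-3) & (np.abs(Y) < 0.2e-3)
field_at_hologram = fresnel_propagate(aperture.astype(complex), 532e-9, 0.10, dx)
intensity = np.abs(field_at_hologram) ** 2   # diffraction pattern at the hologram plane
```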

Date and Time: 
Wednesday, February 14, 2018 - 4:30pm
Venue: 
Packard 101

EE380 Computer Systems Colloquium: Computational Memory

Topic: 
Computational Memory
Abstract / Description: 

Description TBA


The Stanford EE Computer Systems Colloquium (EE380) meets on Wednesdays 4:30-5:45 throughout the academic year. Talks are given before a live audience in Room B03 in the basement of the Gates Computer Science Building on the Stanford Campus. The live talks (and the videos hosted at Stanford and on YouTube) are open to the public.

Date and Time: 
Wednesday, March 7, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Blockchain Technology

Topic: 
Blockchain Technology
Abstract / Description: 

Description TBA


The Stanford EE Computer Systems Colloquium (EE380) meets on Wednesdays 4:30-5:45 throughout the academic year. Talks are given before a live audience in Room B03 in the basement of the Gates Computer Science Building on the Stanford Campus. The live talks (and the videos hosted at Stanford and on YouTube) are open to the public.

Date and Time: 
Wednesday, February 14, 2018 - 4:30pm
Venue: 
Gates B03

EE380 Computer Systems Colloquium: Computer Architectures and Data Security

Topic: 
Computer Architectures and Data Security
Abstract / Description: 

Microprocessor side-channel attacks: Meltdown, Spectre, and more


The Stanford EE Computer Systems Colloquium (EE380) meets on Wednesdays 4:30-5:45 throughout the academic year. Talks are given before a live audience in Room B03 in the basement of the Gates Computer Science Building on the Stanford Campus. The live talks (and the videos hosted at Stanford and on YouTube) are open to the public.

Date and Time: 
Wednesday, January 31, 2018 - 4:30pm
Venue: 
Gates B03
