SCIEN Talk

SCIEN Talk: Focal Surface Displays

Topic: 
Focal Surface Displays
Abstract / Description: 

Conventional binocular head-mounted displays (HMDs) vary the stimulus to vergence with the information in the picture, while the stimulus to accommodation remains fixed at the apparent distance of the display, as created by the viewing optics. Sustained vergence-accommodation conflict (VAC) has been associated with visual discomfort, motivating numerous proposals for delivering near-correct accommodation cues. We introduce focal surface displays to meet this challenge, augmenting conventional HMDs with a phase-only spatial light modulator (SLM) placed between the display screen and viewing optics. This SLM acts as a dynamic freeform lens, shaping synthesized focal surfaces to conform to the virtual scene geometry. We introduce a framework to decompose target focal stacks and depth maps into one or more pairs of piecewise smooth focal surfaces and underlying display images. We build on recent developments in "optimized blending" to implement a multifocal display that allows the accurate depiction of occluding, semi-transparent, and reflective objects. Practical benefits over prior accommodation-supporting HMDs are demonstrated using a binocular focal surface display employing a liquid crystal on silicon (LCOS) phase SLM and an organic light-emitting diode (OLED) display.
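The decomposition step described above can be illustrated with a much simpler special case: assigning each pixel of a depth map to the nearest of a fixed set of focal planes. This is only a rough sketch under that simplifying assumption (the paper optimizes smooth focal surfaces, not fixed planes), and all names and parameters here are hypothetical:

```python
import numpy as np

def decompose_to_planes(image, depth_diopters, plane_diopters):
    """Assign each pixel to the nearest focal plane (in diopters) and
    return one display image per plane. A crude stand-in for the paper's
    optimization-based decomposition into piecewise smooth focal surfaces."""
    planes = np.asarray(plane_diopters, dtype=float)
    # Index of the nearest focal plane for every pixel.
    idx = np.abs(depth_diopters[..., None] - planes).argmin(axis=-1)
    layers = np.zeros((len(planes),) + image.shape, dtype=float)
    for k in range(len(planes)):
        layers[k][idx == k] = image[idx == k]  # copy pixels onto their plane
    return layers
```

Summing the returned layers reconstructs the input image; each layer would be shown at its corresponding focal distance.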

Date and Time: 
Wednesday, October 11, 2017 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Computational Near-Eye Displays

Topic: 
Computational Near-Eye Displays
Abstract / Description: 

Virtual reality is a new medium that provides unprecedented user experiences. Eventually, VR/AR systems will redefine communication, entertainment, education, collaborative work, simulation, training, telesurgery, and basic vision research. In all of these applications, the primary interface between the user and the digital world is the near-eye display. While today's VR systems struggle to provide natural and comfortable viewing experiences, next-generation computational near-eye displays have the potential to provide visual experiences that are better than the real world. In this talk, we explore the frontiers of VR/AR systems engineering and discuss next-generation near-eye display technology, including gaze-contingent focus, light field displays, monovision, holographic near-eye displays, and accommodation-invariant near-eye displays.

Date and Time: 
Wednesday, October 4, 2017 - 4:30pm
Venue: 
Packard 101

Computational Imaging for Robotic Vision [SCIEN]

Topic: 
Computational Imaging for Robotic Vision
Abstract / Description: 

This talk argues for combining the fields of robotic vision and computational imaging. Both consider the joint design of hardware and algorithms, but with dramatically different approaches and results. Roboticists seldom design their own cameras, and computational imaging seldom considers performance in terms of autonomous decision-making. The union of these fields considers whole-system design from optics to decisions, yielding impactful sensors that offer greater autonomy and robustness, especially in challenging imaging conditions. Motivating examples are drawn from autonomous ground and underwater robotics, and the talk concludes with recent advances in the design and evaluation of novel cameras for robotics applications.

Date and Time: 
Wednesday, June 7, 2017 - 4:30pm
Venue: 
Packard 101

Carl Zeiss Smart Glasses [SCIEN Talk]

Topic: 
Carl Zeiss Smart Glasses
Abstract / Description: 

Kai Stroeder, Managing Director at Carl Zeiss Smart Optics GmbH, will talk about the Carl Zeiss Smart Glasses.

This will be an informal session with an introduction and prototype demo of the Smart Glasses and an open discussion about future directions and applications.

Date and Time: 
Tuesday, May 30, 2017 - 10:00am
Venue: 
Lucas Center for Imaging, P083

FusionNet: 3D Object Classification Using Multiple Data Representations [SCIEN]

Topic: 
FusionNet: 3D Object Classification Using Multiple Data Representations
Abstract / Description: 

High-quality 3D object recognition is an important component of many vision and robotics systems. We tackle the object recognition problem using two data representations: a volumetric representation, in which the 3D object is discretized spatially into binary voxels (1 if the voxel is occupied, 0 otherwise), and a pixel representation, in which the 3D object is rendered as a set of projected 2D pixel images. At the time of submission, we obtained leading results on the Princeton ModelNet challenge. Some of the best deep learning architectures for classifying 3D CAD models use Convolutional Neural Networks (CNNs) on the pixel representation, as seen on the ModelNet leaderboard. Diverging from this trend, we combine both representations and exploit them to learn new features, yielding a significantly better classifier than using either representation in isolation. To do this, we introduce new Volumetric CNN (V-CNN) architectures.
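The volumetric representation in the abstract is a binary occupancy grid. A minimal sketch of building one from a point cloud, assuming a simple bounding-box normalization (the function name, grid size, and normalization scheme are illustrative, not the paper's):

```python
import numpy as np

def voxelize(points, grid=32):
    """Discretize a point cloud into a binary occupancy grid:
    1 if any point falls inside the voxel, 0 otherwise."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    # Scale each axis so the bounding box maps onto [0, grid - 1].
    scale = (grid - 1) / np.maximum(hi - lo, 1e-9)
    ijk = ((pts - lo) * scale).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    vox[ijk[:, 0], ijk[:, 1], ijk[:, 2]] = 1
    return vox
```

A grid like this is what a Volumetric CNN consumes as input, in place of rendered 2D views.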

Date and Time: 
Wednesday, May 31, 2017 - 4:30pm
Venue: 
Packard 101

Hyperspectral Imaging Using Polarization Interferometry [SCIEN]

Topic: 
Hyperspectral Imaging Using Polarization Interferometry
Abstract / Description: 

Polarization interferometers use birefringent crystals to generate an optical path delay between two polarizations of light. In this talk I will describe how I have employed polarization interferometry to build two kinds of Fourier imaging spectrometers: in one case by temporally scanning the optical path delay with a liquid crystal cell, and in the other by using relative motion between scene and detector to spatially scan the optical path delay through a position-dependent wave plate.
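At the heart of any Fourier-transform spectrometer is the Fourier relation between the interferogram (intensity versus optical path delay) and the spectrum. A minimal sketch of that recovery step, assuming uniformly sampled delays (function names and normalization are illustrative):

```python
import numpy as np

def recover_spectrum(interferogram, delays):
    """Recover a spectrum from an interferogram sampled at uniform
    optical path delays, via the Fourier transform relation."""
    n = len(interferogram)
    ac = interferogram - interferogram.mean()      # remove the DC pedestal
    spectrum = np.abs(np.fft.rfft(ac)) / n         # magnitude spectrum
    freqs = np.fft.rfftfreq(n, d=delays[1] - delays[0])
    return freqs, spectrum
```

Feeding in an interferogram containing a single cosine fringe returns a spectrum peaked at the corresponding optical frequency.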

Date and Time: 
Wednesday, May 17, 2017 - 4:30pm
Venue: 
Packard 101

Heterogeneous Computational Imaging [SCIEN Talk]

Topic: 
Heterogeneous Computational Imaging
Abstract / Description: 

Modern systems-on-a-chip (SoCs) contain many different types of processors that could be used in computational imaging. Unfortunately, they all have different programming models and are thus difficult to optimize as a system. In this talk we discuss various standards (OpenCL, OpenVX) and domain-specific programming languages (Halide, ProxImaL) that make it easier to accelerate processing for computational imaging.
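The kind of stencil pipeline these languages are designed to schedule can be sketched in plain NumPy; a DSL like Halide expresses the same algorithm but separately chooses a schedule (tiling, vectorization, which SoC processor runs each stage) instead of baking those choices into the code. A minimal sketch of the canonical example, a separable 3x3 box blur (names are illustrative):

```python
import numpy as np

def blur3(img):
    """3x3 separable box blur: a horizontal 3-tap average followed by
    a vertical one. Output shrinks by 2 pixels per axis (valid region)."""
    img = np.asarray(img, dtype=float)
    h = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0   # horizontal pass
    return (h[:-2] + h[1:-1] + h[2:]) / 3.0               # vertical pass
```

Optimizing where the intermediate `h` lives (recomputed per tile, cached per scanline, offloaded to a GPU) is exactly the scheduling problem such DSLs automate across heterogeneous processors.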

Date and Time: 
Wednesday, May 3, 2017 - 4:30pm
Venue: 
Packard 101

Deep Learning Imaging Applications [SCIEN Talk]

Topic: 
Deep Learning Imaging Applications
Abstract / Description: 

Deep learning has driven huge progress in visual object recognition over the last five years, but recognition is only one aspect of its application to imaging. This talk will provide a brief overview of deep learning and artificial neural networks in computer vision before delving into the wide range of applications Google has pursued in this area. Topics will include image summarization, image augmentation, artistic style transfer, and medical diagnostics.

Date and Time: 
Wednesday, April 26, 2017 - 4:30pm
Venue: 
Packard 101

Monitorless Workspaces and Operating Rooms of the Future: Virtual/Augmented Reality through Multiharmonic Lock-In Amplifiers [SCIEN Talk]

Topic: 
Monitorless Workspaces and Operating Rooms of the Future: Virtual/Augmented Reality through Multiharmonic Lock-In Amplifiers
Abstract / Description: 

In my childhood I invented a new kind of lock-in amplifier and used it as the basis for the world's first wearable augmented reality computer (http://wearcam.org/par). This allowed me to see radio waves, sound waves, and electrical signals inside the human body, all aligned perfectly with the physical space in which they were present. I built this equipment into special electric eyeglasses that automatically adjusted their convergence and focus to match their surroundings. By shearing the spacetime continuum one sees a stroboscopic vision in coordinates in which the speed of light, sound, or wave propagation is exactly zero (http://wearcam.org/kineveillance.pdf), or slowed down, making these signals visible to radio engineers, sound engineers, neurosurgeons, and the like. See the attached picture of a violin mounted on the desk in my office at Meta, where we're creating the future of computing based on Human-in-the-Loop Intelligence (https://en.wikipedia.org/wiki/Humanistic_intelligence).
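The basic lock-in amplifier principle underlying this work is simple to sketch: multiply the measured signal by quadrature references at the frequency of interest and average, which rejects everything except the component at that frequency. This is only a single-frequency illustration, not Mann's multiharmonic design, and all names are hypothetical:

```python
import numpy as np

def lock_in(signal, reference_hz, sample_hz):
    """Dual-phase lock-in amplifier sketch: demodulate with quadrature
    references and average to recover amplitude and phase at the
    reference frequency."""
    t = np.arange(len(signal)) / sample_hz
    i = np.mean(signal * np.cos(2 * np.pi * reference_hz * t))  # in-phase
    q = np.mean(signal * np.sin(2 * np.pi * reference_hz * t))  # quadrature
    amplitude = 2 * np.hypot(i, q)
    phase = np.arctan2(-q, i)
    return amplitude, phase
```

Sweeping the reference phase or frequency while overlaying the demodulated output on the scene is what produces the "frozen wave" stroboscopic visualizations described above.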

More Information: http://weartech.com/bio.htm

Date and Time: 
Wednesday, April 19, 2017 - 4:30pm
Venue: 
Packard 101

Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision [SCIEN]

Topic: 
Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision
Abstract / Description: 

Imaging has become an essential part of how we communicate with each other, how autonomous agents sense the world and act independently, and how we research chemical reactions and biological processes. Today's imaging and computer vision systems, however, often fail in critical scenarios, for example in low light or in fog. This is due to ambiguity in the captured images, introduced partly by imperfect capture systems, such as cellphone optics and sensors, and partly inherent in the signal itself before measurement, such as photon shot noise. This ambiguity makes imaging with conventional cameras challenging, e.g. low-light cellphone imaging, and it makes high-level computer vision tasks, such as scene segmentation and understanding, difficult.

In this talk, I will present several examples of algorithms that computationally resolve this ambiguity and make sensing and vision systems robust. These methods rely on three key ingredients: accurate probabilistic forward models, learned priors, and efficient large-scale optimization methods. In particular, I will show how to achieve better low-light imaging using cell-phones (beating Google's HDR+), and how to classify images at 3 lux (substantially outperforming very deep convolutional networks, such as the Inception-v4 architecture). Using a similar methodology, I will discuss ways to miniaturize existing camera systems by designing ultra-thin, focus-tunable diffractive optics. Finally, I will present new exotic imaging modalities which enable new applications at the forefront of vision and imaging, such as seeing through scattering media and imaging objects outside direct line of sight.
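The first of the three ingredients above, an accurate probabilistic forward model, can be sketched for the low-light case: photon shot noise is Poisson in the photon count, with additive Gaussian read noise from the sensor electronics. This is a generic textbook model with illustrative constants, not the speaker's specific formulation:

```python
import numpy as np

def low_light_measurement(radiance, photons_per_unit, read_noise_sigma, rng):
    """Probabilistic forward model of a low-light sensor measurement:
    Poisson shot noise on the expected photon count, plus Gaussian
    read noise."""
    expected_photons = photons_per_unit * np.asarray(radiance, dtype=float)
    shot = rng.poisson(expected_photons)                     # photon shot noise
    read = rng.normal(0.0, read_noise_sigma, shot.shape)     # sensor read noise
    return shot + read
```

Inverting such a model (e.g. by MAP estimation with a learned prior) is what turns a noisy low-light capture back into a clean image.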

Date and Time: 
Wednesday, April 12, 2017 - 4:30pm
Venue: 
Packard 101
