SCIEN Talk

Topic: 
Human-centric optical design: a key for next generation AR and VR optics
Abstract / Description: 

The ultimate wearable display is an information device that people can use all day. It should be as forgettable as a pair of glasses or a watch, but more useful than a smartphone. It should be small, light, low-power, and high-resolution, and it should have a large field of view (FOV). Oh, and one more thing: it should be able to switch from VR to AR.

These requirements pose challenges for hardware and, most importantly, optical design. In this talk, I will review existing AR and VR optical architectures and explain why it is difficult to create a small, light, high-resolution display with a wide FOV. Because comfort is king, new optical designs for next-generation AR and VR systems should be guided by an understanding of the capabilities and limitations of the human visual system.

Date and Time: 
Wednesday, October 5, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

Events taking place around the world are listed on the IEEE Information Theory Society website.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A

SCIEN

Topic: 
Machine learning for large-scale image understanding
Abstract / Description: 

The recent progress in recognizing visual objects and annotating images has been driven by super-rich models and massive datasets. However, machine vision models still have a very limited 'understanding' of images, rendering them brittle when attempting to generalize to unseen examples. I will describe recent efforts to improve the robustness and accuracy of systems for annotating and retrieving images, first, by using structure in the space of images and fusing various types of information about image labels, and second, by matching structures in visual scenes to structures in their corresponding language descriptions or queries. We apply these approaches to billions of queries and images, to improve search and annotation of public images and personal photos.

Date and Time: 
Wednesday, May 11, 2016 - 4:15pm to 5:15pm
Venue: 
Packard 101

SCIEN

Topic: 
Learning the image processing pipeline
Abstract / Description: 

Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image-processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. I explain a method, combining machine learning and image systems simulation, that automates pipeline design. The approach is based on a new way of thinking about the image-processing pipeline as a large collection of local linear filters. Finally, I illustrate how the method has been used to design pipelines for consumer photography and mobile imaging.
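
To make the local-linear-filter idea concrete, here is a minimal Python/NumPy sketch under assumed choices (the patch size, the per-patch classification rule, the class count, and the training targets are placeholders, not the speaker's actual method): each raw-sensor neighborhood is assigned a class, one linear transform is fit per class from simulated sensor/target pairs, and rendering applies the corresponding filter to each patch.

import numpy as np

def extract_patches(raw, size=5):
    # Collect the size x size neighborhood around every interior pixel.
    r = size // 2
    h, w = raw.shape
    return np.stack([raw[y - r:y + r + 1, x - r:x + r + 1].ravel()
                     for y in range(r, h - r) for x in range(r, w - r)])

def train_filters(patches, targets, classes, num_classes):
    # Fit one least-squares linear transform per patch class, using simulated
    # sensor neighborhoods (patches) and the desired rendered values (targets).
    filters = []
    for k in range(num_classes):
        idx = classes == k
        W, *_ = np.linalg.lstsq(patches[idx], targets[idx], rcond=None)
        filters.append(W)
    return filters

def render(patches, classes, filters):
    # Apply to each patch the filter belonging to its class.
    out = np.empty((len(patches), filters[0].shape[1]))
    for k, W in enumerate(filters):
        idx = classes == k
        out[idx] = patches[idx] @ W
    return out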

Date and Time: 
Wednesday, April 27, 2016 - 4:15pm to 5:15pm
Venue: 
Packard 101

SCIEN

Topic: 
Extreme Computational Photography
Abstract / Description: 

The Camera Culture Group at the MIT Media Lab aims to create a new class of imaging platforms. This talk will discuss three tracks of research: femto photography, retinal imaging, and 3D displays.
Femto photography combines femtosecond laser illumination, picosecond-accurate detectors, and mathematical reconstruction techniques that allow researchers to visualize the propagation of light. Direct recording of reflected or scattered light at such a frame rate with sufficient brightness is nearly impossible. Using an indirect 'stroboscopic' method that records millions of repeated measurements by carefully scanning in time and viewpoint, we can rearrange the data to create a 'movie' of a nanosecond-long event. Femto photography and a new generation of nano-photography (using ToF cameras) allow powerful inference with computer vision in the presence of scattering.
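
As a rough illustration of the "record many repetitions, then rearrange" idea (a toy sketch with assumed array shapes, not the actual capture pipeline from the talk), the snippet below averages repeated exposures for each scan position and then reorders the scanned data into time-indexed frames of the light-transport movie.

import numpy as np

def average_repetitions(exposures):
    # exposures: (num_repetitions, num_time_bins, num_columns) records of the
    # same repeating light pulse at one scan position; averaging boosts SNR.
    return np.asarray(exposures).mean(axis=0)

def assemble_movie(scans):
    # scans: (num_scan_positions, num_time_bins, num_columns), one averaged
    # streak record per scanned position. Reordering the axes yields frames
    # indexed by time: (num_time_bins, num_scan_positions, num_columns).
    return np.transpose(np.asarray(scans), (1, 0, 2))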

EyeNetra is a mobile phone attachment that allows users to test their own eyesight. The device reveals corrective measures, thus bringing vision correction to billions of people who would otherwise not have had access. Another project, eyeMITRA, is a mobile retinal imaging solution that brings retinal exams into the realm of routine care by lowering the cost of the imaging device to a tenth of its current cost and integrating it with image analysis software and predictive analytics. This enables early detection of diabetic retinopathy, which can change the arc of growth of the world's largest cause of blindness.

Finally, the talk will describe novel light-field cameras and light-field displays that require a compressive optical architecture to deal with the high bandwidth requirements of 4D signals.

Date and Time: 
Wednesday, April 20, 2016 - 4:15pm to 5:15pm
Venue: 
Packard 101

SCIEN Talk

Topic: 
Compressive light-field microscopy for 3D functional imaging of the living brain
Abstract / Description: 

We present a new microscopy technique for 3D functional neuroimaging in live brain tissue. The device is a simple light-field fluorescence microscope that allows full-volume acquisition in a single shot and can be miniaturized into a portable implant. Our computational methods first rely on the spatial and temporal sparsity of fluorescence signals to identify and precisely localize neurons. For each neuron we compute a unique pattern, the light-field signature, that accounts for the effects of optical scattering and aberrations. The technique then yields a precise localization of active neurons and enables quantitative measurement of fluorescence at individual-neuron spatial resolution and at high speed, all without ever reconstructing a volume image. Experimental results are shown on live zebrafish.
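
As a toy illustration of signature-based readout (the variable names and the solver choice are assumptions for this sketch, not the authors' implementation): if each column of A holds one neuron's light-field signature and y is a raw sensor frame, a non-negative least-squares fit recovers per-neuron fluorescence directly from the sensor data, with no volume reconstruction.

import numpy as np
from scipy.optimize import nnls

def demix_frame(A, y):
    # A: (num_sensor_pixels, num_neurons), columns are light-field signatures.
    # y: (num_sensor_pixels,) raw light-field measurement at one time point.
    # Returns the non-negative fluorescence level of each neuron.
    activity, _ = nnls(A, y)
    return activity

# Synthetic check: recover known activity from a noisy measurement.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(500, 20)))
true_activity = np.abs(rng.normal(size=20))
y = A @ true_activity + 0.01 * rng.normal(size=500)
estimated = demix_frame(A, y)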

More Information: www.nicolaspegard.com

Date and Time: 
Wednesday, April 6, 2016 - 4:15pm to 5:15pm
Venue: 
Packard 101

SCIEN

Topic: 
Cinematic Virtual Reality: Creating Immersive Visual Experiences
Abstract / Description: 

Historically, virtual reality (VR) with head-mounted displays (HMDs) has been associated with computer-generated content and gaming applications. However, recent advances in 360-degree cameras facilitate omnidirectional capture of real-world environments, creating content to be viewed on HMDs, a technology referred to as cinematic VR. This can be used to immerse the user in, for instance, a concert or a sports event. The main focus of this talk will be on data representations for creating such immersive experiences.

In cinematic VR, videos are usually represented in a spherical format to account for all viewing directions. To achieve high-quality streaming of such videos to millions of users, it is crucial to consider efficient representations for this type of data, in order to maximize compression efficiency under resource constraints such as the number of pixels and the bitrate. We formulate the choice of representation as a multi-dimensional, multiple-choice knapsack problem and show that the resulting representations adapt well to varying content.
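
A small, self-contained illustration of the multiple-choice knapsack formulation (the segment names, costs, and quality scores below are invented for the sketch, and a production system would use a proper solver rather than exhaustive search): pick exactly one representation per segment to maximize total quality under pixel and bitrate budgets.

from itertools import product

# Each candidate is (pixels in megapixels, bitrate in Mbps, quality score).
candidates = {
    "segment_a": [(2.0, 4.0, 0.70), (4.1, 8.0, 0.82), (8.3, 16.0, 0.90)],
    "segment_b": [(2.0, 3.0, 0.65), (4.1, 6.0, 0.80), (8.3, 12.0, 0.88)],
}

def best_selection(candidates, pixel_budget, bitrate_budget):
    # Exhaustively try one choice per segment; keep the feasible combination
    # with the highest total quality.
    names = list(candidates)
    best, best_quality = None, float("-inf")
    for choice in product(*(candidates[n] for n in names)):
        pixels = sum(c[0] for c in choice)
        bitrate = sum(c[1] for c in choice)
        quality = sum(c[2] for c in choice)
        if pixels <= pixel_budget and bitrate <= bitrate_budget and quality > best_quality:
            best, best_quality = dict(zip(names, choice)), quality
    return best, best_quality

selection, total_quality = best_selection(candidates, pixel_budget=10.0, bitrate_budget=18.0)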

Existing cinematic VR systems update the viewports according to head rotation but do not support head translation or focus cues. We propose a new 3D video representation, referred to as a depth-augmented stereo panorama, to address this issue. We show that this representation can successfully induce head-motion parallax within a predefined operating range, as well as generate light fields across the observer's pupils, suitable for use with emerging light-field HMDs.
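
A rough sketch of the depth-based warping that such a representation makes possible (the equirectangular layout and variable names are assumptions for illustration; the rendering described in the talk is more involved): given per-pixel depth and a small head translation, back-project each panorama pixel to 3D, shift it by the head offset, and recompute its viewing direction.

import numpy as np

def reproject_panorama(depth, head_offset):
    # depth: (height, width) per-pixel depth in meters from the augmented panorama.
    # head_offset: (3,) head translation in meters, small relative to scene depth.
    # Returns the new (longitude, latitude) of every pixel; a renderer would use
    # these angles to warp the color panorama and induce motion parallax.
    height, width = depth.shape
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Viewing directions on the unit sphere (equirectangular layout).
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    # Back-project to 3D, shift the viewpoint, and recompute the angles.
    points = dirs * depth[..., None] - np.asarray(head_offset)
    new_lon = np.arctan2(points[..., 0], points[..., 2])
    new_lat = np.arcsin(points[..., 1] / np.linalg.norm(points, axis=-1))
    return new_lon, new_lat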

Date and Time: 
Wednesday, March 30, 2016 - 4:15pm to 5:15pm
Venue: 
Packard 101

SCIEN

Topic: 
Learning the image processing pipeline
Abstract / Description: 

Many creative ideas are being proposed for image sensor designs, and these may be useful in applications ranging from consumer photography to computer vision. To understand and evaluate each new design, we must create a corresponding image-processing pipeline that transforms the sensor data into a form that is appropriate for the application. Designing and optimizing these pipelines is time-consuming and costly. I explain a method, combining machine learning and image systems simulation, that automates pipeline design. The approach is based on a new way of thinking about the image-processing pipeline as a large collection of local linear filters. Finally, I illustrate how the method has been used to design pipelines for consumer photography and mobile imaging.

Date and Time: 
Wednesday, March 2, 2016 - 4:15pm to 5:15pm
Venue: 
Packard 101
