SCIEN Talk

Carl Zeiss Smart Glasses [SCIEN Talk]

Topic: 
Carl Zeiss Smart Glasses
Abstract / Description: 

Kai Stroeder, Managing Director at Carl Zeiss Smart Optics GmbH, will talk about the Carl Zeiss Smart Glasses.

This will be an informal session with an introduction and prototype demo of the Smart Glasses and an open discussion about future directions and applications.

Date and Time: 
Tuesday, May 30, 2017 - 10:00am
Venue: 
Lucas Center for Imaging, P083

FusionNet: 3D Object Classification Using Multiple Data Representations [SCIEN]

Topic: 
FusionNet: 3D Object Classification Using Multiple Data Representations
Abstract / Description: 

High-quality 3D object recognition is an important component of many vision and robotics systems. We tackle the object recognition problem using two data representations: a volumetric representation, in which the 3D object is discretized spatially as binary voxels (1 if the voxel is occupied, 0 otherwise), and a pixel representation, in which the 3D object is represented as a set of projected 2D pixel images. Some of the best deep learning architectures for classifying 3D CAD models use Convolutional Neural Networks (CNNs) on the pixel representation, as seen on the ModelNet leaderboard. Diverging from this trend, we combine both representations and exploit them to learn new features, which yields a significantly better classifier than either representation in isolation. To do this, we introduce new Volumetric CNN (V-CNN) architectures. At the time of submission, we had obtained leading results on the Princeton ModelNet challenge.
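
As a rough illustration of the late-fusion idea described above, here is a minimal sketch assuming PyTorch; the layer sizes and module names (VoxelBranch, ViewBranch, FusionClassifier) are invented for illustration, and this is not the authors' FusionNet or V-CNN architecture.

```python
# Illustrative late fusion of a volumetric branch and a multi-view branch
# (a sketch of the idea, not the authors' FusionNet/V-CNN).
import torch
import torch.nn as nn

class VoxelBranch(nn.Module):
    """Tiny volumetric CNN: binary 32x32x32 voxel grid -> feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, vox):                 # vox: (B, 1, 32, 32, 32)
        return self.fc(self.conv(vox).flatten(1))

class ViewBranch(nn.Module):
    """Tiny multi-view 2D CNN: V projected views -> pooled feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, views):               # views: (B, V, 1, H, W)
        b, v = views.shape[:2]
        f = self.conv(views.flatten(0, 1)).flatten(1)   # (B*V, 32)
        f = f.view(b, v, -1).max(dim=1).values          # pool across views
        return self.fc(f)

class FusionClassifier(nn.Module):
    """Concatenate both feature vectors and classify (late fusion)."""
    def __init__(self, n_classes=40):
        super().__init__()
        self.vox, self.view = VoxelBranch(), ViewBranch()
        self.head = nn.Linear(256, n_classes)

    def forward(self, vox, views):
        return self.head(torch.cat([self.vox(vox), self.view(views)], dim=1))

model = FusionClassifier()
logits = model(torch.rand(2, 1, 32, 32, 32).round(),  # binary voxels
               torch.rand(2, 6, 1, 64, 64))           # 6 projected views
print(logits.shape)                                   # torch.Size([2, 40])
```

Max-pooling across views follows the common multi-view CNN recipe; the exact fusion used in the paper may differ.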

Date and Time: 
Wednesday, May 31, 2017 - 4:30pm
Venue: 
Packard 101

Hyperspectral Imaging Using Polarization Interferometry [SCIEN]

Topic: 
Hyperspectral Imaging Using Polarization Interferometry
Abstract / Description: 

Polarization interferometers use birefringent crystals to generate an optical path delay between two polarizations of light. In this talk I will describe how I have employed polarization interferometry to build two kinds of Fourier imaging spectrometers: in one case by temporally scanning the optical path delay with a liquid crystal cell, and in the other by using relative motion between scene and detector to spatially scan the optical path delay through a position-dependent wave plate.
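
The underlying principle of Fourier-transform spectroscopy can be sketched in a few lines of numpy (a toy illustration under simplifying assumptions, not a model of the speaker's instruments): the detector records intensity as a function of optical path delay, and the source spectrum is recovered by Fourier transform.

```python
# Toy Fourier-transform spectroscopy: interferogram -> spectrum via FFT.
import numpy as np

wn = np.linspace(5000, 20000, 1000)            # wavenumbers [1/cm]
spectrum = np.exp(-((wn - 12000) / 800) ** 2)  # toy source spectrum

opd = np.linspace(0, 0.02, 2048)               # scanned optical path delay [cm]
# Interferogram: I(d) = sum_k S(k) * (1 + cos(2*pi*k*d))
fringes = 1 + np.cos(2 * np.pi * opd[:, None] * wn[None, :])
interferogram = (spectrum[None, :] * fringes).sum(axis=1)

# Recover the spectrum: Fourier transform of the mean-subtracted interferogram.
ac = interferogram - interferogram.mean()
recovered = np.abs(np.fft.rfft(ac))
freqs = np.fft.rfftfreq(opd.size, d=opd[1] - opd[0])  # [1/cm]
print(freqs[np.argmax(recovered)])             # peaks near 12000 1/cm
```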

Date and Time: 
Wednesday, May 17, 2017 - 4:30pm
Venue: 
Packard 101

Heterogeneous Computational Imaging [SCIEN Talk]

Topic: 
Heterogeneous Computational Imaging
Abstract / Description: 

Modern systems-on-a-chip (SoCs) contain many different types of processors that could be used for computational imaging. Unfortunately, they all have different programming models and are thus difficult to optimize as a system. In this talk we discuss standards (OpenCL, OpenVX) and domain-specific programming languages (Halide, ProxImaL) that make it easier to accelerate processing for computational imaging.
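
The following numpy sketch is not Halide or OpenVX; it only illustrates why such tools matter: the same algorithm, here a 3x3 box blur, admits very different execution strategies, and a language like Halide keeps the algorithm fixed while its "schedule" is retargeted to each processor.

```python
# Same algorithm, two execution strategies (the gap a scheduling DSL manages).
import numpy as np

def box_blur_naive(img):
    """Straightforward per-pixel loops: easy to write, slow to run."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = img[y:y + 3, x:x + 3].mean()
    return out

def box_blur_separable(img):
    """Same math, restructured as two 1-D passes (what a good schedule does)."""
    horiz = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    return (horiz[:-2] + horiz[1:-1] + horiz[2:]) / 3.0

img = np.random.rand(256, 256)
assert np.allclose(box_blur_naive(img), box_blur_separable(img))
```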

Date and Time: 
Wednesday, May 3, 2017 - 4:30pm
Venue: 
Packard 101

Deep Learning Imaging Applications [SCIEN Talk]

Topic: 
Deep Learning Imaging Applications
Abstract / Description: 

Deep learning has driven huge progress in visual object recognition over the last five years, but that is only one aspect of its application to imaging. This talk will provide a brief overview of deep learning and artificial neural networks in computer vision before delving into the wide range of applications Google has pursued in this area. Topics will include image summarization, image augmentation, artistic style transfer, and medical diagnostics.
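
For a flavor of one listed topic, here is a minimal sketch of the Gram-matrix style loss behind artistic style transfer; this is the textbook formulation (assuming PyTorch), not Google's implementation.

```python
# Gram-matrix style loss: match channel correlations of CNN feature maps.
import torch

def gram(features):
    """Channel-by-channel correlations of a (C, H, W) feature map."""
    c, h, w = features.shape
    f = features.view(c, h * w)
    return f @ f.t() / (c * h * w)

def style_loss(generated, style):
    """Match the Gram matrices of generated-image and style-image features."""
    return ((gram(generated) - gram(style)) ** 2).sum()

# Feature maps would normally come from a fixed CNN layer; random stand-ins here.
print(style_loss(torch.rand(64, 32, 32), torch.rand(64, 32, 32)).item())
```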

Date and Time: 
Wednesday, April 26, 2017 - 4:30pm
Venue: 
Packard 101

Monitorless Workspaces and Operating Rooms of the Future: Virtual/Augmented Reality through Multiharmonic Lock-In Amplifiers [SCIEN Talk]

Topic: 
Monitorless Workspaces and Operating Rooms of the Future: Virtual/Augmented Reality through Multiharmonic Lock-In Amplifiers
Abstract / Description: 

In my childhood I invented a new kind of lock-in amplifier and used it as the basis for the world's first wearable augmented reality computer (http://wearcam.org/par). It allowed me to see radio waves, sound waves, and electrical signals inside the human body, all aligned perfectly with the physical space in which they were present. I built this equipment into special electric eyeglasses that automatically adjusted their convergence and focus to match their surroundings. By shearing the spacetime continuum, one sees a stroboscopic vision in coordinates in which the speed of light, sound, or wave propagation is exactly zero (http://wearcam.org/kineveillance.pdf) or slowed down, making these signals visible to radio engineers, sound engineers, neurosurgeons, and the like. See the picture of a violin attached to the desk in my office at Meta, where we are creating the future of computing based on Human-in-the-Loop Intelligence (https://en.wikipedia.org/wiki/Humanistic_intelligence).
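
The core signal-processing idea, lock-in amplification, can be sketched in a few lines of numpy (a simplified single-frequency toy, not Mann's multiharmonic instrument): mixing a noisy measurement with quadrature references at the stimulus frequency and low-pass filtering recovers its amplitude and phase, which is why the wave appears frozen in the demodulated output.

```python
# Toy lock-in amplifier: recover a weak tone's amplitude and phase from noise.
import numpy as np

fs, f0 = 100_000.0, 1_000.0            # sample rate and reference frequency [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
# A weak 1 kHz signal buried in noise four times its amplitude.
signal = 0.5 * np.cos(2 * np.pi * f0 * t + 0.7) + np.random.normal(0, 2.0, t.size)

# Mix with in-phase and quadrature references, then low-pass (here: average).
i = np.mean(signal * np.cos(2 * np.pi * f0 * t))
q = np.mean(signal * -np.sin(2 * np.pi * f0 * t))

print(2 * np.hypot(i, q))              # amplitude: ~0.5 despite the noise
print(np.arctan2(q, i))                # phase: ~0.7 rad
```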

More Information: http://weartech.com/bio.htm

Date and Time: 
Wednesday, April 19, 2017 - 4:30pm
Venue: 
Packard 101

Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision [SCIEN]

Topic: 
Capturing the “Invisible”: Computational Imaging for Robust Sensing and Vision
Abstract / Description: 

Imaging has become an essential part of how we communicate with each other, how autonomous agents sense the world and act independently, and how we study chemical reactions and biological processes. Today's imaging and computer vision systems, however, often fail in critical scenarios, for example in low light or in fog. This is due to ambiguity in the captured images, introduced partly by imperfect capture systems, such as cellphone optics and sensors, and partly by ambiguity present in the signal before measurement, such as photon shot noise. This ambiguity makes imaging with conventional cameras challenging, e.g. low-light cellphone imaging, and it makes high-level computer vision tasks difficult, such as scene segmentation and understanding.

In this talk, I will present several examples of algorithms that computationally resolve this ambiguity and make sensing and vision systems robust. These methods rely on three key ingredients: accurate probabilistic forward models, learned priors, and efficient large-scale optimization methods. In particular, I will show how to achieve better low-light imaging using cellphones (beating Google's HDR+) and how to classify images at 3 lux (substantially outperforming very deep convolutional networks, such as the Inception-v4 architecture). Using a similar methodology, I will discuss ways to miniaturize existing camera systems by designing ultra-thin, focus-tunable diffractive optics. Finally, I will present exotic new imaging modalities that enable applications at the forefront of vision and imaging, such as seeing through scattering media and imaging objects outside the direct line of sight.
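
The three ingredients can be seen working together in a toy 1-D deblurring problem (an illustrative numpy sketch under simple assumptions, not the speaker's systems): a known forward model y = Ax + noise, a quadratic smoothness prior, and plain gradient descent as the optimizer.

```python
# Toy model-based reconstruction: forward model + prior + optimizer.
import numpy as np

n = 200
x_true = np.zeros(n)
x_true[60:90], x_true[120:160] = 1.0, 0.5

# Forward model: convolution with a 9-tap box blur, plus Gaussian noise.
kernel = np.ones(9) / 9.0
A = lambda v: np.convolve(v, kernel, mode="same")   # symmetric kernel -> A = A^T
y = A(x_true) + np.random.normal(0, 0.02, n)

# Prior: quadratic penalty on first differences (favors smooth signals).
D = lambda v: np.diff(v)
Dt = lambda d: np.concatenate([[-d[0]], -np.diff(d), [d[-1]]])  # adjoint of D

# Optimizer: gradient descent on ||A x - y||^2 / 2 + lam * ||D x||^2 / 2.
lam, step = 0.05, 0.5
x = np.zeros(n)
for _ in range(500):
    x -= step * (A(A(x) - y) + lam * Dt(D(x)))

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```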

Date and Time: 
Wednesday, April 12, 2017 - 4:30pm
Venue: 
Packard 101

Workshop on Augmented and Mixed Reality [SCIEN]

Topic: 
Workshop on Augmented and Mixed Reality
Abstract / Description: 

This workshop will bring together scientists and engineers who are advancing sensor technologies, computer vision, machine learning, head-mounted displays, and our understanding of human vision, with developers who are creating novel applications for augmented and mixed reality in retail, education, science, and medicine.

Date and Time: 
Thursday, May 11, 2017 (All day)
Venue: 
Tresidder Union

Practical Computer Vision for Self-Driving Cars [SCIEN]

Topic: 
Practical Computer Vision for Self-Driving Cars
Abstract / Description: 

Cruise is developing and testing a fleet of self-driving cars on the streets of San Francisco. Getting these cars to drive is a hard engineering and science problem; this talk explains, at a high level, how self-driving cars work and how computer vision, from camera hardware to deep learning, helps make a self-driving car go.

More Information: https://www.getcruise.com/
see also http://www.theverge.com/2017/1/19/14327954/gm-self-driving-car-cruise-chevy-bolt-video

Date and Time: 
Wednesday, April 5, 2017 - 4:30pm
Venue: 
Packard 101

New Directions in Management Science & Engineering: A Brief History of the Virtual Lab

Topic: 
New Directions in Management Science & Engineering: A Brief History of the Virtual Lab
Abstract / Description: 

Lab experiments have long played an important role in behavioral science, in part because they allow for carefully designed tests of theory, and in part because randomized assignment facilitates identification of causal effects. At the same time, lab experiments have traditionally suffered from numerous constraints (e.g., short duration, small scale, unrepresentative subjects, and simplistic designs) that limit their external validity. In this talk I describe how the web in general, and crowdsourcing sites like Amazon's Mechanical Turk in particular, allow researchers to create "virtual labs" in which they can conduct behavioral experiments of a scale, duration, and realism that far exceed what is possible in physical labs. To illustrate, I describe some recent experiments that showcase the advantages of virtual labs, as well as some of their limitations. I then discuss how this relatively new experimental capability may unfold in the future, along with some implications for social and behavioral science.
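
The identification argument can be made concrete with a toy simulation (my illustration, not material from the talk): under randomized assignment, the simple difference in group means is an unbiased estimate of the average treatment effect, even with large individual variation.

```python
# Randomized assignment identifies the average treatment effect.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
baseline = rng.normal(50, 10, n)       # each subject's untreated outcome
true_effect = 2.0                      # average treatment effect to recover

treated = rng.random(n) < 0.5          # randomized assignment
outcome = baseline + true_effect * treated

estimate = outcome[treated].mean() - outcome[~treated].mean()
print(estimate)                        # close to 2.0, despite large variation
```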

Date and Time: 
Thursday, March 16, 2017 - 12:15pm
Venue: 
Packard 101
