SCIEN Talk

SCIEN & EE 292E: Mobile VR for vision testing and treatment

Topic: 
Mobile VR for vision testing and treatment
Abstract / Description: 

Consumer-level HMDs are adequate for many medical applications. Vivid Vision (VV) takes advantage of their low cost, light weight, and large VR gaming code base to build vision tests and treatments. The company's software is built with the Unity engine, which allows it to run on many hardware platforms. New headsets arrive every six months or less, which creates interesting challenges in the medical device space. VV's flagship product is the commercially available Vivid Vision System, used by more than 120 clinics to test and treat binocular dysfunctions such as convergence difficulties, amblyopia, strabismus, and stereo blindness. VV has recently developed a new, VR-based visual field analyzer.

Date and Time: 
Wednesday, June 6, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Emerging LIDAR concepts and sensor technologies for autonomous vehicles

Topic: 
Emerging LIDAR concepts and sensor technologies for autonomous vehicles
Abstract / Description: 

Sensor technologies such as radar, camera, and LIDAR have become the key enablers for achieving higher levels of autonomous control in vehicles, from fleet to commercial. Questions remain, however: to what extent will radar and camera technologies continue to improve, and which LIDAR concepts will be the most successful? This presentation will provide an overview of the tradeoffs between LIDAR and competing sensor technologies (camera and radar); this discussion will reinforce the need for sensor fusion. We will also discuss the types of improvements that are necessary for each sensor technology. The presentation will summarize and compare various LIDAR designs -- mechanical, flash, MEMS-mirror based, optical phased array, and FMCW (frequency-modulated continuous wave) -- and then discuss each LIDAR concept's future outlook. Finally, there will be a quick review of guidelines for selecting photonic components such as photodetectors, light sources, and MEMS mirrors.
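
As a rough illustration of the FMCW concept mentioned above, here is a minimal Python sketch (with assumed, illustrative chirp parameters rather than values from the talk) of how a linearly chirped FMCW LIDAR maps a measured beat frequency to range:

    # Back-of-the-envelope FMCW range calculation (illustrative assumptions only).
    C = 3.0e8             # speed of light, m/s
    bandwidth_hz = 1.0e9  # chirp bandwidth B (assumed)
    chirp_time_s = 10e-6  # chirp duration T (assumed)

    def fmcw_range(beat_frequency_hz):
        """Range from the measured beat frequency: R = c * f_b * T / (2 * B)."""
        return C * beat_frequency_hz * chirp_time_s / (2.0 * bandwidth_hz)

    print(fmcw_range(13.3e6))  # ~20 m for a 13.3 MHz beat tone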

Date and Time: 
Wednesday, May 30, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: LiDAR Technology for Autonomous Vehicles

Topic: 
LiDAR Technology for Autonomous Vehicles
Abstract / Description: 

LiDAR is a key sensor for autonomous vehicles that enables them to understand their surroundings in three dimensions. I will discuss the evolution of LiDAR and describe various LiDAR technologies currently being developed, including rotating sensors, MEMS and optical phased array scanning devices, flash detector arrays, and single-photon avalanche detectors. The requirements for autonomous vehicles are very challenging, and each technology has advantages and disadvantages that will be discussed. The architecture of the LiDAR also affects how it fits into the overall vehicle architecture. Fusion with other sensors, including radar, cameras, and ultrasound, will be part of the overall solution. Other LiDAR applications, including non-automotive transportation, mining, precision agriculture, UAVs, mapping, surveying, and security, will also be described.
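
For reference, a minimal sketch of the pulsed time-of-flight relation that underlies most of the scanning LiDAR designs listed above; the numbers are illustrative assumptions, not figures from the talk:

    # Pulsed time-of-flight: range is half the round-trip distance of the pulse.
    C = 3.0e8  # speed of light, m/s

    def tof_range_m(round_trip_time_s):
        """Range from round-trip pulse time: R = c * t / 2."""
        return C * round_trip_time_s / 2.0

    # A 1 microsecond round trip corresponds to a target ~150 m away.
    print(tof_range_m(1.0e-6))  # 150.0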

Date and Time: 
Wednesday, May 23, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Pushing the Limits of Fluorescence Microscopy with adaptive imaging and machine learning

Topic: 
Pushing the Limits of Fluorescence Microscopy with adaptive imaging and machine learning
Abstract / Description: 

Fluorescence microscopy lets biologists see and understand the intricate machinery at the heart of living systems and has led to numerous discoveries. Any technological progress toward improving image quality extends the range of possible observations and consequently opens the path to new findings. I will show how modern machine learning and smart robotic microscopes can push the boundaries of observability. One fundamental obstacle in microscopy takes the form of a trade-off between imaging speed, spatial resolution, light exposure, and imaging depth. We have shown that deep learning can circumvent these physical limitations: microscopy images can be restored even if 60-fold fewer photons are used during acquisition, isotropic resolution can be achieved even with 10-fold under-sampling along the axial direction, and diffraction-limited structures can be resolved at 20-fold higher frame rates compared to state-of-the-art methods. Moreover, I will demonstrate how smart microscopy techniques can achieve the full optical resolution of light-sheet microscopes — instruments capable of capturing the entire developmental arc of an embryo from a single cell to a fully formed motile organism. Our instrument improves spatial resolution and signal strength two- to five-fold, recovers cellular and sub-cellular structures in many regions otherwise not resolved, adapts to the spatiotemporal dynamics of genetically encoded fluorescent markers, and robustly optimizes imaging performance during large-scale morphogenetic changes in living organisms.
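
As a loose illustration of the learning-based restoration idea described above (not the speaker's actual pipeline), here is a minimal PyTorch sketch that trains a tiny CNN on paired low-photon/high-SNR images; the data are synthetic stand-ins:

    # Minimal sketch (assumed): learn to restore low-photon-count images from
    # paired low/high exposure acquisitions. Synthetic data stand in for real ones.
    import torch
    import torch.nn as nn

    model = nn.Sequential(               # tiny stand-in for a U-Net-style restorer
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    clean = torch.rand(16, 1, 64, 64)         # stand-in "high SNR" targets
    noisy = torch.poisson(clean * 5.0) / 5.0  # simulated low-photon inputs

    for step in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(noisy), clean)   # restore noisy toward clean
        loss.backward()
        optimizer.step()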

Date and Time: 
Wednesday, May 16, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Advances in automotive image sensors

Topic: 
Advances in automotive image sensors
Abstract / Description: 

In this talk I will present recent advances in 2D and 3D image sensors for automotive applications such as rear-view cameras, surround-view cameras, ADAS cameras, and in-cabin driver-monitoring cameras. This includes developments in high dynamic range image capture, LED flicker mitigation, high frame rate capture, global shutter, near-infrared sensitivity, and range imaging. I will also describe sensor developments for short-range and long-range LIDAR systems.
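
To make the high-dynamic-range capture idea concrete, here is a small, assumed sketch of multi-exposure HDR merging; automotive HDR sensors typically implement something analogous on-chip, and the weighting scheme below is only illustrative:

    # Sketch (assumed): merge bracketed exposures into a linear radiance estimate,
    # down-weighting saturated and near-black pixels.
    import numpy as np

    def merge_hdr(frames, exposure_times):
        """frames: list of float arrays in [0, 1]; exposure_times: seconds each."""
        num = np.zeros_like(frames[0])
        den = np.zeros_like(frames[0])
        for img, t in zip(frames, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weighting: trust mid-range pixels
            num += w * img / t                 # radiance estimate from this exposure
            den += w
        return num / np.maximum(den, 1e-6)

    frames = [np.random.rand(4, 4) for _ in range(3)]
    radiance = merge_hdr(frames, [1 / 1000, 1 / 250, 1 / 60])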

Date and Time: 
Wednesday, May 9, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: 3D single-molecule super-resolution microscopy using a tilted light sheet

Topic: 
3D single-molecule super-resolution microscopy using a tilted light sheet
Abstract / Description: 

To obtain a complete picture of subcellular structures, cells must be imaged with high resolution in all three dimensions (3D). In this talk, I will present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. Here the axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The result is simple and flexible 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D super-resolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSFs for fiducial bead tracking and live axial drift correction. We think that TILT3D in the future will become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

Date and Time: 
Wednesday, May 2, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Video Coding before and beyond HEVC

Topic: 
Video Coding before and beyond HEVC
Abstract / Description: 

We enjoy video content in a wide range of situations. Although that content is already compressed to between 1/10 and 1/1000 of its original size, video traffic over the internet has been reported to be growing at 31% per year and is expected to account for 82% of all internet traffic by 2020. This is why the development of better compression technology is in such demand. ITU-T and ISO/IEC jointly developed the latest video coding standard, High Efficiency Video Coding (HEVC), in 2013, and they are about to start work on the next-generation standard. The corresponding proposals will be evaluated at the April 2018 meeting in San Diego, just a week before this talk.
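
For a sense of scale, a tiny sketch of the arithmetic behind the figures quoted above (the growth rate is the one reported in the abstract, compounded naively):

    # Compound growth of video traffic at the reported 31% per year (illustrative).
    growth_per_year = 0.31
    traffic = 1.0                 # normalized traffic today
    for year in range(3):         # compound over three years
        traffic *= 1.0 + growth_per_year
    print(round(traffic, 2))      # ~2.25x after three years at 31%/year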

In this talk, we will first survey the advances in video coding technology over the last several decades. We will then present the latest topics, including a report from the San Diego meeting and some new approaches such as deep learning techniques.

Date and Time: 
Wednesday, April 25, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Transport-Aware Cameras

Topic: 
Transport-Aware Cameras
Abstract / Description: 

Conventional cameras record all light falling onto their sensor regardless of its source or its 3D path to the camera. In this talk I will present an emerging family of coded-exposure video cameras that can be programmed to record just a fraction of the light coming from an artificial source---be it a common street lamp or a programmable projector---based on the light path's geometry or timing. Live video from these cameras offers a very unconventional view of our everyday world, in which refraction and scattering that cannot be noticed with the naked eye become apparent, and the flicker of electric lights can be turned into a powerful cue for analyzing the electrical grid, from room to city.
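
As a simplified illustration of the coded-exposure principle described above (not the speaker's implementation), the following sketch gates each pixel's sub-frame measurements with a per-pixel binary mask before summing them:

    # Sketch (assumed, simplified): each pixel accumulates light only during the
    # sub-frame intervals selected by its mask, so light synchronized with a
    # controlled source can be kept and the rest rejected. Values are illustrative.
    import numpy as np

    def coded_exposure(subframes, masks):
        """subframes, masks: arrays of shape (T, H, W); masks contain 0s and 1s."""
        return (subframes * masks).sum(axis=0)

    T, H, W = 8, 4, 4
    subframes = np.random.rand(T, H, W)                 # light in each sub-interval
    masks = (np.arange(T) % 2)[:, None, None] * np.ones((T, H, W))  # gate every other slot
    image = coded_exposure(subframes, masks)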

I will discuss the unique optical properties and power efficiency of these "transport-aware cameras" through three case studies: the ACam for analyzing the electrical grid; EpiScan3D; and our progress toward designing a computational CMOS sensor for coded two-bucket imaging---a novel capability that promises much more flexible and powerful transport-aware cameras compared to existing off-the-shelf solutions.

Date and Time: 
Wednesday, April 18, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE292E seminar: Light-field Display Architecture and the Heterogeneous Display Ecosystem FoVI3D

Topic: 
Light-field Display Architecture and the Heterogeneous Display Ecosystem FoVI3D
Abstract / Description: 

Human binocular vision and acuity, and the accompanying 3D retinal processing of the human eye and brain, are specifically designed to promote situational awareness and understanding in the natural 3D world. The ability to resolve depth within a scene, whether natural or artificial, improves our spatial understanding of the scene and, as a result, reduces the cognitive load that accompanies analysis of and collaboration on complex tasks.

A light-field display projects 3D imagery that is visible to the unaided eye (without glasses or head tracking) and allows for perspective-correct visualization within the display's projection volume. Binocular disparity, occlusion, specular highlights and gradient shading, and other expected depth cues are correct from the viewer's perspective, as they are in the natural real-world light field.

Light-field displays are no longer a science fiction concept, and a few companies are producing impressive light-field display prototypes. This presentation will review:
· The application agnostic light-field display architecture being developed at FoVI3D.
· General light-field display properties and characteristics such as field of view, directional resolution, and their effect on the 3D aerial image.
· The computation challenge for generating high-fidelity light-fields.
· A display agnostic ecosystem.

Demo after the talk: The FoVI3D Light-field Display Developer Kit (LfD DK2) is a prototype wide field-of-view, full-parallax, monochrome light-field display capable of projecting ~100 million unique rays to fill a 9 cm x 9 cm x 9 cm projection volume. The particulars of the light-field compute, photonics subsystem, and hogel optics will be discussed during the presentation.
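
As a back-of-the-envelope check on that ray budget, the spatial and directional resolutions below are hypothetical assumptions, but they show how a ray count on the order of 100 million can arise:

    # Light-field ray budget: total rays ~= number of hogels x angular samples per hogel.
    hogel_grid = (500, 500)      # assumed spatial (hogel) resolution
    directional_res = (20, 20)   # assumed angular samples per hogel
    total_rays = hogel_grid[0] * hogel_grid[1] * directional_res[0] * directional_res[1]
    print(f"{total_rays:,}")     # 100,000,000 rays, the order of magnitude quoted above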

Date and Time: 
Wednesday, April 11, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN presents Video-based Reconstruction of the Real World in Motion

Topic: 
Video-based Reconstruction of the Real World in Motion
Abstract / Description: 

New methods for capturing highly detailed models of moving real-world scenes with cameras, i.e., models of detailed deforming geometry, appearance, or even material properties, are becoming more and more important in many application areas. They are needed in visual content creation, for instance in visual effects, where they are used to build highly realistic models of virtual human actors. Furthermore, efficient, reliable, and highly accurate dynamic scene reconstruction is nowadays an important prerequisite for many other application domains, such as human-computer and human-robot interaction, autonomous robotics and autonomous driving, virtual and augmented reality, 3D and free-viewpoint TV, immersive telepresence, and even video editing.

The development of dynamic scene reconstruction methods has been a long-standing challenge in computer graphics and computer vision. Recently, the field has seen important progress. New methods were developed that capture - without markers or scene instrumentation - rather detailed models of individual moving humans or general deforming surfaces from video recordings, and even capture simple models of appearance and lighting. However, despite this recent progress, the field is still at an early stage, and current technology is still starkly constrained in many ways. Many of today's state-of-the-art methods are still niche solutions that are designed to work under very constrained conditions, for instance: only in controlled studios, with many cameras, for very specific object types, for very simple types of motion and deformation, or at processing speeds far from real-time.

In this talk, I will present some of our recent work on detailed marker-less dynamic scene reconstruction and performance capture, in which we advanced the state of the art in several ways. For instance, I will briefly show new methods for marker-less capture of the full body (like our VNECT approach) and hands that work in more general environments, and even in real time with a single camera. I will then show some of our work on high-quality face performance capture and face reenactment. Here, I will also illustrate the benefits of both model-based and learning-based approaches and show how different ways of joining the forces of the two open up new possibilities. Live demos included!

Date and Time: 
Wednesday, March 21, 2018 - 4:30pm
Venue: 
Packard 101
