SCIEN Talk

SCIEN Talk: Drone IoT Networks for Virtual Human Teleportation

Topic: 
Drone IoT Networks for Virtual Human Teleportation
Abstract / Description: 

Cyber-physical/human systems (CPS/CHS) are set to play an increasingly visible role in our lives, advancing research and technology across diverse disciplines. I am exploring novel synergies between three emerging CPS/CHS technologies with potentially broad societal impact: virtual/augmented reality (VR/AR), the Internet of Things (IoT), and autonomous micro-aerial robots (UAVs). My long-term research objective is UAV-IoT-deployed ubiquitous VR/AR immersive communication that can enable virtual human teleportation to any corner of the world. This would enable a broad range of technological and societal advances in energy conservation, quality of life, and the global economy.
I am investigating fundamental problems at the intersection of signal acquisition and representation, communications and networking, (embedded) sensors and systems, and rigorous machine learning for stochastic control that arise in this context. I envision a future where UAV-IoT-deployed immersive communication systems will break existing barriers in remote sensing, monitoring, localization and mapping, navigation, and scene understanding. The presentation will outline some of my present and envisioned investigations. Interdisciplinary applications will be highlighted.

Date and Time: 
Wednesday, March 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Temporal coding of volumetric imagery

Topic: 
Temporal coding of volumetric imagery
Abstract / Description: 

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned or captured in parallel and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This talk describes systems and methods with which to efficiently detect and visualize image volumes by temporally encoding the extra dimensions' information into 2D measurements or displays. Highlights of my research include video and 3D recovery from photographs and true-3D augmented reality display by time multiplexing. In the talk, I show how temporal optical coding can improve system performance, battery life, and hardware simplicity for a variety of platforms and applications.
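
To make the encoding idea concrete, the following is a minimal sketch of per-pixel temporal coding, not the speaker's implementation: each temporal slice of an (x, y, t) volume is modulated by a known binary code before being summed into a few 2D snapshots, and the time dimension is recovered per pixel by regularized least squares. The array sizes and the ridge parameter lam are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, T, K = 32, 32, 8, 4            # image size, temporal slices, coded snapshots

volume = rng.random((H, W, T))       # unknown (x, y, t) image volume
codes = rng.integers(0, 2, (K, T))   # known per-snapshot temporal codes

# Forward model: each 2D snapshot is a code-weighted sum over time.
snapshots = np.einsum('kt,hwt->khw', codes, volume)

# Recovery: per-pixel ridge regression (K equations, T unknowns).
# With K < T this is underdetermined; real systems add sparsity priors.
lam = 1e-2
A = codes.astype(float)
G = np.linalg.inv(A.T @ A + lam * np.eye(T)) @ A.T   # T x K recovery matrix
recovered = np.einsum('tk,khw->hwt', G, snapshots)

print('relative error:', np.linalg.norm(recovered - volume) / np.linalg.norm(volume))
```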

Date and Time: 
Wednesday, March 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism

Topic: 
ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism
Abstract / Description: 

Computer-graphics engineers and vision scientists want to generate images that reproduce realistic depth-dependent blur. Current rendering algorithms take into account scene geometry, aperture size, and focal distance, and they produce photorealistic imagery akin to that of a high-quality camera. But to create immersive experiences, rendering algorithms should aim instead for perceptual realism. In so doing, they should take into account the significant optical aberrations of the human eye. We developed a method that, by incorporating some of those aberrations, yields displayed images that produce retinal images much closer to the ones that occur in natural viewing. In particular, we create displayed images that take the eye's chromatic aberration into account. This produces different chromatic effects in the retinal image for objects farther or nearer than the current focus distance. We call the method ChromaBlur. We conducted two experiments that illustrate the benefits of ChromaBlur. One showed that accommodation (eye focusing) is driven quite effectively when ChromaBlur is used and not driven at all when conventional methods are used. The second showed that perceived depth and realism are greater with imagery created by ChromaBlur than with imagery created conventionally. ChromaBlur can be coupled with focus-adjustable lenses and gaze tracking to reproduce the natural relationship between accommodation and blur in HMDs and other immersive devices. It can thereby minimize the adverse effects of vergence-accommodation conflicts.
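
As a rough illustration of the principle, not the authors' renderer: the eye's longitudinal chromatic aberration means short (blue) wavelengths focus in front of long (red) ones, so an object at a given dioptric distance from the current focus should be blurred by a different amount in each color channel. The per-channel dioptric offsets and the blur-per-diopter scaling below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Assumed longitudinal chromatic aberration of the eye, in diopters,
# relative to the green channel (illustrative values only).
LCA_OFFSET = {'r': +0.4, 'g': 0.0, 'b': -0.6}
BLUR_PER_DIOPTER = 2.0  # Gaussian sigma in pixels per diopter (assumed)

def chromablur(rgb, defocus_diopters):
    """Blur each color channel by its own defocus magnitude.

    rgb: float array (H, W, 3); defocus_diopters: signed dioptric
    distance of the object from the viewer's current focus
    (positive = nearer than fixation).
    """
    out = np.empty_like(rgb)
    for i, ch in enumerate('rgb'):
        # Per-channel defocus = scene defocus shifted by the eye's LCA.
        d = defocus_diopters + LCA_OFFSET[ch]
        out[..., i] = gaussian_filter(rgb[..., i], sigma=abs(d) * BLUR_PER_DIOPTER)
    return out
```

At zero defocus the green channel stays sharp while red and blue receive small, unequal blurs; objects nearer or farther than fixation shift that asymmetry in opposite directions, which is the signed cue that can drive accommodation.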

Date and Time: 
Wednesday, February 28, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Data-driven Computational Imaging

Topic: 
Data-driven Computational Imaging
Abstract / Description: 

Between ever-increasing pixel counts, ever-cheaper sensors, and the ever-expanding World Wide Web, natural image data has become plentiful. These vast quantities of data, be they high-frame-rate videos or huge curated datasets like ImageNet, stand to substantially improve the performance and capabilities of computational imaging systems. However, using this data efficiently presents its own unique set of challenges. In this talk I will use data to develop better priors, improve reconstructions, and enable new capabilities for computational imaging systems.
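
One common recipe in this space, sketched here generically rather than as the speaker's specific method, is "plug-and-play" reconstruction: a data-driven denoiser stands in for the image prior inside an iterative solver for y = Ax + noise. The forward operator, step size, and the Gaussian smoother standing in for a learned denoiser are all assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def plug_and_play(y, A, denoise, n_iters=50, step=0.5):
    """Proximal-gradient loop with a denoiser acting as the prior.

    y: measurements; A: forward operator (matrix); denoise: any
    denoiser, e.g. a network trained on large image datasets.
    """
    x = A.T @ y  # crude initialization
    for _ in range(n_iters):
        x = x - step * A.T @ (A @ x - y)  # gradient step on the data term
        x = denoise(x)                    # prior step: pull toward natural images
    return x

# Toy usage: compressive random measurements of a smooth 1D signal,
# with Gaussian smoothing standing in for a learned denoiser.
rng = np.random.default_rng(1)
x_true = gaussian_filter(rng.random(128), sigma=4)
A = rng.normal(size=(64, 128)) / np.sqrt(128)
y = A @ x_true
x_hat = plug_and_play(y, A, lambda v: gaussian_filter(v, sigma=2))
print('relative error:', np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```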

Date and Time: 
Wednesday, February 21, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Accelerated Computing for Light Field and Holographic Displays

Topic: 
Accelerated Computing for Light Field and Holographic Displays
Abstract / Description: 

In this talk, I will present two papers recently published at SIGGRAPH Asia 2017. In the first, we present a 4D light field sampling and rendering system for light field displays that supports both foveation and accommodation, reducing rendering cost while maintaining perceptual quality and comfort. In the second, we present a light-field-based computer-generated holography (CGH) rendering pipeline that reproduces high-definition 3D scenes with continuous depth and supports intra-pupil view-dependent occlusion. Our rendering and Fresnel integral accurately account for diffraction and support various types of reference illumination for holograms.
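
For reference, the standard Fresnel diffraction integral that such CGH pipelines build on (the paper's exact discretized form may differ) propagates a field U over a distance z:

```latex
U(x, y, z) = \frac{e^{ikz}}{i\lambda z}
  \iint U(x', y', 0)\,
  \exp\!\left[\frac{ik}{2z}\bigl((x - x')^{2} + (y - y')^{2}\bigr)\right]
  \,dx'\,dy', \qquad k = \frac{2\pi}{\lambda}.
```

In light-field-driven CGH, sampled rays are typically converted into elementary fields and propagated to the hologram plane with a kernel of this form, which is how diffraction enters the pipeline.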

Date and Time: 
Wednesday, February 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Street View 2018 - The Newest Generation of Mapping Hardware

Topic: 
Street View 2018 - The Newest Generation of Mapping Hardware
Abstract / Description: 

A brief overview of Street View, from its inception 10 years ago until now, will be presented. Street-level imagery was the prime objective of Google's Street View in the past; the project has since evolved into a state-of-the-art mapping platform. Challenges and solutions in the design and fabrication of the imaging system, and in optimizing hardware to align with specific software post-processing, will be discussed. Real-world challenges of fielding hardware in 80+ countries will also be addressed.

Date and Time: 
Wednesday, February 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Learning where to look in 360 environments

Topic: 
Learning where to look in 360 environments
Abstract / Description: 

Many vision tasks require not just categorizing a well-composed, human-taken photo, but also intelligently deciding "where to look" in order to get a meaningful observation in the first place. We explore how an agent can anticipate the visual effects of its actions, and we develop policies for learning to look around actively, both for the sake of a specific recognition task and for generic exploratory behavior. In addition, we examine how a system can learn from unlabeled video to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree panoramas. Finally, to facilitate 360-degree video processing, we introduce spherical convolution, which allows off-the-shelf deep networks and object detectors to be applied to 360-degree imagery.
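
The intuition behind spherical convolution, sketched below under my own simplifying assumptions rather than as the authors' learned formulation: an equirectangular panorama stretches content near the poles by roughly 1/cos(latitude), so a filter that should behave uniformly on the sphere must widen row by row.

```python
import numpy as np

def spherical_blur(pano, base_width=5):
    """Row-adaptive horizontal box filter on an equirectangular image.

    pano: (H, W) array covering latitudes -90..90 degrees top to bottom.
    Each row's kernel widens by 1/cos(latitude) to compensate for
    equirectangular stretching (a toy stand-in for learned per-row kernels).
    """
    pano = np.asarray(pano, dtype=float)
    H, W = pano.shape
    out = np.empty_like(pano)
    lats = np.linspace(-np.pi / 2, np.pi / 2, H + 2)[1:-1]  # avoid exact poles
    for r in range(H):
        width = min(W, int(np.ceil(base_width / np.cos(lats[r]))))
        kernel = np.ones(width) / width
        out[r] = np.convolve(pano[r], kernel, mode='same')
    return out
```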

Date and Time: 
Wednesday, January 24, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Driverless Anything and the Role of LiDAR

Topic: 
Driverless Anything and the Role of LiDAR
Abstract / Description: 

LiDAR, or light detection and ranging, is a versatile light-based remote sensing technology that has recently attracted a great deal of attention. It has shown up in a number of media venues and has even sparked public debate about the engineering choices of a well-known electric car company, Tesla Motors. In this talk, the speaker will provide some background on LiDAR, discuss why it is a key link in the future autonomous-vehicle ecosystem, and explain its strong connection to power electronics technologies.
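
At its core, LiDAR ranging is simple arithmetic: distance is the pulse's round-trip time of flight multiplied by the speed of light, divided by two. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds):
    """Distance to a target from a pulse's round-trip time of flight."""
    return C * t_seconds / 2.0

# A return arriving 667 ns after the pulse left puts the target ~100 m away.
print(range_from_round_trip(667e-9))  # ~100.0 m
```

The flip side of this arithmetic is that centimeter-scale range resolution demands timing resolved to roughly 67 picoseconds, one reason the supporting electronics are so central to LiDAR system design.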

Date and Time: 
Wednesday, January 17, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Advancing Healthcare with AI and VR

Topic: 
Advancing Healthcare with AI and VR
Abstract / Description: 

Quality, cost, and accessibility form an iron triangle that has prevented healthcare from achieving accelerated advancement in the last few decades: improving any one of the three metrics may degrade the other two. However, thanks to recent breakthroughs in artificial intelligence (AI) and virtual reality (VR), this iron triangle can finally be shattered. In this talk, I will share the experience of developing DeepQ, an AI platform for AI-assisted diagnosis and VR-facilitated surgery. I will present three healthcare initiatives we have undertaken since 2012: Healthbox, Tricorder, and VR surgery, and I will explain how AI and VR play pivotal roles in improving diagnostic accuracy and treatment effectiveness. More specifically, I will describe how we have dealt not only with big-data analytics but also with small-data learning, which is typical of the medical domain. The talk concludes with roadmaps and a list of open research issues in signal processing and AI on the way to precision medicine and surgery.

Date and Time: 
Wednesday, January 10, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: Compressed Ultrafast Photography and Microscopy: Redefining the Limit of Passive Ultrafast Imaging

Topic: 
Compressed Ultrafast Photography and Microscopy: Redefining the Limit of Passive Ultrafast Imaging
Abstract / Description: 

High-speed imaging is an indispensable technology for blur-free observation of fast transient dynamics in virtually all areas, including science, industry, defense, energy, and medicine. Unfortunately, the frame rates of conventional cameras are significantly constrained by their data-transfer bandwidth and onboard storage. We demonstrate a two-dimensional dynamic imaging technique, compressed ultrafast photography (CUP), which can capture non-repetitive time-evolving events at up to 100 billion frames per second. Compared with existing ultrafast imaging techniques, CUP has the prominent advantage of measuring an (x, y, t) scene (x, y: spatial coordinates; t: time) with a single camera snapshot, thereby allowing observation of transient events on time scales down to tens of picoseconds. Thanks to CUP, humans can for the first time watch light pulses on the fly. Because this technology advances the imaging frame rate by orders of magnitude, it opens a new imaging regime and new scientific opportunities.
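
In outline (a paraphrase of the general compressed-sensing formulation, not necessarily the exact notation used in the talk), CUP records a single 2D measurement E that is a spatially encoded, temporally sheared, and time-integrated version of the dynamic scene I(x, y, t), and recovers the scene by regularized inversion:

```latex
E = \mathbf{T}\mathbf{S}\mathbf{C}\, I(x, y, t), \qquad
\hat{I} = \arg\min_{I}\; \tfrac{1}{2}\,\lVert E - \mathbf{T}\mathbf{S}\mathbf{C}\, I \rVert_2^2 + \beta\, \Phi(I),
```

where C is a pseudo-random spatial encoding, S the temporal shearing applied by a streak camera, T integration over time onto the sensor, and Φ a sparsity-promoting regularizer such as total variation.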

In this talk, I will discuss our recent effort to develop a second-generation CUP system and demonstrate its applications at scales from macroscopic to microscopic. For the first time, we imaged photonic Mach cones, capturing a "sonic boom" of light in action. Moreover, by adapting CUP for microscopy, we enabled two-dimensional fluorescence lifetime imaging at an unprecedented speed. An advantage of CUP recording is that even visually simple systems can be scientifically interesting when captured at such high speeds. Given CUP's capability, we expect it to find widespread applications in both fundamental and applied sciences, including biomedical research.

Date and Time: 
Wednesday, December 6, 2017 - 4:30pm
Venue: 
Packard 101
