SCIEN Talk

SCIEN Talk: Video-based Reconstruction of the Real World in Motion

Topic: 
Video-based Reconstruction of the Real World in Motion
Abstract / Description: 

New methods for capturing highly detailed models of moving real-world scenes with cameras, i.e., models of detailed deforming geometry, appearance, or even material properties, are becoming increasingly important in many application areas. They are needed in visual content creation, for instance in visual effects, to build highly realistic models of virtual human actors. Furthermore, efficient, reliable, and highly accurate dynamic scene reconstruction is now an important prerequisite for many other application domains, such as human-computer and human-robot interaction, autonomous robotics and autonomous driving, virtual and augmented reality, 3D and free-viewpoint TV, immersive telepresence, and even video editing.

The development of dynamic scene reconstruction methods has been a long-standing challenge in computer graphics and computer vision. Recently, the field has seen important progress. New methods have been developed that capture, without markers or scene instrumentation, rather detailed models of individual moving humans or general deforming surfaces from video recordings, and even simple models of appearance and lighting. Despite this recent progress, however, the field is still at an early stage, and current technology remains starkly constrained in many ways. Many of today's state-of-the-art methods are niche solutions designed to work only under very constrained conditions, for instance: in controlled studios, with many cameras, for very specific object types, for very simple types of motion and deformation, or at processing speeds far from real-time.

In this talk, I will present some of our recent work on detailed marker-less dynamic scene reconstruction and performance capture, in which we advanced the state of the art in several ways. For instance, I will briefly show new methods for marker-less capture of the full body (such as our VNect approach) and of hands that work in more general environments, in real time, and with a single camera. I will then show some of our work on high-quality face performance capture and face reenactment. Here, I will also illustrate the benefits of both model-based and learning-based approaches, and show how different ways of combining the two open up new possibilities. Live demos included!

Date and Time: 
Wednesday, March 21, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Drone IoT Networks for Virtual Human Teleportation

Topic: 
Drone IoT Networks for Virtual Human Teleportation
Abstract / Description: 

Cyber-physical/human systems (CPS/CHS) are set to play an increasingly visible role in our lives, advancing research and technology across diverse disciplines. I am exploring novel synergies between three emerging CPS/CHS technologies of prospectively broad societal impact: virtual/augmented reality (VR/AR), the Internet of Things (IoT), and autonomous micro-aerial robots (UAVs). My long-term research objective is UAV-IoT-deployed ubiquitous VR/AR immersive communication that can enable virtual human teleportation to any corner of the world, and thereby a broad range of technological and societal advances that will enhance energy conservation, quality of life, and the global economy.
I am investigating fundamental problems at the intersection of signal acquisition and representation, communications and networking, (embedded) sensors and systems, and rigorous machine learning for stochastic control that arise in this context. I envision a future where UAV-IoT-deployed immersive communication systems will break existing barriers in remote sensing, monitoring, localization and mapping, navigation, and scene understanding. The presentation will outline some of my present and envisioned investigations. Interdisciplinary applications will be highlighted.

Date and Time: 
Wednesday, March 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Temporal coding of volumetric imagery

Topic: 
Temporal coding of volumetric imagery
Abstract / Description: 

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it still relies on the same sampling mechanisms that have been used for decades: every voxel must be scanned or captured in parallel and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption.

This talk describes systems and methods for efficiently detecting and visualizing image volumes by temporally encoding the extra dimensions' information into 2D measurements or displays. Highlights of my research include video and 3D recovery from photographs, and true-3D augmented-reality image display by time multiplexing. In the talk, I will show how temporal optical coding can improve system performance, battery life, and hardware simplicity for a variety of platforms and applications.
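To make the idea of temporal coding concrete, here is a minimal sketch of one common instance, coded-exposure (snapshot compressive) video, in which a short video volume is collapsed into a single 2D measurement through per-pixel temporal masks. The names, sizes, and naive recovery baseline are illustrative assumptions, not the speaker's actual systems.

# Minimal sketch of temporally coded acquisition of an image volume:
# a T-frame video is compressed into one 2D measurement by modulating
# each frame with a per-pixel temporal mask (coded-exposure style).
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 64, 64                    # frames x height x width
video = rng.random((T, H, W))          # stand-in for the scene volume
masks = rng.integers(0, 2, (T, H, W))  # binary per-pixel exposure codes

# Forward model: y = sum_t mask_t * x_t  (one coded 2D snapshot)
y = (masks * video).sum(axis=0)

# Naive per-pixel baseline: divide measured energy evenly among the
# frames a pixel was open for. Real systems replace this with
# sparsity- or learning-based reconstruction.
counts = masks.sum(axis=0).clip(min=1)
x_hat = masks * (y / counts)
print(y.shape, x_hat.shape)            # (64, 64) (8, 64, 64)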

Date and Time: 
Wednesday, March 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism

Topic: 
ChromaBlur: Rendering Chromatic Eye Aberration Improves Accommodation and Realism
Abstract / Description: 

Computer-graphics engineers and vision scientists want to generate images that reproduce realistic depth-dependent blur. Current rendering algorithms take into account scene geometry, aperture size, and focal distance, and they produce photorealistic imagery as a high-quality camera would. But to create immersive experiences, rendering algorithms should instead aim for perceptual realism, and in so doing should take into account the significant optical aberrations of the human eye. We developed a method that, by incorporating some of those aberrations, yields displayed images that produce retinal images much closer to the ones that occur in natural viewing. In particular, we create displayed images that take the eye's chromatic aberration into account. This produces different chromatic effects in the retinal image for objects farther or nearer than the current focus. We call the method ChromaBlur. We conducted two experiments that illustrate its benefits. The first showed that accommodation (eye focusing) is driven quite effectively when ChromaBlur is used, and not driven at all when conventional methods are used. The second showed that perceived depth and realism are greater with imagery created by ChromaBlur than with imagery created conventionally. ChromaBlur can be coupled with focus-adjustable lenses and gaze tracking to reproduce the natural relationship between accommodation and blur in HMDs and other immersive devices. It can thereby minimize the adverse effects of vergence-accommodation conflicts.
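As a rough illustration of the idea (not the paper's actual renderer), the sketch below blurs each color channel by a defocus magnitude that depends on an assumed longitudinal chromatic aberration (LCA) of the eye, so objects nearer or farther than the current focus acquire different chromatic fringes. The LCA offsets and the Gaussian stand-in for the eye's point-spread function are illustrative assumptions.

# ChromaBlur-style sketch: per-channel depth-dependent blur driven by
# an assumed LCA of the eye (red focuses farther, blue nearer than
# green). A Gaussian stands in for the true eye PSF.
import numpy as np
from scipy.ndimage import gaussian_filter

def chromablur(img, scene_diopters, focus_diopters, gain=2.0):
    """img: HxWx3 float image; distances given in diopters (1/m)."""
    lca = {"r": -0.3, "g": 0.0, "b": +0.6}   # assumed per-channel focus shifts
    out = np.empty_like(img)
    for c, key in enumerate("rgb"):
        # Defocus magnitude for this channel at the scene depth;
        # small floor keeps sigma strictly positive.
        defocus = abs(scene_diopters - (focus_diopters + lca[key]))
        out[..., c] = gaussian_filter(img[..., c], sigma=gain * defocus + 1e-3)
    return out

img = np.random.rand(128, 128, 3)
near = chromablur(img, scene_diopters=2.5, focus_diopters=2.0)  # nearer than focus
far = chromablur(img, scene_diopters=1.0, focus_diopters=2.0)   # farther than focus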

Date and Time: 
Wednesday, February 28, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Data-driven Computational Imaging

Topic: 
Data-driven Computational Imaging
Abstract / Description: 

Between ever-increasing pixel counts, ever-cheaper sensors, and the ever-expanding World Wide Web, natural image data has become plentiful. These vast quantities of data, be they high-frame-rate videos or huge curated datasets like ImageNet, stand to substantially improve the performance and capabilities of computational imaging systems. However, using this data efficiently presents its own unique set of challenges. In this talk, I will show how data can be used to develop better priors, improve reconstructions, and enable new capabilities for computational imaging systems.
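One widely used pattern for bringing data-driven priors into reconstruction, not necessarily the one used in the talk, is "plug-and-play": alternate a physics-based data-fidelity step with a denoiser trained on natural images. A minimal sketch, with a Gaussian filter standing in for a learned denoiser:

# Plug-and-play reconstruction sketch: gradient step on the data
# fidelity ||Ax - y||^2, then a denoising step acting as the prior.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
n = 64
A = rng.standard_normal((n // 2, n))      # compressive forward operator
x_true = gaussian_filter(rng.random(n), 3)
y = A @ x_true                            # measurements

x = np.zeros(n)
eta = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from spectral norm
for _ in range(200):
    x = x - eta * A.T @ (A @ x - y)       # data-fidelity gradient step
    x = gaussian_filter(x, 1.0)           # stand-in for a learned denoiser
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))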

Date and Time: 
Wednesday, February 21, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Accelerated Computing for Light Field and Holographic Displays

Topic: 
Accelerated Computing for Light Field and Holographic Displays
Abstract / Description: 

In this talk, I will present two papers recently published at SIGGRAPH Asia 2017. In the first, we present a 4D light field sampling and rendering system for light field displays that supports both foveation and accommodation, reducing rendering cost while maintaining perceptual quality and comfort. In the second, we present a light-field-based computer-generated holography (CGH) rendering pipeline that reproduces high-definition 3D scenes with continuous depth and supports intra-pupil view-dependent occlusion. Our rendering pipeline and Fresnel integral accurately account for diffraction and support various types of reference illumination for holograms.
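The diffraction modeling at the heart of any CGH pipeline can be illustrated with the standard Fresnel transfer-function propagation method, sketched below with numpy FFTs. This is a textbook building block shown for orientation, not the authors' rendering pipeline.

# Numerical Fresnel propagation of a complex field by the
# transfer-function method: multiply the field's spectrum by
# H(fx, fy) = exp(i k z) * exp(-i pi lambda z (fx^2 + fy^2)).
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Propagate a square complex field a distance z (m); dx = pixel pitch (m)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2 * np.pi / wavelength * z) * \
        np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

aperture = np.zeros((512, 512))
aperture[240:272, 240:272] = 1.0                 # square aperture
u = fresnel_propagate(aperture, wavelength=633e-9, z=0.1, dx=8e-6)
intensity = np.abs(u) ** 2                       # diffraction pattern at 10 cm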

Date and Time: 
Wednesday, February 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Street View 2018 - The Newest Generation of Mapping Hardware

Topic: 
Street View 2018 - The Newest Generation of Mapping Hardware
Abstract / Description: 

A brief overview of Street View, from its inception 10 years ago until now, will be presented. Street-level imagery was Google Street View's prime objective in the past; the project has since grown into a state-of-the-art mapping platform. Challenges and solutions in the design and fabrication of the imaging system, and in the optimization of hardware to align with specific software post-processing, will be discussed. Real-world challenges of fielding hardware in 80+ countries will also be addressed.

Date and Time: 
Wednesday, February 7, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Learning where to look in 360 environments

Topic: 
Learning where to look in 360 environments
Abstract / Description: 

Many vision tasks require not just categorizing a well-composed, human-taken photo, but also intelligently deciding "where to look" in order to get a meaningful observation in the first place. We explore how an agent can anticipate the visual effects of its actions, and develop policies for learning to look around actively, both for the sake of a specific recognition task and for generic exploratory behavior. In addition, we examine how a system can learn from unlabeled video to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree panoramas. Finally, to facilitate 360 video processing, we introduce spherical convolution, which allows off-the-shelf deep networks and object detectors to be applied to 360 imagery.
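To give a flavor of why 360 imagery needs special convolutions: a fixed kernel on the sphere projects to a row-dependent footprint in an equirectangular panorama, growing wider toward the poles. The toy sketch below simply stretches a box kernel per row; the actual spherical convolution method learns row-specific kernels that reproduce a target CNN's outputs, which this illustration does not attempt.

# Toy row-adaptive filtering of an equirectangular panorama: the
# horizontal footprint grows as 1/cos(latitude) toward the poles.
import numpy as np
from scipy.ndimage import convolve1d

def sphconv_rows(pano, base_width=3):
    """pano: HxW equirectangular image; widen horizontal kernels toward poles."""
    H, W = pano.shape
    out = np.empty_like(pano)
    lat = (np.arange(H) + 0.5) / H * np.pi - np.pi / 2   # latitude per row
    for r in range(H):
        w = int(min(W, np.ceil(base_width / max(np.cos(lat[r]), 1e-2))))
        kernel = np.ones(w) / w                           # box kernel
        out[r] = convolve1d(pano[r], kernel, mode="wrap") # wrap in longitude
    return out

pano = np.random.rand(64, 128)
smoothed = sphconv_rows(pano)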

Date and Time: 
Wednesday, January 24, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Driverless Anything and the Role of LiDAR

Topic: 
Driverless Anything and the Role of LiDAR
Abstract / Description: 

LiDAR, or light detection and ranging, is a versatile light-based remote sensing technology that has received a great deal of attention in recent times. It has appeared in a number of media outlets and has even prompted public debate about the engineering choices of a well-known electric car company, Tesla Motors. In this talk, the speaker will provide some background on LiDAR and discuss why it is a key link to the future autonomous vehicle ecosystem, as well as its strong connection to power electronics technologies.
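For background, pulsed (time-of-flight) LiDAR ranging reduces to one formula: distance d = c*t/2, where t is the pulse's round-trip time. A minimal illustration with assumed timing numbers:

# Time-of-flight ranging: halve the round-trip path because the
# pulse travels out to the target and back.
C = 299_792_458.0                 # speed of light, m/s

def tof_distance(round_trip_s):
    return C * round_trip_s / 2.0

print(tof_distance(666.7e-9))     # ~666.7 ns round trip -> ~100 m target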

Date and Time: 
Wednesday, January 17, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Advancing Healthcare with AI and VR

Topic: 
Advancing Healthcare with AI and VR
Abstract / Description: 

Quality, cost, and accessibility form an iron triangle that has prevented healthcare from advancing rapidly in the last few decades: improving any one of the three metrics may degrade the other two. However, thanks to recent breakthroughs in artificial intelligence (AI) and virtual reality (VR), this iron triangle can finally be shattered. In this talk, I will share the experience of developing DeepQ, a platform for AI-assisted diagnosis and VR-facilitated surgery. I will present three healthcare initiatives we have undertaken since 2012 (Healthbox, Tricorder, and VR surgery) and explain how AI and VR play pivotal roles in improving diagnosis accuracy and treatment effectiveness, and more specifically how we have dealt with not only big-data analytics but also small-data learning, which is typical in the medical domain. The talk concludes with roadmaps and a list of open research issues in signal processing and AI on the way to precision medicine and surgery.

Date and Time: 
Wednesday, January 10, 2018 - 4:30pm
Venue: 
Packard 101
