SCIEN Talk

SCIEN Talk: Plenoptic Medical Cameras

Topic: 
Plenoptic Medical Cameras
Abstract / Description: 

Optical imaging probes such as otoscopes and laryngoscopes are essential tools that doctors use to see deep into the human body. Until now, however, they have been limited to two-dimensional (2D) views of tissue lesions in vivo, which frequently compromises their diagnostic usefulness. Depth imaging is critically needed in medical diagnostics because most tissue lesions manifest as abnormal 3D structural changes. In this talk, I will describe our recent effort to develop a three-dimensional (3D) plenoptic imaging tool intended to deliver unprecedented sensitivity and specificity in diagnosis. In particular, I will discuss two plenoptic medical cameras, a plenoptic otoscope and a plenoptic laryngoscope, and their applications to in-vivo imaging.
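
As a rough illustration of the depth-decoding idea behind plenoptic imaging (a generic sketch, not the speaker's method), the Python snippet below estimates per-pixel depth from a 4-D light field by shearing the sub-aperture views to simulate focus at a series of candidate depths and selecting, for each pixel, the depth at which the views agree most. The array layout and slope range are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import shift as nd_shift

    def depth_from_light_field(lf, slopes):
        """Return per-pixel indices into `slopes` minimizing cross-view variance.
        lf is a hypothetical 4-D light field indexed as [u, v, y, x]."""
        U, V, H, W = lf.shape
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        best_var = np.full((H, W), np.inf)
        best_idx = np.zeros((H, W), dtype=int)
        for i, s in enumerate(slopes):
            # Refocus at slope s: shear every sub-aperture view toward the center.
            views = np.stack([
                nd_shift(lf[u, v], (s * (u - cu), s * (v - cv)), order=1)
                for u in range(U) for v in range(V)])
            var = views.var(axis=0)  # low variance = views agree = in focus here
            mask = var < best_var
            best_var[mask], best_idx[mask] = var[mask], i
        return best_idx

    # Toy usage with random data standing in for a captured light field.
    lf = np.random.rand(5, 5, 64, 64)
    depth = depth_from_light_field(lf, slopes=np.linspace(-1.0, 1.0, 9))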

Date and Time: 
Wednesday, December 5, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Perceptual Modeling with Multimodal Sensing

Topic: 
Perceptual Modeling with Multimodal Sensing
Abstract / Description: 

Research on human perception has enabled many visual applications in computer graphics that use computational resources efficiently to deliver a high-quality experience within the limitations of the hardware. Beyond vision, humans perceive their surroundings using a variety of senses to build a mental model of the world and act upon it. This mental image is often incomplete or incorrect, which may have safety implications. Since we cannot directly see inside the head, we need to read the indirect signals projected outside of it. In the first part of the talk, I will show how perceptual modeling can be used to overcome and exploit the limitations of one specific human sense: vision. Then, I will describe how we can build sensors to observe other human interactions, connected first with physical touch and then with eye-gaze patterns. Finally, I will outline how such readings can be used to teach computers to understand human behavior, to make predictions, and to provide assistance or safety.

Date and Time: 
Wednesday, November 28, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Photo Forensics from JPEG Coding Artifacts

Topic: 
Photo Forensics from JPEG Coding Artifacts
Abstract / Description: 

The past few years have seen a startling and troubling rise in the fake-news phenomenon, in which everyone from individuals to state-sponsored entities produces and distributes misinformation, which is then widely promoted and disseminated on social media. The implications of fake news range from a misinformed public to horrific violence and an existential threat to democracy. At the same time, recent and rapid advances in machine learning are making it easier than ever to create sophisticated and compelling fake images and videos, making the fake-news phenomenon even more powerful and dangerous. I will start by providing a broad overview of the field of image and video forensics, and then I will describe in detail a suite of image-forensic techniques that explicitly detect inconsistencies in JPEG coding artifacts.
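
As a deliberately simplified illustration of this class of techniques (not the speaker's actual suite): JPEG compresses images in 8x8 blocks, so a spliced region that was previously compressed on a differently aligned grid can betray itself through a locally inconsistent blocking phase. The helper names and window size below are hypothetical.

    import numpy as np

    def grid_offset(gray):
        """Estimate the (row, col) phase of the 8x8 blocking grid by finding
        the offset with the strongest average pixel-difference energy."""
        dy = np.abs(np.diff(gray.astype(float), axis=0))
        dx = np.abs(np.diff(gray.astype(float), axis=1))
        row_energy = [dy[r::8].mean() for r in range(8)]
        col_energy = [dx[:, c::8].mean() for c in range(8)]
        return int(np.argmax(row_energy)), int(np.argmax(col_energy))

    def inconsistent_windows(gray, win=64):
        """Flag windows whose local grid phase disagrees with the global one."""
        ref = grid_offset(gray)
        H, W = gray.shape
        return [(y, x)
                for y in range(0, H - win + 1, win)
                for x in range(0, W - win + 1, win)
                if grid_offset(gray[y:y + win, x:x + win]) != ref]

    # Toy usage on random data standing in for a decoded grayscale JPEG.
    suspects = inconsistent_windows(np.random.rand(256, 256))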

Date and Time: 
Wednesday, November 14, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Wavefront coding techniques and resolution limits for light field microscopy

Topic: 
Wavefront coding techniques and resolution limits for light field microscopy
Abstract / Description: 

Light field microscopy is a rapid, scan-less volume imaging technique that requires only a standard wide-field fluorescence microscope and a microlens array. Unlike scanning microscopes, which collect volumetric information over time, the light field microscope captures volumes synchronously in a single photographic exposure, and at speeds limited only by the frame rate of the image sensor. This is made possible by the microlens array, which focuses light onto the camera sensor so that each position in the volume is mapped onto the sensor as a unique light intensity pattern. These intensity patterns are the position-dependent point response functions of the light field microscope. With prior knowledge of these point response functions, it is possible to "decode" 3-D information from a raw light field image and computationally reconstruct a full volume. In this talk, I present an optical model for light field microscopy based on wave optics that accurately models light field point response functions. I describe a GPU-accelerated iterative algorithm that solves for these volumes, and discuss priors that are useful for reconstructing biological specimens. I then explore the diffraction limit that applies to light field microscopy, and how it gives rise to position-dependent resolution limits for this microscope. I'll explain how these limits differ from more familiar resolution metrics commonly used in 3-D scanning microscopy, like the Rayleigh limit and the optical transfer function (OTF). Using this theory of resolution limits for the light field microscope, I explore new wavefront coding techniques that can modify the light field resolution limits and can address certain common reconstruction artifacts, at least to a degree. Certain resolution trade-offs exist that suggest that light field microscopy is just one of potentially many useful forms of computational microscopy. Finally, I describe our application of light field microscopy in neuroscience, where we have used it to record calcium activity in populations of neurons within the brains of awake, behaving animals.
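
The abstract does not name its solver, but the standard scheme for this kind of reconstruction is a Richardson-Lucy-style multiplicative update. Below is a minimal CPU-only sketch under the simplifying assumption that each depth slice maps to the sensor by convolution with a normalized, depth-dependent point response function; all names and toy data are illustrative, and the talk's GPU-accelerated algorithm may differ in detail.

    import numpy as np
    from scipy.signal import fftconvolve

    def forward(vol, psfs):
        """Project a volume (Z, H, W) to a 2-D sensor image via per-depth PSFs."""
        return sum(fftconvolve(vol[z], psfs[z], mode="same") for z in range(len(psfs)))

    def richardson_lucy_volume(img, psfs, n_iter=20, eps=1e-8):
        """Multiplicative update: vol *= backproject(img / forward(vol))."""
        vol = np.ones((len(psfs),) + img.shape)   # flat nonnegative start
        for _ in range(n_iter):
            ratio = img / (forward(vol, psfs) + eps)
            for z in range(len(psfs)):
                # Back-project with the flipped PSF (the adjoint of forward).
                vol[z] *= fftconvolve(ratio, psfs[z][::-1, ::-1], mode="same")
        return vol

    # Toy usage: random normalized PSFs and a synthetic measurement.
    psfs = [p / p.sum() for p in np.random.rand(4, 9, 9)]
    img = forward(np.random.rand(4, 64, 64), psfs)
    vol = richardson_lucy_volume(img, psfs)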

Date and Time: 
Wednesday, October 31, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Is it real? Deep Neural Face Reconstruction and Rendering

Topic: 
Is it real? Deep Neural Face Reconstruction and Rendering
Abstract / Description: 

A broad range of applications in visual effects, computer animation, autonomous driving, and man-machine interaction heavily depend on robust and fast algorithms to obtain high-quality reconstructions of our physical world in terms of geometry, motion, reflectance, and illumination. In particular, the increasing popularity of virtual, augmented, and mixed reality devices brings a rising demand for real-time, low-latency solutions.

This talk covers data-parallel optimization and state-of-the-art machine learning techniques to tackle the underlying 3D and 4D reconstruction problems based on novel mathematical models and fast algorithms. The particular focus of this talk is on self-supervised face reconstruction from a collection of unlabeled in-the-wild images. The proposed approach can be trained end-to-end without dense annotations by fusing a convolutional encoder with a differentiable expert-designed renderer and a self-supervised training loss.
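
To make the training pattern concrete, here is a schematic PyTorch sketch of such a self-supervised analysis-by-synthesis loop. A toy differentiable "renderer" (a colored 2-D Gaussian) stands in for the expert-designed face renderer, and the tiny encoder and data are placeholders; only the overall structure (convolutional encoder, differentiable renderer, photometric loss on unlabeled images) reflects the approach described above.

    import torch
    import torch.nn as nn

    SIZE = 64
    ys, xs = torch.meshgrid(torch.linspace(0, 1, SIZE),
                            torch.linspace(0, 1, SIZE), indexing="ij")

    def render(p):
        """Toy differentiable renderer: a colored Gaussian blob whose center,
        width, and RGB color come from the (sigmoid-squashed) parameters."""
        p = torch.sigmoid(p)
        cx, cy = p[:, 0].view(-1, 1, 1), p[:, 1].view(-1, 1, 1)
        sigma = 0.05 + 0.25 * p[:, 2].view(-1, 1, 1)
        blob = torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        return blob.unsqueeze(1) * p[:, 3:6].view(-1, 3, 1, 1)  # (B, 3, H, W)

    encoder = nn.Sequential(  # CNN regressing renderer parameters from pixels
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(32 * 16 * 16, 6))

    opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    images = render(torch.randn(8, 6))        # unlabeled "photos"
    for step in range(100):                   # no dense annotations anywhere
        loss = (render(encoder(images)) - images).abs().mean()  # photometric L1
        opt.zero_grad(); loss.backward(); opt.step()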

The resulting reconstructions are the foundation for advanced video editing effects, such as photo-realistic re-animation of portrait videos. The core of the proposed approach is a generative rendering-to-video translation network that takes computer graphics renderings as input and generates photo-realistic modified target videos that mimic the source content. With the ability to freely control the underlying parametric face model, we are able to demonstrate a large variety of video rewrite applications. For instance, we can reenact the full head using interactive user-controlled editing and realize high-fidelity visual dubbing.

Date and Time: 
Wednesday, October 24, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: Computational microscopy of dynamic order across biological scales

Topic: 
Computational microscopy of dynamic order across biological scales
Abstract / Description: 

Living systems are characterized by the emergent behavior of ordered components. Imaging technologies that reveal the dynamic arrangement of organelles in a cell, and of cells in a tissue, are needed to understand this emergent behavior. I will present an overview of the challenges in imaging dynamic order at the scales of cells and tissue, and discuss advances in computational label-free microscopy that overcome these challenges.

Date and Time: 
Wednesday, October 17, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk: How to train neural networks on LiDAR point clouds

Topic: 
How to train neural networks on LiDAR point clouds
Abstract / Description: 

Accurate LiDAR classification and segmentation are required for developing critical ADAS and autonomous-vehicle components; mainly, they are required for high-definition mapping and for developing perception and path/motion-planning algorithms. This talk will cover best practices for accurately annotating and benchmarking your AV/ADAS models against LiDAR point-cloud ground-truth training data.
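
The talk focuses on annotation and benchmarking, but for readers wondering what training on raw point clouds looks like, below is a minimal, hypothetical PyTorch sketch of the widely used PointNet-style pattern: a shared per-point MLP followed by a symmetric max-pool, which makes the network invariant to point ordering. Class count, shapes, and data are placeholders.

    import torch
    import torch.nn as nn

    class PointNetClassifier(nn.Module):
        def __init__(self, num_classes=4):
            super().__init__()
            self.point_mlp = nn.Sequential(   # applied to every point alike
                nn.Linear(3, 64), nn.ReLU(),
                nn.Linear(64, 128), nn.ReLU())
            self.head = nn.Linear(128, num_classes)

        def forward(self, pts):               # pts: (batch, n_points, xyz)
            feats = self.point_mlp(pts)       # (batch, n_points, 128)
            pooled = feats.max(dim=1).values  # order-invariant global feature
            return self.head(pooled)

    # Toy training step on random clouds standing in for annotated sweeps.
    model = PointNetClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    pts, labels = torch.randn(16, 1024, 3), torch.randint(0, 4, (16,))
    loss = nn.functional.cross_entropy(model(pts), labels)
    opt.zero_grad(); loss.backward(); opt.step()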

Date and Time: 
Wednesday, October 10, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN & EE 292E: The challenge of large-scale brain imaging

Topic: 
The challenge of large-scale brain imaging
Abstract / Description: 

Advanced optical microscopy techniques have enabled the recording and stimulation of large populations of neurons deep within living, intact animal brains. I will present a broad overview of these techniques, and discuss challenges that still remain in performing large-scale imaging with high spatio-temporal resolution, along with various strategies that are being adopted to address these challenges.

Date and Time: 
Wednesday, October 3, 2018 - 4:30pm
Venue: 
Packard 101

SCIEN Talk, eWear seminar: 'Immersive Technology and AI' with a focus on mobile AR research

Topic: 
'Immersive Technology and AI' with a focus on mobile AR research
Abstract / Description: 

Talk Title: "Saliency in VR: How Do People Explore Virtual Environments?", presented by Vincent Sitzmann

Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.
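
As a generic illustration (not the authors' pipeline) of how such gaze recordings become a saliency map for a panorama: bin each gaze direction into an equirectangular grid and blur to approximate foveal extent. The resolution and blur width below are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency_map(lon, lat, height=256, sigma_deg=5.0):
        """lon in [-180, 180), lat in [-90, 90]: gaze angles in degrees."""
        width = 2 * height
        cols = ((lon + 180.0) / 360.0 * width).astype(int) % width
        rows = np.clip(((lat + 90.0) / 180.0 * height).astype(int), 0, height - 1)
        hist = np.zeros((height, width))
        np.add.at(hist, (rows, cols), 1.0)    # accumulate gaze samples
        # "wrap" handles the horizontal seam; a full treatment would avoid
        # wrapping vertically and would correct for latitude distortion.
        sal = gaussian_filter(hist, sigma=sigma_deg / 360.0 * width, mode="wrap")
        return sal / (sal.max() + 1e-8)

    # Toy usage with random gaze samples standing in for recorded data.
    sal = saliency_map(np.random.uniform(-180, 180, 5000),
                       np.random.uniform(-60, 60, 5000))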

Talk Title: "Immersive Technology and AI" with focus on mobile AR research

Abstract: not available

Date and Time: 
Thursday, May 31, 2018 - 3:30pm
Venue: 
Spilker 232
