SCIEN Talk: Designing and assessing near-eye displays to increase user inclusivity

Topic: 
Designing and assessing near-eye displays to increase user inclusivity
Abstract / Description: 

Recent years have seen impressive growth in near-eye display systems, which are the basis of most virtual and augmented reality experiences. There is, however, a unique set of challenges to designing a display system that is literally strapped to the user's face. With an estimated half of all adults in the United States requiring some level of visual correction, maximizing inclusivity for near-eye displays is essential. I will describe work that combines principles from optics, optometry, and visual perception to identify and address major limitations of near-eye displays, both for users with normal vision and for those who require common corrective lenses. I will also describe ongoing work assessing the potential for near-eye displays to assist people with less common visual impairments in performing day-to-day tasks.

Date and Time: 
Wednesday, January 11, 2017 - 4:30pm to 5:15pm
Venue: 
Packard 101

SCIEN Talk: Electronic augmentation of body functions

Topic: 
Electronic augmentation of body functions: progress in electro-neural interfaces
Abstract / Description: 

The electrical nature of neural signaling allows efficient bi-directional electrical communication with the nervous system. Currently, electro-neural interfaces are used for partial restoration of sensory functions such as hearing and sight, actuation of prosthetic limbs, restoration of tactile sensitivity, and enhancement of tear secretion, among other applications. Deep brain stimulation helps control tremor in patients with Parkinson's disease, improves muscle control in dystonia, and treats other neurological disorders. With technological advances and progress in our understanding of neural systems, these interfaces may allow not only restoration or augmentation of lost functions, but also expansion of our natural capabilities: sensory, cognitive, and others. I will review the state of the field and future directions of technological development.

Date and Time: 
Tuesday, December 6, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: Towards Socially-aware AI

Topic: 
Towards Socially-aware AI
Abstract / Description: 

Over the past sixty years, intelligent machines have made great progress in playing games, tagging images in isolation, and, recently, making decisions for self-driving vehicles. Despite these advancements, they are still far from making decisions in social scenes and effectively assisting humans in public spaces such as terminals, malls, campuses, or any crowded urban environment. To overcome these limitations, I claim that we need to empower machines with social intelligence, i.e., the ability to get along well with others and facilitate mutual cooperation. This is crucial to designing future generations of smart spaces that adapt to the behavior of humans for efficiency, and to developing autonomous machines that assist in crowded public spaces (e.g., delivery robots or self-navigating Segways).

In this talk, I will present my work towards socially-aware machines that can understand human social dynamics and learn to forecast them. First, I will highlight the machine vision techniques behind understanding the behavior of more than 100 million individuals captured by multi-modal cameras in urban spaces. I will show how to use sparsity-promoting priors to extract meaningful information about human behavior. Second, I will introduce a new deep learning method to forecast human social behavior. The causality behind human behavior is an interplay between observable and non-observable cues (e.g., intentions). For instance, when humans walk through crowded urban environments such as a busy train terminal, they obey a large number of (unwritten) common-sense rules and comply with social conventions. They typically avoid crossing groups and keep a personal distance from those around them. I will present detailed insights on how to learn these interactions from millions of trajectories. I will describe a new recurrent neural network that can jointly reason over correlated sequences and forecast human trajectories in crowded scenes; this opens new avenues of research in learning the causalities behind the world we observe. I will conclude my talk by mentioning ongoing work applying these techniques to social robots and to future generations of smart hospitals.
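
As a loose illustration of the forecasting component described above, here is a minimal sketch (hypothetical names and sizes, not the speaker's model) of an LSTM that encodes a pedestrian's observed positions and regresses the next displacement; the social coupling between neighboring pedestrians' hidden states that the talk highlights is deliberately omitted.

```python
import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Minimal single-pedestrian forecaster: encode observed (x, y)
    positions with an LSTM and regress the next displacement.
    The social pooling across nearby pedestrians described in the
    talk is deliberately omitted from this sketch."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=2, hidden_size=hidden_size,
                               batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # next (dx, dy)

    def forward(self, observed):       # observed: (batch, T_obs, 2)
        _, (h, _) = self.encoder(observed)
        return self.head(h[-1])        # (batch, 2)

# Hypothetical usage: forecast one step from 8 observed positions.
model = TrajectoryLSTM()
obs = torch.randn(16, 8, 2)            # 16 pedestrians, 8 timesteps each
next_step = model(obs)                 # predicted next displacement
```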

More Information: http://web.stanford.edu/~alahi/

Date and Time: 
Tuesday, November 29, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: Designing a smart wearable camera for blind and visually impaired people

Topic: 
Designing a smart wearable camera for blind and visually impaired people
Abstract / Description: 

Horus Technology was founded in July 2014 with the goal of creating a smart wearable camera for blind and visually impaired people, featuring intelligent algorithms that can understand the environment around the user and describe it out loud. Two years later, Horus has a working prototype being tested by a number of blind people in Europe and North America. Harnessing the power of portable GPUs, stereo vision, and deep learning algorithms, Horus can read text in different languages, learn and recognize faces and objects, and identify obstacles. In designing a wearable device, we faced a number of challenges and difficult choices. We will describe our system and our design choices for both software and hardware, and we will end with a small demo of Horus's capabilities.

Date and Time: 
Tuesday, November 15, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: Quantum dot-based image sensors for cutting-edge commercial multispectral cameras

Topic: 
Quantum dot-based image sensors for cutting-edge commercial multispectral cameras
Abstract / Description: 

This work presents the development of a quantum dot-based photosensitive film engineered to be integrated on standard CMOS process wafers. It enables the design of exceptionally high-performance, reliable image sensors. Quantum dot solids absorb light much more strongly than typical silicon-based photodiodes do, and with the ability to tune the effective material bandgap, quantum dot-based imagers enable higher quantum efficiency over extended spectral bands, in both the visible and IR regions of the spectrum. Moreover, a quantum dot-based image sensor enables desirable functions such as ultra-small pixels with low crosstalk, high full-well capacity, global shutter, and wide dynamic range at a relatively low manufacturing cost. At InVisage, we have optimized the manufacturing process flow and are now able to produce high-end image sensors for both visible and NIR in quantity.
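
As a rough guide to how the bandgap tuning mentioned above works, a standard first-order model (the Brus effective-mass approximation, not specific to InVisage's film) relates a quantum dot's optical gap to its radius R: shrinking the dot raises the gap through quantum confinement, which is what lets the absorption band be placed across the visible and IR.

```latex
% Brus effective-mass approximation (illustrative; the parameters
% m_e^*, m_h^*, and \varepsilon depend on the quantum dot chemistry):
E_g(R) \approx E_g^{\mathrm{bulk}}
  + \frac{\hbar^2 \pi^2}{2R^2}\left(\frac{1}{m_e^*} + \frac{1}{m_h^*}\right)
  - \frac{1.8\, e^2}{4\pi \varepsilon \varepsilon_0 R}
```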

The Stanford Center for Image Systems Engineering (SCIEN) is a partnership between the Stanford School of Engineering and technology companies developing imaging systems for the enhancement of human communication.

Date and Time: 
Wednesday, November 9, 2016 - 4:30pm to 5:15pm
Venue: 
Packard 101

SCIEN Talk: Smart pixel imaging with computational arrays

Topic: 
Smart pixel imaging with computational arrays
Abstract / Description: 

This talk will review architectures for computational imaging arrays in which algorithms and cameras are co-designed. The talk will focus on novel digital readout integrated circuits (DROICs) that achieve snapshot on-chip high dynamic range and object tracking, whereas most commercial systems require a multiple-exposure acquisition.
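
For context, here is a minimal sketch (hypothetical names, not the speaker's circuit) of the conventional multiple-exposure HDR pipeline that such a snapshot DROIC would replace: several frames at different exposure times are merged into one radiance estimate, which costs acquisition time and breaks down for moving objects.

```python
import numpy as np

def merge_exposures(frames, exposure_times, saturation=0.98):
    """Merge N registered frames (values normalized to [0, 1]) taken at
    different exposure times into one linear radiance estimate, using a
    triangle weight that favors well-exposed pixels and ignores clipped
    ones. This is the multi-frame step a snapshot HDR readout avoids."""
    frames = np.asarray(frames, dtype=np.float64)          # (N, H, W)
    t = np.asarray(exposure_times, dtype=np.float64)[:, None, None]
    w = 1.0 - 2.0 * np.abs(frames - 0.5)                   # triangle weights
    w[frames >= saturation] = 0.0                          # drop clipped pixels
    return (w * frames / t).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-8)

# Hypothetical usage: three bracketed exposures of the same scene.
frames = np.random.rand(3, 4, 4)
hdr = merge_exposures(frames, exposure_times=[1 / 200, 1 / 50, 1 / 12])
```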

Date and Time: 
Tuesday, November 1, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: Optical Probing for Analyzing Light Transport

Topic: 
Optical Probing for Analyzing Light Transport
Abstract / Description: 

Active illumination techniques enable self-driving cars to detect and avoid hazards, optical microscopes to see deep into volumetric specimens, and light stages to digitally capture the shape and appearance of subjects. These active techniques work by using controllable lights to emit structured illumination patterns into an environment, and sensors to detect and process the light reflected back in response. Although such techniques confer many unique imaging capabilities, they often require long acquisition and processing times, rely on predictive models for the way light interacts with a scene, and cease to function when exposed to bright ambient sunlight.

In this talk, we introduce a generalized form of active illumination—known as optical probing—that provides a user with unprecedented control over which light paths contribute to a photo. The key idea is to project a sequence of illumination patterns onto a scene, while simultaneously using a second sequence of mask patterns to physically block the light received at select sensor pixels. This all-optical technique enables RAW photos to be captured in which specific light paths are blocked, attenuated, or enhanced. We demonstrate experimental probing prototypes with the ability to (1) record live direct-only or indirect-only video streams of a scene, (2) capture the 3D shape of objects in the presence of complex transport properties and strong ambient illumination, and (3) overcome the multi-path interference problem associated with time-of-flight sensors.
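
A classical special case of this idea, high-frequency illumination separation (Nayar et al. 2006), uses patterned illumination alone, without the sensor-side masks that optical probing adds. A minimal sketch of that special case, assuming a stack of captures under shifted binary patterns with roughly 50% duty cycle:

```python
import numpy as np

def separate_direct_global(captures):
    """Separate direct and global (indirect) light from a stack of photos
    taken under shifted high-frequency binary patterns with ~50% duty
    cycle. At each pixel, the maximum over the stack sees direct plus
    half the global light, while the minimum sees half the global only."""
    stack = np.asarray(captures, dtype=np.float64)  # (N, H, W)
    i_max = stack.max(axis=0)
    i_min = stack.min(axis=0)
    return i_max - i_min, 2.0 * i_min               # (direct, global)

# Hypothetical usage with a stack of 25 pattern-shifted captures.
captures = np.random.rand(25, 4, 4)
direct, global_light = separate_direct_global(captures)
```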

Date and Time: 
Tuesday, October 18, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: The Soul of a New Camera

Topic: 
The Soul of a New Camera: The design of Facebook's Surround Open Source 3D-360 video camera
Abstract / Description: 

Around a year ago we set out to create an open-source reference design for a 3D-360 camera. In nine months, we had designed and built the camera and published the specs and code. Our team leveraged a series of maturing technologies in this effort. Advances in and availability of sensor technology, 20+ years of computer vision algorithm development, 3D printing, rapid design prototyping, and computational photography allowed our team to move extremely fast. We will delve into the roles each of these technologies played in the design of the camera, giving an overview of the system components and discussing the tradeoffs made during the design process. The engineering complexities and technical elements of 360 stereoscopic video capture will be discussed as well. We will end with some demos of the system and its output.

Date and Time: 
Wednesday, October 12, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

SCIEN Talk: Human-centric optical design: a key for next-generation AR and VR optics

Topic: 
Human-centric optical design: a key for next-generation AR and VR optics
Abstract / Description: 

The ultimate wearable display is an information device that people can use all day. It should be as forgettable as a pair of glasses or a watch, but more useful than a smartphone. It should be small, light, low-power, high-resolution, and have a large field of view (FOV). Oh, and one more thing: it should be able to switch between VR and AR.

These requirements pose challenges for hardware and, most importantly, for optical design. In this talk, I will review existing AR and VR optical architectures and explain why it is difficult to create a small, light, high-resolution display that also has a wide FOV. Because comfort is king, new optical designs for next-generation AR and VR systems should be guided by an understanding of the capabilities and limitations of the human visual system.
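
To make the resolution-versus-FOV tension concrete, a back-of-envelope calculation (illustrative numbers, not from the talk): matching foveal acuity of roughly one arcminute per pixel across a 100-degree horizontal FOV would require on the order of 6,000 pixels per eye in that dimension, far beyond today's near-eye panels.

```latex
% Pixels needed to match ~1 arcmin acuity over a wide FOV (illustrative):
N_{\mathrm{pixels}} \approx \frac{\mathrm{FOV}}{\mathrm{acuity}}
  = \frac{100^\circ}{(1/60)^\circ} = 6000
```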

Date and Time: 
Wednesday, October 5, 2016 - 4:30pm to 5:30pm
Venue: 
Packard 101

Claude E. Shannon's 100th Birthday

Topic: 
Centennial year of the 'Father of the Information Age'
Abstract / Description: 

From the UCLA Shannon Centennial Celebration website:

Claude Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon founded information theory and is perhaps equally well known for founding both digital computer and digital circuit design theory. Shannon also laid the foundations of cryptography and did basic work on code breaking and secure telecommunications.

Events taking place around the world are listed by the IEEE Information Theory Society.

Date and Time: 
Saturday, April 30, 2016 - 12:00pm
Venue: 
N/A
