This area combines advanced computational and algorithmic solutions with next-generation hardware and systems to unlock new paradigms in sensing, imaging, and displays. Applications span AR/VR, machine perception for autonomy, remote sensing (of Earth, space, and oceans), biomedical systems and imaging, and multimedia systems. The information sources being processed or fused include images, video, audio, 3D images and point clouds, electromagnetic signals (from MHz to THz), mm-wave radar, ultrasound, biomedical signals, multimedia, and others. The techniques draw from computational imaging, array processing, sensor fusion, synthetic aperture systems, coherent processing, and computed tomography, and often combine machine-learning and data-driven approaches with physics- and model-based solutions to obtain new insights and capabilities.
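As a concrete illustration of the array-processing and coherent-processing techniques named above, the sketch below implements a toy narrowband delay-and-sum beamformer for a uniform linear array. All parameters (element count, spacing, source angle, noise level) are illustrative assumptions, not specifics from this research area.

```python
import numpy as np

def steering_vector(n_elems, spacing, wavelength, angle_rad):
    """Narrowband steering vector for a uniform linear array."""
    k = 2 * np.pi / wavelength
    positions = np.arange(n_elems) * spacing
    return np.exp(1j * k * positions * np.sin(angle_rad))

def delay_and_sum(snapshots, n_elems, spacing, wavelength, scan_angles):
    """Beamformer output power over a grid of scan angles.

    snapshots: (n_elems, n_snapshots) complex array observations.
    """
    powers = []
    for theta in scan_angles:
        w = steering_vector(n_elems, spacing, wavelength, theta) / n_elems
        y = w.conj() @ snapshots          # coherent sum across elements
        powers.append(np.mean(np.abs(y) ** 2))
    return np.array(powers)

# Simulate one source at +20 degrees with additive complex Gaussian noise.
rng = np.random.default_rng(0)
n_elems, wavelength = 8, 1.0
spacing = wavelength / 2                  # half-wavelength spacing
true_angle = np.deg2rad(20.0)
n_snap = 200
signal = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
a = steering_vector(n_elems, spacing, wavelength, true_angle)
x = np.outer(a, signal) + 0.1 * (
    rng.standard_normal((n_elems, n_snap))
    + 1j * rng.standard_normal((n_elems, n_snap))
)

scan = np.deg2rad(np.linspace(-90, 90, 181))   # 1-degree grid
p = delay_and_sum(x, n_elems, spacing, wavelength, scan)
est = np.rad2deg(scan[np.argmax(p)])           # peak locates the source
```

Scanning the steering angle and picking the power peak recovers the source direction; real systems in this space build on the same coherent-summation principle with far richer apertures and signal models.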
In addition to new signal processing and computational techniques, this area also explores next-generation hardware systems to enable novel sensing, perception, and display solutions. Examples include new optical display and projection systems, in-sensor processing for optical imaging, novel optical sensor systems, multi- and hybrid-physics approaches that combine RF, acoustic, and optical techniques for imaging, photoacoustics, large-aperture distributed mm-wave imaging systems, sensor-fusion hardware platforms that combine vision with RF/mm-wave signals, and new airborne ultrasound systems.
This area also covers AR/VR/MR engineering at the systems level, leveraging end-to-end design of display and imaging optics and algorithms while taking human factors and applied vision science into consideration. Examples include novel optics design, eye and face tracking systems, neural and perceptually based rendering algorithms, AI at the (sensor) edge, avatar capture, modeling & rendering, SLAM and tracking, sensor fusion, and applied vision science.
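To make the sensor-fusion and tracking themes above concrete, here is a minimal sketch: a 1-D constant-velocity Kalman filter that fuses two noisy position sensors into one track. This is a generic textbook example under made-up noise levels, not a description of any specific system in this area.

```python
import numpy as np

def kalman_fuse(z1, z2, r1, r2, dt=0.01, q=1e-3):
    """Fuse two position-measurement streams into a position/velocity track."""
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])            # process-noise covariance
    H = np.array([[1.0, 0.0], [1.0, 0.0]])         # both sensors observe position
    R = np.diag([r1, r2])                          # per-sensor measurement noise
    x = np.zeros(2)                                # state: [position, velocity]
    P = np.eye(2)
    track = []
    for m1, m2 in zip(z1, z2):
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        z = np.array([m1, m2])                     # stacked measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (z - H @ x)                    # update
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

# Ground truth: position moving at 1 unit/s, sampled at 100 Hz.
rng = np.random.default_rng(1)
t = np.arange(0, 2, 0.01)
truth = 1.0 * t
z1 = truth + 0.05 * rng.standard_normal(t.size)    # sensor 1 (less noisy)
z2 = truth + 0.20 * rng.standard_normal(t.size)    # sensor 2 (noisier)
est = kalman_fuse(z1, z2, r1=0.05**2, r2=0.20**2)
```

The filter weights each sensor by its noise covariance, so the fused track is more accurate than either raw stream; production AR/VR trackers apply the same idea to IMU, camera, and radar streams with higher-dimensional state.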