Professor Gordon Wetzstein and team use AI to revolutionize real-time holography.
"The big challenge has been that we don't have algorithms that are good enough to model all the physical aspects of how light propagates in a complex optical system such as AR eyeglasses," says Wetzstein. "The algorithms we have at the moment are limited in two ways. They're computationally inefficient, so it takes too long to constantly update the images. And in practice, the images don't look that good."
Wetzstein says the new approach makes big advances in both real-time image generation and image quality. In head-to-head comparisons, he says, their "HoloNet" neural network generated clearer and more accurate 3-D images, on the spot, than traditional holographic algorithms.
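To make the contrast concrete, the traditional approach the team improves on is iterative: the computer repeatedly simulates how light would propagate from a candidate hologram, compares the result to the target image, and adjusts, which is why it is too slow for constant updates. The sketch below is a generic Gerchberg–Saxton phase-retrieval loop with single-FFT (Fraunhofer) propagation, a common textbook baseline; it is an illustrative assumption, not the team's actual algorithm or code, and a trained network like HoloNet would instead produce the hologram in one forward pass.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iters=50):
    """Classic iterative phase retrieval: find a phase-only hologram
    whose far-field (FFT) intensity approximates the target image.
    Each iteration propagates forward, imposes the target amplitude,
    and propagates back -- this per-frame loop is what makes the
    traditional approach slow for real-time display."""
    rng = np.random.default_rng(0)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iters):
        field = np.exp(1j * phase)                     # phase-only SLM field
        far = np.fft.fft2(field)                       # propagate to image plane
        far = target_amp * np.exp(1j * np.angle(far))  # keep phase, force target amplitude
        phase = np.angle(np.fft.ifft2(far))            # back-propagate, keep phase only
    return phase

# Toy target: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0

holo = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * holo)))  # reconstructed image intensity
recon /= recon.max()
```

A learned model replaces the whole loop with a single network evaluation per frame, which is the source of the real-time speedup the article describes.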
That has big practical applications for virtual and augmented reality, well beyond the obvious arenas of gaming and virtual meetings. Real-time holography has tremendous potential for education, training, and remote work. An aircraft mechanic, for example, could learn by exploring the inside of a jet engine thousands of miles away, or a cardiac surgeon could practice a particularly challenging procedure.
In addition to Wetzstein, the system was created by Yifan Peng, a postdoctoral fellow in computer science; Suyeon Choi, an electrical engineering PhD candidate; Nitish Padmanaban, PhD '20 in electrical engineering; and Jonghyun Kim, a senior research scientist at Nvidia Corp.
Excerpted from: "Using AI to Revolutionize Real-Time Holography", August 17, 2020