In this talk, I will present a novel framework for applying deep neural networks (DNNs) to the universal discrete denoising problem. DNNs have recently shown remarkable performance improvements in diverse applications, and most of these successes are based on the supervised learning framework. It is not straightforward, however, to apply that framework to universal discrete denoising, in which a denoiser tries to estimate unknown finite-valued clean data from its noisy observation: the ground-truth label for the denoiser is precisely the clean data to be estimated, which is clearly not available for training. In this work, I follow the framework of DUDE (Discrete Universal DEnoiser) and devise a novel way of training a DNN as a discrete denoiser based solely on the given noisy observation. The key idea is to utilize an unbiased estimate of the true loss of a denoiser and to define a novel objective function for the DNN based on the resulting "pseudo-labels". The resulting scheme, dubbed Neural DUDE, significantly outperforms the original DUDE, the previous state of the art, on several discrete denoising problems in our experiments. Furthermore, Neural DUDE overcomes a critical limitation of DUDE: it is much more robust to the choice of the hyper-parameter and offers a concrete way of selecting the best hyper-parameter for the given data. This property makes Neural DUDE an attractive choice in practice. Finally, I will conclude with some potential future research directions, such as extending the framework to the denoising of continuous-valued data.
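To make the "unbiased estimate of the true loss" idea concrete, here is a minimal numerical sketch for a binary symmetric channel with Hamming loss. All names (`Pi`, `Lam`, `rho`, the set of single-symbol denoisers) are our own illustrative choices, not notation from the talk: given the channel matrix, one can solve for an estimated-loss matrix that depends only on the noisy symbol yet matches the true expected loss of each candidate denoiser, and rows of such a matrix are what could serve as pseudo-labels for training.

```python
import numpy as np

# Illustrative sketch of the unbiased-loss idea, for a binary symmetric
# channel (BSC); an assumption-laden toy, not the paper's exact recipe.
delta = 0.1                                  # assumed crossover probability
Pi = np.array([[1 - delta, delta],
               [delta, 1 - delta]])          # channel matrix Pi[x, z]
Lam = np.array([[0., 1.],
                [1., 0.]])                   # Hamming loss Lambda[x, xhat]

# Candidate single-symbol denoisers s: z -> xhat.
# Columns: say-what-you-see, always-0, always-1, flip.
S = np.array([[0, 0, 1, 1],
              [1, 0, 1, 0]])                 # S[z, k] = xhat for denoiser k

# True expected loss of denoiser k given clean symbol x:
#   g[x, k] = sum_z Pi[x, z] * Lambda[x, S[z, k]]
g = np.array([[Pi[x] @ Lam[x, S[:, k]] for k in range(S.shape[1])]
              for x in range(2)])

# Observable estimated-loss matrix rho = Pi^{-1} g, so that
#   E[rho[Z, k] | x] = g[x, k]
# i.e. an unbiased estimate of the true loss, computable from the
# noisy symbol Z alone (the clean x never enters rho at test time).
rho = np.linalg.inv(Pi) @ g

# Unbiasedness check: averaging rho over the channel recovers g.
print(np.allclose(Pi @ rho, g))              # True
```

A transformed version of each row of `rho` (shifted and normalized to be a valid probability-like target) can then play the role of a pseudo-label in a cross-entropy objective, which is the spirit of training a DNN denoiser without clean data.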
The Information Theory Forum (IT-Forum) at Stanford ISL is an interdisciplinary academic forum that focuses on the mathematical aspects of information processing. While the primary emphasis is on information theory, we also welcome talks from researchers in signal processing, learning and statistical inference, and control and optimization, as well as from industrial affiliates in these fields. The forum is typically held in Packard 202 every Friday at 1:00 pm during the academic year.
The Information Theory Forum is organized by graduate students Jiantao Jiao and Yanjun Han. To suggest speakers, please contact either of the organizers.
Taesup Moon received the B.S. degree from Seoul National University, Seoul, Korea, in 2002 and the M.S. and Ph.D. degrees from Stanford University, Stanford, CA, USA, in 2004 and 2008, respectively, all in Electrical Engineering. From 2008 to 2012, he was a Research Scientist with Yahoo! Labs, Sunnyvale, CA, USA, and he held a Postdoctoral Researcher appointment with the Department of Statistics, UC Berkeley, Berkeley, CA, USA, from 2012 to 2013. From 2013 to 2015, he was a Research Staff Member with Samsung Advanced Institute of Technology (SAIT), Samsung Electronics, Inc., Suwon, Korea. Since September 2015, he has been an Assistant Professor with the Department of Information and Communication Engineering, Daegu-Gyeongbuk Institute of Science and Technology (DGIST), Daegu, Korea.
He is a recipient of the Samsung Scholarship. His research interests span diverse areas, including statistical machine learning (deep learning), information theory, signal processing, large-scale optimization, information retrieval, speech recognition, and remote sensing.