Denoising is a classical problem in signal processing and information theory, and a wide variety of methods have been applied to it over several decades. Recently, supervised neural network-based methods have achieved impressive denoising performance, significantly surpassing classical approaches such as prior- or optimization-based denoisers. However, these methods have two drawbacks: they are not adaptive, i.e., the neural network cannot correct itself when a distributional mismatch exists between the training and test data, and they require clean source data and an exact noise model for training, which are not always available in practical scenarios. In this talk, I will introduce a framework that tackles the above two drawbacks jointly, based on an unbiased estimate of the loss of a particular class of pixelwise context-adaptive denoisers. Using this framework with neural networks to learn the denoisers, I will show that the resulting image denoiser can adapt to mismatched data distributions solely from the given noisy images and achieve state-of-the-art performance on several benchmark datasets. Moreover, I will show that, combined with standard noise transformation/estimation techniques, our denoiser can be trained completely blindly, using only noisy images and no exact noise model, and yet be very effective for denoising more sophisticated, source-dependent real-world noise, e.g., Poisson-Gaussian noise.
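For intuition, here is a minimal sketch of one such unbiased loss estimate, under the simplifying assumptions of additive Gaussian noise with known variance sigma^2 and a pixelwise affine denoiser x_hat = a*z + b; the specific estimator, architecture, and names below are illustrative, not the exact formulation presented in the talk. The key identity is that E[(x_hat - z)^2 + sigma^2*(2a - 1)] = E[(x_hat - x)^2] whenever the coefficients (a, b) depend only on the surrounding context and not on the pixel z itself, so a network predicting (a, b) can be trained from noisy images alone.

```python
# Illustrative sketch only: trains a tiny network on noisy images using an
# unbiased estimate of the true MSE, with no clean targets.
import torch
import torch.nn as nn

sigma = 0.1  # assumed-known noise level for this sketch

# Tiny CNN predicting per-pixel affine coefficients (a, b) from the noisy input.
# NOTE: in the actual context-adaptive setting, (a, b) at pixel i must depend
# only on the surrounding context, not on z_i itself; this sketch glosses over
# that masking for brevity.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1),  # channel 0 -> a, channel 1 -> b
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Synthetic noisy batch standing in for real noisy images; the clean image is
# used only to simulate z and is never seen by the loss.
clean = torch.rand(4, 1, 32, 32)
z = clean + sigma * torch.randn_like(clean)

for step in range(100):
    coeff = net(z)
    a, b = coeff[:, :1], coeff[:, 1:]
    x_hat = a * z + b  # pixelwise affine denoiser
    # Unbiased estimate of E[(x_hat - x)^2]; computable from z, a, sigma alone.
    loss = ((x_hat - z) ** 2 + sigma ** 2 * (2 * a - 1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the surrogate loss is unbiased for the true MSE, minimizing it on the test-time noisy images themselves is what allows the denoiser to adapt to a mismatched distribution without any clean data.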
This is joint work with my students Sungmin Cha and Jaeseok Byun at SKKU.