Talks
Image Synthesis
Image denoising is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield and now defines the state of the art in noise suppression in images. This talk focuses on recently discovered abilities and opportunities of image denoisers. We expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and acting as the engine for image synthesis. We also unveil the (strange?) idea that denoising and other inverse problems might not have a unique solution, as common algorithms would have you believe. Instead, we describe constructive ways to produce randomized and diverse high-perceptual-quality results for inverse problems.
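To make "using a denoiser to regularize a general inverse problem" concrete, here is a minimal sketch in the spirit of plug-and-play / regularization-by-denoising schemes. It is not the specific method of the talk: the denoiser is a stand-in separable Gaussian blur (a trained network would be used in practice), the inverse problem is a toy inpainting task, and the step size, regularization weight, and iteration count are illustrative assumptions.

```python
import numpy as np

def simple_denoiser(x, sigma=1.0):
    # Stand-in denoiser: separable Gaussian blur. A real application would
    # plug in a trained deep denoiser here.
    k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    k /= k.sum()
    x = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, x)
    x = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, x)
    return x

def denoiser_regularized_recovery(y, mask, lam=0.2, mu=0.5, n_iter=200):
    """Recover x from masked, noisy measurements y = mask * x + noise,
    using the denoising residual x - D(x) as the prior's gradient."""
    x = y.copy()
    for _ in range(n_iter):
        data_grad = mask * (mask * x - y)        # gradient of 0.5 * ||mask*x - y||^2
        prior_grad = x - simple_denoiser(x)      # denoising residual as prior gradient
        x = x - mu * (data_grad + lam * prior_grad)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    mask = (rng.random(clean.shape) > 0.3).astype(float)     # ~30% missing pixels
    y = mask * clean + 0.05 * rng.standard_normal(clean.shape)
    x_hat = denoiser_regularized_recovery(y, mask)
    print("RMSE:", np.sqrt(np.mean((x_hat - clean) ** 2)))
```

The design choice worth noticing is that the data term and the denoiser are decoupled: swapping the mask for a blur or a subsampling operator changes only the data gradient, while the same denoiser keeps serving as the regularizer.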
This is an invited talk, given at the “A Multiscale tour of Harmonic Analysis and Machine Learning” event, April 19-21, celebrating Stéphane Mallat’s 60th birthday.
Recordings of all the talks in this event can be found here.
Part A: Image denoising – the removal of white additive Gaussian noise from an image – is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield and now defines the state of the art in noise suppression in images. All this progress has led some researchers to believe that “Denoising Is Dead”, in the sense that whatever can be achieved has already been done. This part of the talk introduces this evolution of the field and highlights the tension that exists between classical approaches and modern AI alternatives.
Part B: This part of the talk focuses on recently discovered abilities and vulnerabilities of image denoisers. In a nutshell, we expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and acting as the engine for image synthesis. We also unveil the (strange?) idea that denoising (and other inverse problems) might not have a unique solution, as common algorithms would have you believe. Instead, we describe constructive ways to produce randomized and diverse high-perceptual-quality results for inverse problems.
This was given as a plenary talk at the third international workshop on matrix computations, commemorating the 90th birthday of Gene Golub.
Image denoising – the removal of white additive Gaussian noise from an image – is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield and now defines the state of the art in noise suppression in images. All this progress has led some researchers to believe that “denoising is dead”, in the sense that whatever can be achieved has already been done.
Exciting as this story might be, this talk IS NOT ABOUT IT!
Our story focuses on recently discovered abilities and vulnerabilities of image denoisers. In a nutshell, we expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and acting as the engine for image synthesis. We also unveil the (strange?) idea that denoising (and other inverse problems) might not have a unique solution, as common algorithms would have you believe. Instead, we describe constructive ways to produce randomized and diverse high-perceptual-quality results for inverse problems.
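The claim that a denoiser can drive image synthesis rests on a known identity that the abstract does not spell out: for an MMSE denoiser of white Gaussian noise at level σ, Tweedie’s formula ties the denoising residual to the score of the noise-smoothed image distribution, which is exactly the quantity that Langevin-type samplers need. A sketch of the relation (notation is mine, not the talk’s):

```latex
% Tweedie's formula: the MMSE Gaussian denoiser exposes the score of the
% noise-smoothed prior p_sigma, the quantity Langevin-type samplers use.
\begin{align*}
y &= x + n, \quad n \sim \mathcal{N}(0, \sigma^2 I),
  \qquad D_\sigma(y) = \mathbb{E}[x \mid y] \\
D_\sigma(y) &= y + \sigma^2 \, \nabla_y \log p_\sigma(y)
  \;\;\Longrightarrow\;\;
  \nabla_y \log p_\sigma(y) = \frac{D_\sigma(y) - y}{\sigma^2} \\
y_{k+1} &= y_k + \alpha \, \nabla_y \log p_\sigma(y_k) + \sqrt{2\alpha}\, z_k,
  \qquad z_k \sim \mathcal{N}(0, I)
\end{align*}
```

Injecting fresh noise $z_k$ at every step is also what makes the outputs randomized rather than a single deterministic restoration, which connects to the diverse-solutions theme above.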
A recording of this talk can be found HERE.
This talk was also given at the TCE-MLIS event on February 24th. Here is a recording of this talk (in Hebrew!).
Image denoising – the removal of white additive Gaussian noise from an image – is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield and now defines the state of the art in noise suppression in images. All this progress has led some researchers to believe that “denoising is dead”, in the sense that whatever can be achieved has already been done.
Exciting as this story might be, this talk IS NOT ABOUT IT!
Our story focuses on recently discovered abilities and vulnerabilities of image denoisers. In a nutshell, we expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and acting as the engine for image synthesis. We also unveil the (strange?) idea that denoising might not have a unique solution, as common algorithms would have you believe. Instead, we describe constructive ways to produce randomized and diverse high-perceptual-quality denoising results.
Style transfer is the process of migrating the style of a given image to the content of another, synthesizing a new image that is an artistic mixture of the two. Recent work on this problem using Convolutional Neural Networks (CNNs) ignited renewed interest in this field, due to the very impressive results obtained. There exists an alternative path towards handling the style-transfer task, via a generalization of texture-synthesis algorithms. I will present a novel style-transfer algorithm of this kind that extends the texture-synthesis work of Kwatra et al. (2005), aiming to produce stylized images that approach the quality of the CNN-based ones.
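As background for the texture-synthesis route, here is a minimal, single-channel sketch of a Kwatra-style "texture optimization" loop: patches of the current estimate are matched to their nearest patches in the style image, the matches are averaged back in place, and the result is pulled toward the content image. This is an illustration only, not the algorithm presented in the talk; the patch size, stride, blend weight, and brute-force nearest-neighbor search are simplifying assumptions (the actual method is multi-scale and considerably more elaborate).

```python
import numpy as np

def extract_patches(img, p, stride):
    # Collect all p x p patches (flattened) on a regular grid.
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, stride)
                     for j in range(0, W - p + 1, stride)])

def texture_optimization(content, style, p=8, stride=4, n_iter=5, alpha=0.5):
    """Match every patch of the current estimate to its nearest style patch,
    average the overlapping matches back in, then blend toward the content."""
    x = content.copy()
    style_patches = extract_patches(style, p, stride)
    H, W = x.shape
    for _ in range(n_iter):
        acc = np.zeros_like(x)
        cnt = np.zeros_like(x)
        for i in range(0, H - p + 1, stride):
            for j in range(0, W - p + 1, stride):
                q = x[i:i + p, j:j + p].ravel()
                nn = style_patches[np.argmin(((style_patches - q) ** 2).sum(axis=1))]
                acc[i:i + p, j:j + p] += nn.reshape(p, p)
                cnt[i:i + p, j:j + p] += 1
        x = acc / np.maximum(cnt, 1)            # average overlapping matched patches
        x = alpha * x + (1 - alpha) * content   # keep the content structure
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    content = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # smooth ramp as "content"
    style = rng.random((64, 64))                             # noise texture as "style"
    out = texture_optimization(content, style)
    print(out.shape, float(out.min()), float(out.max()))
```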
In our field, when composing a super-resolved image, two ingredients contribute to the ability to achieve a leap in resolution: (i) the existence of many and diverse measurements, and (ii) the availability of a model that reliably describes the image to be produced. This second part, also known as the regularization or the prior, is of generic importance: it can be deployed in any inverse problem and used by many other applications (compression, image synthesis, and more). The well-known recent work by Baker and Kanade (’02) and the work that followed (Lin and Shum ’04, Robinson and Milanfar ’05) all suggest that while the measurements are limited in the resolution increase they can provide, the prior could be used to break this barrier. Clearly, the better the prior used, the higher the quality we can expect from the overall reconstruction procedure. Indeed, recent work on super-resolution (and other inverse problems) departs from the regular Tikhonov method and tends toward robust counterparts, such as TV or the bilateral prior (see Farsiu et al. ’04).
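For concreteness, a common way to write the role of the prior in multi-frame super-resolution (the notation below is mine, not the talk’s) is as a penalized least-squares problem, where only the regularizer ρ changes between the Tikhonov, TV, and related choices. Here y_k is the k-th low-resolution frame, F_k the geometric warp, H the camera blur, D the decimation, and λ the regularization weight:

```latex
\hat{x} \;=\; \arg\min_{x} \; \frac{1}{2}\sum_{k}\bigl\| D\,H\,F_k\,x - y_k \bigr\|_2^2
  \;+\; \lambda\,\rho(x),
\qquad
\rho(x) =
\begin{cases}
\|\Gamma x\|_2^2 & \text{Tikhonov} \\[2pt]
\|\nabla x\|_1   & \text{total variation (TV)}
\end{cases}
```

The example-based priors discussed next replace the hand-crafted ρ with one learned from example images, while the data term stays the same.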
A recent trend with growing popularity is the use of examples in defining the prior. Indeed, Baker and Kanade were the first to introduce this notion to the super-resolution task. There are several ways to use examples to shape a better prior. The work by Mumford and Zhu (’99) and the follow-up contribution by Haber and Tenorio (’02) suggest a parametric approach. Baker and Kanade (’02), Freeman et al. (several contributions, ’01), and Nakagaki and Katsaggelos (’03) all use the examples to directly learn the reconstruction function, by observing low-resolution versus high-resolution pairs.
In this talk we survey this line of work and show how it can be extended in several important ways. We show a general framework that builds an example-based prior that is independent of the inverse problem at hand, and we demonstrate it on several such problems, with promising results.