Talks


Image Denoising

The New Era of Image Denoising
April 20, 2023
Paris, France

Image denoising is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject, and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield, and has taken the lead in today's ability to suppress noise in images. This talk focuses on recently discovered abilities and opportunities of image denoisers. We expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and serving as the engine for image synthesis. We also unveil the (strange?) idea that denoising and other inverse problems might not have a unique solution, as common algorithms would have you believe. Instead, we describe constructive ways to produce randomized and diverse high-perceptual-quality results for inverse problems.

This is an invited talk, given at the "A Multiscale Tour of Harmonic Analysis and Machine Learning" event, held April 19-21 to celebrate Stéphane Mallat's 60th birthday.

Recordings of all the talks in this event can be found here.

The New Era of Image Denoising - The Deep Learning Revolution and Beyond
April 18, 2022
Lanzhou University, Lanzhou, P.R. China

Part A: Image denoising – removal of white additive Gaussian noise from an image – is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject, and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield, and has taken the lead in today's ability to suppress noise in images. All this progress has led some researchers to believe that "Denoising Is Dead", in the sense that all that can be achieved is already done. In Part A of this talk we introduce the above evolution of this field, and highlight the tension that exists between classical approaches and modern AI alternatives.

Part B: This part of the talk focuses on recently discovered abilities and vulnerabilities of image denoisers. In a nutshell, we expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and serving as the engine for image synthesis. We also unveil the (strange?) idea that denoising (and other inverse problems) might not have a unique solution, as common algorithms would have you believe. Instead, we describe constructive ways to produce randomized and diverse high-perceptual-quality results for inverse problems.

This was given as a plenary talk at the Third International Workshop on Matrix Computations, commemorating the 90th birthday of Gene Golub.

Image Denoising - Not What You Think
September 10, 2021
Invited talk - Berkeley, Rice, IMVC

Image denoising – removal of white additive Gaussian noise from an image – is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject, and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield, and has taken the lead in today's ability to suppress noise in images. All this progress has led some researchers to believe that "denoising is dead", in the sense that all that can be achieved is already done.

Exciting as all this story might be, this talk IS NOT ABOUT IT!

Our story focuses on recently discovered abilities and vulnerabilities of image denoisers. In a nutshell, we expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and serving as the engine for image synthesis. We also unveil the (strange?) idea that denoising (and other inverse problems) might not have a unique solution, as common algorithms would have you believe. Instead, we will describe constructive ways to produce randomized and diverse high-perceptual-quality results for inverse problems.

A recording of this talk can be found HERE.

This talk was also given at the TCE-MLIS event on February 24th. Here is a recording of this talk (in Hebrew!).

Image Denoising - Not What You Think
July 13, 2021
IEEE Statistical Signal Processing Workshop 2021 - Rio de Janeiro (Virtual) (Keynote Talk)

Image denoising – removal of white additive Gaussian noise from an image – is one of the oldest and most studied problems in image processing. Extensive work over several decades has led to thousands of papers on this subject, and to many well-performing algorithms for this task. As expected, the era of deep learning has brought yet another revolution to this subfield, and has taken the lead in today's ability to suppress noise in images. All this progress has led some researchers to believe that "denoising is dead", in the sense that all that can be achieved is already done.

Exciting as all this story might be, this talk IS NOT ABOUT IT!

Our story focuses on recently discovered abilities and vulnerabilities of image denoisers. In a nutshell, we expose the possibility of using image denoisers to serve other problems, such as regularizing general inverse problems and serving as the engine for image synthesis. We also unveil the (strange?) idea that denoising might not have a unique solution, as common algorithms would have you believe. Instead, we'll describe constructive ways to produce randomized and diverse high-perceptual-quality denoising results.

A shorter version of this talk was given on June 17th as an invited talk at a conference on AI organized by RAFAEL.
Deep Learning - The Revolution That Will Change Our Lives
March 22, 2021
צה"ל

Deep learning is a field that will change our lives – correction – it is already changing our lives. In this talk, given to IDF personnel, I tell the fascinating story of this field and the upheavals it has gone through over the past sixty years. The talk is intended for a general audience and requires no prior knowledge. It is a longer version of a similar talk given in 2019 at the jubilee celebration of the Faculty of Computer Science.

Design of Deep Learning Architectures
February 4, 2020
Google Mountain View - Computational Imaging Workshop (Keynote Talk)

How do we choose a network architecture in deep-learning solutions? By copying existing networks or guessing new ones, and sometimes by applying various small modifications to them via trial and error. This inelegant, brute-force strategy has proven itself useful for a wide variety of imaging tasks. However, it comes with a painful cost – our networks tend to be quite heavy and cumbersome. Could we do better? In this talk we propose a different point of view on this important question, advocating the following two rules: (i) rather than "guessing" architectures, we should rely on classic signal and image processing concepts and algorithms, and turn these into networks to be learned in a supervised manner; and (ii) sparse representation modeling is key in many (if not all) of the successful architectures that we are using. I will demonstrate these claims by presenting three recent image denoising networks that are lightweight and yet quite effective, as they follow the above guidelines.
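
To make rule (i) concrete, here is a minimal sketch of the unrolling idea – turning the classic iterative shrinkage-thresholding algorithm for sparse coding into a fixed-depth network whose matrices become learnable parameters (in the spirit of LISTA; the function and parameter names are illustrative, and this is not the specific architecture presented in the talk):

```python
import numpy as np

def soft_threshold(x, theta):
    """Element-wise soft thresholding -- the 'activation' of the unrolled net."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista_forward(y, W_e, S, theta, n_layers=10):
    """Forward pass of an ISTA iteration unrolled into a fixed-depth network.

    In classic ISTA these matrices are dictated by the dictionary D
    (W_e = D.T / L, S = I - D.T @ D / L); in the learned version, W_e, S and
    theta become free parameters, trained in a supervised manner.
    """
    z = soft_threshold(W_e @ y, theta)              # layer 1: plain shrinkage
    for _ in range(n_layers - 1):                   # subsequent layers share
        z = soft_threshold(W_e @ y + S @ z, theta)  # weights, as in ISTA
    return z
```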

Joint work with Peyman Milanfar.
Regularization by Denoising (RED)
May 19th, 2017
Weizmann Institute

Image denoising is the most fundamental problem in image enhancement, and it is largely solved: it has reached impressive heights in performance and quality – almost as good as it can ever get. But interestingly, it turns out that we can solve many other problems using the image denoising "engine". I will describe the Regularization by Denoising (RED) framework: using the denoising engine to define the regularization of any inverse problem. The idea is to define an explicit image-adaptive regularization functional directly using a high-performance denoiser. Surprisingly, the resulting regularizer is guaranteed to be convex, and the overall objective functional is explicit, clear, and well-defined. With complete flexibility to choose the iterative optimization procedure for minimizing this functional, RED can incorporate any image denoising algorithm as a regularizer, treat general inverse problems very effectively, and is guaranteed to converge to the globally optimal result.
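
For concreteness, here is a minimal sketch of one way to minimize the RED objective (a plain steepest-descent variant; the operators, the denoiser, and the step size are placeholders). It relies on the RED property that, under suitable conditions on the denoiser f, the regularizer rho(x) = 0.5 * x^T (x - f(x)) has the simple gradient x - f(x):

```python
import numpy as np

def red_steepest_descent(y, A, At, denoiser, sigma, lam, mu, n_iters=200):
    """Minimize  1/(2 sigma^2) ||A(x) - y||^2 + lam * rho(x)  by steepest
    descent, using the RED gradient identity  grad rho(x) = x - f(x).

    A, At    -- the degradation operator and its adjoint (callables)
    denoiser -- any denoiser f satisfying RED's conditions (black box)
    """
    x = At(y)                                    # crude initialization
    for _ in range(n_iters):
        grad_fid = At(A(x) - y) / sigma**2       # data-fidelity gradient
        grad_reg = x - denoiser(x)               # RED regularizer gradient
        x = x - mu * (grad_fid + lam * grad_reg)
    return x
```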

This talk was given at the computer vision seminar at the Weizmann Institute. This is joint work with Peyman Milanfar (Google Research) and Yaniv Romano (EE-Technion).
SOS Boosting of Image Denoising Algorithms
January 25-30, 2015
Villars sur Ollon, Switzerland

In this talk we present a generic recursive algorithm for improving image denoising methods. Given the initial denoised image, we suggest repeating the following procedure: (i) strengthen the signal by adding the previous denoised image to the degraded input image, (ii) operate the denoising method on the strengthened image, and (iii) subtract the previous denoised image from the restored, signal-strengthened outcome. The convergence of this process is studied for the K-SVD image denoising and related algorithms. Furthermore, still in the context of K-SVD image denoising, we introduce an interesting interpretation of the SOS algorithm as a technique for closing the gap between the local patch modeling and the global restoration task, thereby leading to improved performance. We demonstrate the SOS boosting algorithm for several leading denoising methods (K-SVD, NLM, BM3D, and EPLL), showing a tendency to further improve denoising performance.
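
The three steps above translate almost verbatim into code. A minimal sketch, with `denoise` standing for any black-box denoiser (a signal-emphasis factor rho generalizes the scheme; rho=1 recovers the plain procedure stated in the abstract):

```python
def sos_boosting(y, denoise, n_iters=5, rho=1.0):
    """SOS boosting: Strengthen, Operate, Subtract.

    y       -- the noisy input image
    denoise -- any black-box denoiser, e.g. lambda z: some_denoiser(z, sigma)
    """
    x = denoise(y)                      # initial denoised image
    for _ in range(n_iters):
        strengthened = y + rho * x      # (i)   strengthen the signal
        z = denoise(strengthened)       # (ii)  operate the denoiser
        x = z - rho * x                 # (iii) subtract the previous estimate
    return x
```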

This is joint work with Yaniv Romano (EE Department, Technion). This talk was given as an invited talk at BASP Frontiers.
Wavelet for Graphs and its Deployment to Image Processing
May 12-14, 2014
SIAM Imaging Science, Hong Kong.

What if we take all the overlapping patches from a given image and organize them to create the shortest path by using their mutual Euclidean distances? This suggests a reordering of the image pixels in a way that creates maximal 1D regularity. What could we do with such a construction? In this talk we consider a wider perspective of the above, and introduce a wavelet transform for graph-structured data. The proposed transform is based on a 1D wavelet decomposition coupled with a pre-reordering of the input so as to best sparsify the given data. We adapt this transform to image processing tasks by considering the image as a graph, where every patch is a node, and edges are weighted by Euclidean distances between corresponding patches. We show several ways to use the above ideas in practice, leading to state-of-the-art image denoising, deblurring, inpainting, and face-image compression results.
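
As a hedged illustration of the reordering step, here is a greedy nearest-neighbor approximation of the shortest path through the patches (the actual construction in this work is more refined, but the spirit is the same):

```python
import numpy as np

def greedy_patch_ordering(patches):
    """Order patches along an approximately-shortest path by repeatedly
    hopping to the nearest unvisited patch (Euclidean distance).

    patches -- array of shape (n_patches, patch_dim), one patch per row.
    Returns a permutation of patch indices; applied to the corresponding
    pixels, it yields a 1D signal that a wavelet decomposition sparsifies
    well.  (Naive O(n^2) implementation, for illustration only.)
    """
    n = len(patches)
    visited = np.zeros(n, dtype=bool)
    order = [0]
    visited[0] = True
    for _ in range(n - 1):
        d = np.linalg.norm(patches - patches[order[-1]], axis=1)
        d[visited] = np.inf              # exclude already-visited patches
        nxt = int(np.argmin(d))
        order.append(nxt)
        visited[nxt] = True
    return np.asarray(order)
```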

This is joint work with Idan Ram and Israel Cohen. This talk was given as a plenary talk at SIAM Imaging Science, Hong Kong.
Image Processing via Pixel Permutation
April 1st, 2014
Israel Machine Vision Conference (IMVC), in Tel-Aviv, Israel

Images are 2D signals, and should be processed as such – this is the common belief in the image processing community. Is it truly the case? Around thirty years ago, some researchers suggested converting images into 1D signals, so as to harness well-developed 1D tools such as adaptive filtering and Kalman estimation techniques. These attempts resulted in poorly performing algorithms, strengthening the above belief. Why should we force unnatural causality between spatially ordered pixels? Indeed, why? In this talk I will present a conversion of images into 1D signals that leads to state-of-the-art results in a series of applications – denoising, inpainting, compression, and more. The core idea in our work is that there exists a permutation of the image pixels that carries in it most of the "spatial content", and this ordering is within reach, even if the image is corrupted. We expose this permutation and use it in order to process the image as if it were a one-dimensional signal, successfully treating a series of image processing problems.
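
The overall pipeline implied by this idea is short: find a smoothness-inducing permutation of the pixels, apply any 1D tool to the permuted signal, and permute back. A minimal sketch (both the permutation search and the 1D operator are placeholders; the greedy ordering sketched in the previous entry is one way to obtain the permutation):

```python
import numpy as np

def process_via_permutation(image, find_permutation, operator_1d):
    """Process a 2D image as a 1D signal through a pixel permutation.

    find_permutation -- returns an ordering of the flattened pixels,
                        e.g. derived from patch similarities; on a
                        corrupted image it is computed from the corrupted
                        patches themselves.
    operator_1d      -- any 1D tool: wavelet shrinkage, adaptive
                        filtering, interpolation for inpainting, ...
    """
    flat = image.ravel()
    perm = find_permutation(image)    # ordering carrying the "spatial content"
    processed = operator_1d(flat[perm])   # treat as a regular 1D signal
    out = np.empty_like(flat)
    out[perm] = processed                 # undo the permutation
    return out.reshape(image.shape)
```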

This is joint work with Idan Ram and Israel Cohen. This talk was given as a plenary talk at the Israel Machine Vision Conference (IMVC).
Image Denoising and Beyond via Learned Dictionaries and Sparse Representations
June 26th, 2008
Tel-Aviv University, Approximation Seminar, the Mathematics Department.

In this survey talk we focus on the use of sparse and redundant representations and learned dictionaries for image denoising and other related problems. We discuss the K-SVD algorithm for learning a dictionary that describes the image content effectively. We then show how to harness this algorithm for image denoising, by working on small patches and forcing sparsity over the trained dictionary. The above is extended to color image denoising and inpainting, video denoising, and facial image compression, leading in all these cases to state-of-the-art results. We conclude with very recent results on the use of several sparse representations for getting better denoising performance. An algorithm to generate such a set of representations is developed, and our analysis shows that by this method we approximate the minimum mean-squared-error (MMSE) estimator, thus getting better results.
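
The closing result can be sketched as follows: instead of committing to a single sparse representation (a MAP-like choice), draw several plausible representations and average their reconstructions, approximating the MMSE estimate. The randomized pursuit below is a stand-in for the actual algorithm developed in this work:

```python
import numpy as np

def mmse_by_averaging(y, D, randomized_pursuit, n_runs=10):
    """Approximate the MMSE estimate by averaging the reconstructions of
    several sparse representations of y over the dictionary D.

    randomized_pursuit -- a pursuit returning a different plausible sparse
                          code on each call (stand-in for the randomized
                          OMP-like algorithm referenced in the abstract).
    """
    estimates = [D @ randomized_pursuit(D, y) for _ in range(n_runs)]
    return np.mean(estimates, axis=0)
```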

This talk surveys a wide group of papers, including a statement of recent results obtained with Irad Yavneh.
Denoising and Beyond via Learned Dictionaries and Sparse Representations
December 17th, 2006
Israel Computer Vision Day, 2006. The Interdisciplinary Center, Herzliya.

In this talk we consider several inverse problems in image processing, using sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm for gray-level images with state-of-the-art denoising performance. We then extend these results to color images, handling their denoising, inpainting, and demosaicing. Following the above ideas, with the necessary modifications to avoid color artifacts and over-fitting, we present state-of-the-art results in each of these applications. Another extension considered is video denoising – we demonstrate how the above method can be extended to work with 3D patches, propagate the dictionary from one frame to another, and achieve improved denoising performance while substantially reducing the computational load per pixel.

Joint work with Michal Aharon and Matan Protter (CS Department, Technion), and with Julien Mairal and Guillermo Sapiro (ECE Department, University of Minnesota, Minneapolis, USA).
Image Denoising via Learned Dictionaries and Sparse Representations
June 21st, 2006
IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR).

We address the image denoising problem, where zero-mean, white, homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over a trained dictionary. The proposed algorithm denoises the image, while simultaneously training a dictionary on its (corrupted) content using the K-SVD algorithm. As the dictionary training algorithm is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.
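
The image-level fusion implied by the global prior has a simple closed form: every overlapping patch votes for its pixels, and the final image is a pixel-wise weighted average of these votes with the noisy input. A minimal sketch (the per-patch sparse coding over the trained dictionary is abstracted into `denoise_patch`, and lam is an illustrative fidelity weight):

```python
import numpy as np

def patch_based_denoise(y, denoise_patch, patch=8, lam=0.5):
    """Fuse sparsely-coded overlapping patches into a denoised image via
    the closed-form average  x = (lam*y + patch votes) / (lam + counts).

    denoise_patch -- maps a noisy patch to its sparse approximation
                     D @ alpha over the trained dictionary (abstracted).
    """
    h, w = y.shape
    acc = lam * y.copy()                   # numerator: lam*y + patch votes
    cnt = lam * np.ones_like(y)            # denominator: lam + vote counts
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            acc[i:i+patch, j:j+patch] += denoise_patch(y[i:i+patch, j:j+patch])
            cnt[i:i+patch, j:j+patch] += 1.0
    return acc / cnt
```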

Joint work with Michal Aharon.
Shrinkage for Redundant Representations?
November 17th, 2005
SPARS'05, IRISA - INRIA, Rennes, France.

Shrinkage is a well-known and appealing denoising technique. The use of shrinkage is known to be optimal for Gaussian white noise, provided that the sparsity of the signal's representation is enforced using a unitary transform. Still, shrinkage is also practiced successfully with non-unitary, and even redundant, representations. In this lecture we shed some light on this behavior. We show that simple shrinkage can be interpreted as the first iteration of an algorithm that solves the basis pursuit denoising (BPDN) problem. Thus, this work leads to a sequential shrinkage algorithm that can be considered a novel and effective pursuit method. We demonstrate this algorithm, both synthetically and for the image denoising problem, showing in both cases its superiority over several popular alternatives.
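
To make the interpretation explicit: for the BPDN objective min_a 0.5*||y - D a||^2 + lam*||a||_1, the iterative shrinkage scheme below, started from a = 0, produces as its first iterate exactly a shrinkage of the redundant-transform coefficients D^T y (up to scaling); continuing the iterations gives the sequential shrinkage pursuit discussed in the lecture. A minimal sketch:

```python
import numpy as np

def soft(x, t):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterative_shrinkage(y, D, lam, n_iters=50):
    """Iterative shrinkage for BPDN: min_a 0.5||y - D@a||^2 + lam*||a||_1.

    c must exceed the largest eigenvalue of D.T @ D for convergence.
    With a initialized to zero, the first iteration reduces to plain
    shrinkage of D.T @ y -- the classic practice with redundant transforms.
    """
    c = 1.01 * np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iters):
        a = soft(a + (D.T @ (y - D @ a)) / c, lam / c)
    return a
```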

Retinex By Two Bilateral Filters
April 9th, 2005
The 5th International Conference on Scale-Space and PDE Methods in Computer Vision, Hofgeismar, Germany

Retinex theory deals with the removal of unfavorable illumination effects from images. This ill-posed inverse problem is typically regularized by forcing spatial smoothness on the recoverable illumination. Recent work in this field suggested exploiting the knowledge that the illumination image bounds the image from above, and the fact that the reflectance is also expected to be smooth.

In this lecture we show how the above model can be improved to provide a non-iterative retinex algorithm that better handles edges in the illumination, and suppresses noise in dark areas. This algorithm uses two specially tailored bilateral filters – the first evaluates the illumination, and the other is used for the computation of the reflectance. This result stands as a theoretical justification and refinement of the recently proposed heuristic use of the bilateral filter for retinex by Durand and Dorsey. In line with their appealing way of speeding up the bilateral filter, we show that similar speedup methods apply to our algorithm.
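
A heavily hedged sketch of such a pipeline, working in the log domain as is standard in retinex (so that s = l + r); `bilateral` is any bilateral-filter routine, e.g. the one sketched under the 2002 SCCM talk below, and all parameter values here are illustrative rather than the paper's:

```python
import numpy as np

def retinex_two_bilateral(image, bilateral):
    """Non-iterative retinex via two bilateral filters (illustrative).

    A wide, strongly-smoothing bilateral filter estimates the illumination
    (preserving its edges), clipped so that l >= s as the model dictates;
    a second, gentler bilateral filter cleans the reflectance r = s - l,
    suppressing the noise that dominates dark areas.
    """
    s = np.log(np.asarray(image, dtype=float) + 1e-6)
    l = bilateral(s, sigma_s=15.0, sigma_r=0.4)     # illumination estimate
    l = np.maximum(l, s)              # illumination bounds the image above
    r = bilateral(s - l, sigma_s=3.0, sigma_r=0.1)  # clean reflectance
    return np.exp(r), np.exp(l)       # reflectance and illumination images
```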

On the Bilateral Filter and Ways to Improve It
May 6th, 2002
SCCM (Scientific Computing and Computational Mathematics Program) Seminar

Additive noise removal from a given signal (also known as denoising) is an important stage in many signal processing applications. Various approaches have been proposed throughout the years. This talk focuses on Bayesian smoothing and edge-preserving methods. Classical algorithms in this family are typically based on Weighted Least Squares (WLS), Robust Estimation (RE), and Anisotropic Diffusion (AD). These methods share common features such as adaptivity to the data, formulation as optimization problems, and the need for iterative restoration. In 1998, Tomasi and Manduchi (CS, Stanford) proposed an alternative heuristic, non-iterative filter for noise removal called the bilateral filter. It was shown to give similar, and possibly better, results compared to the above-mentioned iterative approaches.

However, the bilateral filter was proposed as an intuitive tool without a theoretical connection to the classical approaches. In this talk the various noise-removal techniques (WLS, RE, AD, and the bilateral filter) are presented and related theoretically to each other. In particular, it is shown that RE (and AD) can be interpreted as WLS with weights replaced after each iteration. Also, it is shown that the bilateral filter emerges from the Bayesian approach as a single iteration of the Jacobi iterative algorithm for a properly posed smoothness penalty. Based on this observation, it is shown how this new filter can be improved and extended to treat more general reconstruction problems.
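
For reference, here is a direct (unoptimized) NumPy implementation of the bilateral filter discussed above; the speedup methods mentioned in the talk are out of scope for this sketch:

```python
import numpy as np

def bilateral_filter(img, radius=5, sigma_s=3.0, sigma_r=0.1):
    """Bilateral filter: each pixel becomes a weighted average of its
    neighborhood, with weights decaying both with spatial distance
    (sigma_s) and with intensity difference (sigma_r) -- the source of
    the filter's edge-preserving behavior."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # domain kernel
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng                             # range kernel
            out[i, j] = np.sum(wgt * window) / np.sum(wgt)
    return out
```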