Software

LIDIA: Image Denoising via Deep Learning

There are numerous ways to denoise an image, and the most effective methods nowadays are based on deep learning and supervised training. This work (with Grisha Vaksman and Peyman Milanfar) proposes a specific architecture that resembles BM3D, with several added values. An interesting twist in our work is the ability to take the universally trained network and adapt it to an incoming image, boosting the denoising performance quite substantially.
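
As a rough illustration of this adaptation step, here is a minimal PyTorch sketch in the spirit of the paper's internal adaptation, assuming a pre-trained denoiser net for a known noise level sigma; the names and the exact procedure are illustrative, not the precise LIDIA recipe:

    import torch
    import torch.nn.functional as F

    def adapt_to_image(net, noisy, sigma, steps=50, lr=1e-5):
        # Use the universal net's own output as a pseudo-clean target,
        # re-noise it synthetically, and fine-tune the net to undo that
        # noise, specializing it to this particular image's content.
        with torch.no_grad():
            pseudo_clean = net(noisy)
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(steps):
            renoised = pseudo_clean + sigma * torch.randn_like(pseudo_clean)
            loss = F.mse_loss(net(renoised), pseudo_clean)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            return net(noisy)              # the adapted result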

DeepRED

Deep Image Prior (DIP) offers a new approach towards the regularization of inverse problems, obtained by forcing the recovered image to be synthesized from a given deep architecture. While DIP has been shown to be quite an effective unsupervised approach, its results still fall short when compared to state-of-the-art alternatives. In our work we boost DIP by adding an explicit prior based on Regularization by Denoising (RED), which leverages existing denoisers for regularizing inverse problems. This software package reproduces the results we report in our DeepRED paper.
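
To make the combined objective concrete, here is a simplified PyTorch sketch of a single DIP+RED update; the paper develops a more careful ADMM scheme, and the callables net, denoiser and H below are placeholders:

    import torch

    def deep_red_step(net, z, y, H, denoiser, lam, opt):
        # One step on the (simplified) objective
        #   min_theta ||H f_theta(z) - y||^2 + lam * rho(f_theta(z)),
        # where rho is the RED prior. Detaching the denoiser output makes
        # the gradient of the quadratic proxy equal lam*(x - D(x)),
        # which is exactly RED's gradient rule.
        x = net(z)
        with torch.no_grad():
            dx = denoiser(x)               # no back-prop through the denoiser
        loss = ((H(x) - y) ** 2).sum() + 0.5 * lam * ((x - dx) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()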

Deep K-SVD Denoising

K-SVD denoising is a well-known algorithm, based on local sparsity modeling of image patches. Conceived in 2006, this algorithm was based on dictionary learning, achieving (at that time) state-of-the-art denoising performance. Over the years, better methods appeared, slowly and gradually overshadowing this algorithm and pushing it to the back seat of image processing. With the entrance of supervised deep-learning denoising methods, this trend strengthened further. In our recent paper (co-authored by Meyer Scetbon, Peyman Milanfar and myself), we bring new life to the K-SVD denoising algorithm by unfolding it into a network and training it end-to-end. Beyond the substantial improvement in performance, this result poses intriguing questions about how deep network architectures should be designed, how classical image processing algorithms should influence them, and more.
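
The unfolding idea itself can be conveyed by a generic sketch: the PyTorch module below unrolls ISTA iterations for patch sparse coding, with the dictionary, step size and threshold all learned end-to-end. It shows the general recipe, not the exact Deep K-SVD architecture:

    import torch
    import torch.nn as nn

    class UnfoldedISTA(nn.Module):
        # T ISTA iterations for min_a 0.5*||y - D a||^2 + theta*||a||_1,
        # unrolled into a network trainable from (noisy, clean) patch pairs.
        def __init__(self, patch_dim, num_atoms, T=10):
            super().__init__()
            self.D = nn.Parameter(0.1 * torch.randn(patch_dim, num_atoms))
            self.theta = nn.Parameter(torch.tensor(0.05))   # soft-threshold level
            self.step = nn.Parameter(torch.tensor(0.1))     # gradient step size
            self.T = T

        def forward(self, y):                               # y: (batch, patch_dim)
            a = torch.zeros(y.shape[0], self.D.shape[1], device=y.device)
            for _ in range(self.T):
                a = a + self.step * (y - a @ self.D.t()) @ self.D
                a = torch.sign(a) * torch.clamp(a.abs() - self.theta, min=0.0)
            return a @ self.D.t()                           # denoised patch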

Deep Energy

The success of deep learning has been due, in no small part, to the availability of large annotated datasets. Thus, a major bottleneck in current learning pipelines is the time-consuming human annotation of data. In scenarios where such input-output pairs cannot be collected, simulation is often used instead, leading to a domain shift between synthesized and real-world data. Our recent work (by Alona Golts, Daniel Freedman and me) offers an unsupervised alternative that relies on the availability of task-specific energy functions, replacing the generic supervised loss. The proposed approach, termed Deep-Energy, trains a Deep Neural Network (DNN) to approximate the minimizer of such an energy for any chosen input. Once trained, a simple and fast feed-forward computation provides the inferred label. This approach allows us to perform unsupervised training of DNNs with real-world inputs only, without the need for manually-annotated labels or synthetically created data. The code we supply here demonstrates this on three different tasks: seeded segmentation, image matting, and single-image dehazing.
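
A minimal sketch of such a training loop, assuming a user-supplied energy(x, u) function; all names here are placeholders:

    import torch

    def train_deep_energy(net, loader, energy, epochs=10, lr=1e-4):
        # Unsupervised training: the task-specific energy replaces the
        # supervised loss, so no ground-truth labels are needed anywhere.
        # energy(x, u) scores how well the label-estimate u solves the
        # task for input x (e.g., fidelity plus a task prior for dehazing).
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(epochs):
            for x in loader:              # batches of real-world inputs only
                loss = energy(x, net(x)).mean()
                opt.zero_grad()
                loss.backward()
                opt.step()
        return net                        # inference is a single forward pass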

Local Block Coordinate Descent (LoBCoD) Algorithm for the CSC Model

The Convolutional Sparse Coding (CSC) model has drawn much attention in the past decade, due to its relevance in handling image processing tasks and its connection to Convolutional Neural Networks (CNN). Two central questions that this model poses are (i) Pursuit: given the model filters and an input image, find the appropriate sparse vector that represents the image effectively; and (ii) Learning: given a set of images, find the filters that best represent this corpus. Both questions have been the topic of many papers, offering various algorithms and experiments.

Our recent paper (see below) offers a very appealing answer to both the pursuit and the learning problems, by operating locally on the incoming images and using a simple yet effective optimization strategy: coordinate descent. Our algorithms can operate online (even on a single image), their performance is very competitive and oftentimes state-of-the-art, and their code is simple to follow and deploy. The accompanying software package reproduces the results in the above-mentioned paper, along with a demonstration of these algorithms on two image processing applications.
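
To give a feel for the optimization strategy, here is a plain (patch-based, non-convolutional) coordinate-descent sparse-coding sketch; LoBCoD itself applies such updates locally within the convolutional model:

    import numpy as np

    def coord_descent_sparse_code(D, y, lam, iters=100):
        # Solves min_a 0.5*||y - D a||^2 + lam*||a||_1 one coefficient at a
        # time; each single-coordinate update has a closed-form
        # soft-threshold solution, which is what makes the scheme so simple.
        m = D.shape[1]
        norms = (D ** 2).sum(axis=0)      # per-atom squared norms
        a = np.zeros(m)
        r = y.copy()                      # running residual y - D a
        for _ in range(iters):
            for j in range(m):
                r += D[:, j] * a[j]       # remove atom j's contribution
                rho = D[:, j] @ r
                a[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / norms[j]
                r -= D[:, j] * a[j]       # put the updated contribution back
        return a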

Regularization by Denoising (RED)

The work reported in our RED paper (see below) presents a novel way to regularize inverse problems by leveraging almost any denoising algorithm. Our scheme, called RED, leads to very flexible image restoration algorithms that apply denoising within their iterative process. The experiments reported in this paper show a tendency toward state-of-the-art results. A software package reproducing this paper's experiments is available here, on GitHub.
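
A minimal sketch of RED's gradient-descent flavor, with a Gaussian filter standing in for the plug-in denoiser, and H, Ht as user-supplied forward/adjoint operators (for plain denoising, both can be the identity):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def red_restore(y, H, Ht, denoise=lambda x: gaussian_filter(x, 1.0),
                    lam=0.2, mu=0.1, iters=200):
        # Gradient descent on min_x 0.5*||Hx - y||^2 + lam*rho(x), where
        # the prior's gradient is x - D(x) for a denoiser D (see the paper
        # for the conditions this requires). Any denoiser can be plugged in;
        # the Gaussian filter here is just a stand-in.
        x = Ht(y)
        for _ in range(iters):
            grad = Ht(H(x) - y) + lam * (x - denoise(x))
            x = x - mu * grad
        return x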

Graph Dictionary Learning

Yael Yankelevsky's work on handling graph-based signals has been reported in several recent papers (see the journal publications list). The core idea is to take into account the Laplacian matrix of the graph signals, and to extend the dictionary learning to accommodate it. Our work incorporates additional ideas within this scheme: (i) we learn the Laplacian matrix within the dictionary learning process; (ii) we take into account another Laplacian, one that accounts for interrelations between the given signals, thus turning our pursuit into a joint one; and (iii) we can handle high-dimensional graphs by introducing double-sparsity and a graph-wavelet transform. Two accompanying packages are provided to reproduce all the results shown in these papers.
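
As a toy illustration of idea (ii), the sketch below solves a Laplacian-regularized joint coding problem by gradient descent; the actual algorithms add sparsity constraints and learn the dictionary and the Laplacians as well:

    import numpy as np

    def joint_smooth_code(D, Y, L, alpha=0.1, mu=0.01, iters=500):
        # min_X ||Y - D X||_F^2 + alpha * tr(X L X^T): a joint pursuit in
        # which the graph Laplacian L (over the signals, i.e., the columns
        # of Y) encourages neighboring signals to get similar
        # representations. Plain gradient descent; a common factor of 2 is
        # absorbed into the step size mu.
        X = np.zeros((D.shape[1], Y.shape[1]))
        for _ in range(iters):
            X -= mu * (D.T @ (D @ X - Y) + alpha * X @ L)
        return X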

Multi-Scale EPLL

The work presented in this paper offers a multi-scale extension of the Expected Patch Log Likelihood (EPLL) method of Zoran and Weiss, forcing the same prior on scaled-down versions of the image to be recovered. The paper motivates the multi-scale approach by first addressing a toy problem of Gaussian signals, for which it shows how local patch averaging, EPLL, and its multi-scale extension all approximate the globally optimal filtering. The multi-scale EPLL is then demonstrated for denoising, deblurring, and single-image super-resolution. The following freely available package contains the data and Matlab scripts of all the simulations presented in the above-mentioned paper.
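
The Gaussian toy problem is easy to reproduce; the sketch below compares the globally optimal (Wiener) filter with averaged local patch filters on a synthetic AR(1) signal, a simplified stand-in for the paper's analysis:

    import numpy as np

    def wiener(C, sigma2, y):
        # MMSE estimate for x ~ N(0, C) observed as y = x + noise of variance sigma2
        return C @ np.linalg.solve(C + sigma2 * np.eye(len(y)), y)

    rng = np.random.default_rng(0)
    n, p, sigma2 = 256, 8, 0.1
    C = np.array([[0.9 ** abs(i - j) for j in range(n)] for i in range(n)])  # AR(1) covariance
    x = np.linalg.cholesky(C) @ rng.standard_normal(n)
    y = x + np.sqrt(sigma2) * rng.standard_normal(n)

    x_global = wiener(C, sigma2, y)              # optimal global filtering
    x_local, counts = np.zeros(n), np.zeros(n)
    Cp = C[:p, :p]                               # stationarity: one covariance per patch
    for i in range(n - p + 1):                   # filter every patch, then average
        x_local[i:i + p] += wiener(Cp, sigma2, y[i:i + p])
        counts[i:i + p] += 1
    x_local /= counts

    print("global (optimal) MSE:", np.mean((x_global - x) ** 2))
    print("patch-averaged MSE:  ", np.mean((x_local - x) ** 2))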

Linearized Kernel Dictionary Learning

The work presented in this paper describes a new approach for incorporating kernels into dictionary learning. In order to do so, we first approximate the kernel matrix by the Nystrom method, using a cleverly sampled subset of its columns; secondly, since we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples", on which any linear dictionary learning algorithm can be employed.

Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. In the paper we demonstrate the effectiveness of our method on several supervised and unsupervised classification tasks, and show the efficiency of the proposed scheme, its easy integration, and its performance-boosting properties. The following freely available package contains the data and Matlab scripts of all the simulations presented in the above-mentioned paper.
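
A numpy sketch of this pre-processing, with uniform column sampling standing in for the paper's smarter sampling and an RBF kernel as an example:

    import numpy as np

    def rbf(A, B, gamma=0.5):
        # Gaussian kernel between the columns of A (d, n) and B (d, c)
        d2 = (A ** 2).sum(0)[:, None] + (B ** 2).sum(0)[None, :] - 2 * A.T @ B
        return np.exp(-gamma * d2)

    def virtual_samples(X, kernel, c, seed=0):
        # Build F (c, n) with F.T @ F ~ K via the Nystrom approximation,
        # so any *linear* dictionary learning can then run on F directly.
        rng = np.random.default_rng(seed)
        idx = rng.choice(X.shape[1], size=c, replace=False)
        C = kernel(X, X[:, idx])             # (n, c) sampled kernel columns
        W = C[idx, :]                        # (c, c) intersection block
        s, U = np.linalg.eigh(W)
        s = np.clip(s, 1e-10, None)          # guard tiny / negative eigenvalues
        return (U / np.sqrt(s)).T @ C.T      # F, satisfying F.T F = C W^{-1} C.T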

Trainlets: Dictionary Learning in High Dimensions

The work reported in this paper describes a novel Dictionary Learning (DL) algorithm that is capable of handling very large image patches. Whereas classical DL algorithms, such as K-SVD, can handle only small image patches (e.g., 8-by-8 pixels), the new Online Sparse Dictionary Learning (OSDL) algorithm can operate on patches of size 64-by-64 pixels while still being very effective and relatively quick. This is achieved by harnessing three key ingredients: (i) the learned dictionary is structured, formed as a multiplication of a fixed dictionary by a sparse matrix; (ii) the chosen fixed dictionary is a cropped-wavelets dictionary that exhibits no boundary problems; and (iii) in order to learn quickly, an online scheme for this DL task is proposed. The following freely available package contains our Matlab code to apply this algorithm and reproduce the results in the above-mentioned paper.
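
Ingredient (i) is easy to illustrate: in the sketch below the effective dictionary D = B A is never formed explicitly; a fast 2-D DCT stands in for the paper's cropped-wavelets base B, and all sizes are merely illustrative:

    import numpy as np
    from scipy.fft import idctn
    from scipy.sparse import random as sparse_random

    # Double-sparsity at work: the effective dictionary is D = B A, with B a
    # fixed analytic transform and A a sparse matrix, the actual learned
    # object. Applying D costs one sparse product plus one fast transform,
    # which is what keeps 64-by-64 patches tractable.
    d, m = 64 * 64, 2 * 64 * 64
    A = sparse_random(d, m, density=0.001, format="csc", random_state=0)  # sparse "learned" part
    a = sparse_random(m, 1, density=0.005, format="csc", random_state=1)  # a sparse representation
    coeffs = (A @ a).toarray().reshape(64, 64)   # apply A (cheap sparse product)
    patch = idctn(coeffs, norm="ortho")          # apply B as a fast transform
    print(patch.shape)                           # a synthesized 64x64 patch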

Single-Image Super Resolution via a Statistical Model

The work reported in this paper describes a scheme for single image super-resolution using a statistical prediction model based on sparse representations of low and high resolution image patches. The following freely available package contains our Matlab code to apply the suggested scheme on any test image in one of three scenarios (blur kernel and scale factor) considered in the above-mentioned paper. Please note that the training part of the code is not released in this package, since it is much heavier to run.

Patch-Ordering for Regularizing Inverse Problems

In an earlier work we have shown that extracting all the overlapping patches from an image and ordering them to form the shortest path can be used in various ways to gain non-local processing of visual data. In our 2016 paper published in the SIAM Journal on Imaging Sciences, Grisha Vaksman and I show how this can be used to regularize general inverse problems in imaging. We demonstrate the proposed scheme on a diverse set of problems: (i) severe Poisson image denoising, (ii) Gaussian image denoising, (iii) image deblurring, and (iv) single-image super-resolution. This package contains all the necessary code to reproduce the experiments in this paper.

Image Processing by Patch-Ordering

In a recent work with Idan Ram and Israel Cohen, we proposed handling elementary image processing tasks with novel algorithms that are based on patch ordering. The core idea is to extract all the overlapping patches from the image and permute them to form the shortest possible path. Once ordered, one can apply simple 1D filtering to the resulting ordered values and obtain highly effective results.
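
A toy version of the whole pipeline, using a greedy nearest-neighbor walk as a cheap proxy for the shortest-path ordering (the papers use more efficient approximations), for a small grayscale image given as a numpy array:

    import numpy as np

    def patch_order_smooth(img, p=6, h=2.0):
        # Extract all overlapping p-by-p patches, order them by a greedy
        # nearest-neighbor walk, smooth the center-pixel sequence with a 1D
        # Gaussian filter in that order, and write the values back.
        # Quadratic-time: a toy, suitable only for small images.
        H, W = img.shape
        ys, xs = np.meshgrid(np.arange(H - p + 1), np.arange(W - p + 1), indexing="ij")
        ys, xs = ys.ravel(), xs.ravel()
        P = np.stack([img[y:y + p, x:x + p].ravel() for y, x in zip(ys, xs)])
        n = len(P)
        visited = np.zeros(n, dtype=bool)
        order = [0]
        visited[0] = True
        for _ in range(n - 1):
            d = ((P - P[order[-1]]) ** 2).sum(axis=1)
            d[visited] = np.inf
            nxt = int(d.argmin())
            order.append(nxt)
            visited[nxt] = True
        centers = img[ys + p // 2, xs + p // 2][order]
        k = np.exp(-np.arange(-7, 8) ** 2 / (2 * h ** 2))
        smoothed = np.convolve(centers, k / k.sum(), mode="same")
        out = img.copy()
        out[ys[order] + p // 2, xs[order] + p // 2] = smoothed
        return out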

The work reported in this IEEE-TIP paper describes the very core idea behind this novel approach and demonstrates it for denoising and inpainting. We provide this freely available package, which contains all the Matlab code to reproduce the results in the paper, along with a demonstration of the core idea of ordering and the regularity it leads to, which explains why this method works. A second Matlab package is also available, reproducing the results presented in our ICASSP-2013 paper, which treats the 1D ordered signal with the Non-Local-Means filter.

Boosted Dictionary Learning

The work reported in this paper describes a fascinating joint work with Leslie N. Smith from the Naval Research Laboratory (NRL) in Washington, DC. This work proposes two simple yet very effective ways to improve dictionary learning algorithms: (i) improving the dictionary update stage by fixing the supports and seeking the best non-zero values AND the corresponding atoms; and (ii) propagating the pursuit results from one iteration to the next, thereby saving computations. The following freely available package contains all our Matlab code to reproduce the results of the above-mentioned paper.
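
Improvement (i) is easy to sketch for the coefficient half: given fixed supports, the optimal non-zero values per signal come from a small least-squares problem (the paper re-optimizes the atoms as well, under the same fixed supports):

    import numpy as np

    def update_values_fixed_support(D, Y, supports):
        # supports[i] lists the atom indices used by signal Y[:, i];
        # with the supports frozen, the best non-zero coefficient values
        # are given by a per-signal least-squares fit.
        X = np.zeros((D.shape[1], Y.shape[1]))
        for i, s in enumerate(supports):
            X[s, i] = np.linalg.lstsq(D[:, s], Y[:, i], rcond=None)[0]
        return X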

Analysis K-SVD

The work reported in this paper describes a dictionary learning algorithm for the analysis model, an alternative viewpoint to sparse and redundant representations. This model assumes that multiplication of the signal by an appropriate analysis operator (dictionary) leads to a sparse outcome. Specifically, the signal lies in a low-dimensional subspace determined by the dictionary atoms (rows in the operator) indexed in the signal’s co-support (the indices of the zeros in the sparse representation). To learn the analysis dictionary, we take an approach that is parallel and similar to the one adopted by the K-SVD algorithm that serves the corresponding problem in the synthesis model. The effectiveness of our proposed dictionary learning is demonstrated both on synthetic data and real images, showing a successful and meaningful recovery of the analysis dictionary. The following freely available package contains all our Matlab code to reproduce the results of the above-mentioned paper.
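
A tiny numerical illustration of the model, with all sizes arbitrary: a signal built to be orthogonal to the co-support rows of a random operator indeed yields a sparse analysis representation:

    import numpy as np

    rng = np.random.default_rng(0)
    d, p, cosupport_size = 20, 30, 12
    Omega = rng.standard_normal((p, d))          # analysis dictionary (rows = atoms)
    cosup = rng.choice(p, size=cosupport_size, replace=False)
    # the signal lies in the null space of the co-support rows:
    _, _, Vt = np.linalg.svd(Omega[cosup])
    null_basis = Vt[cosupport_size:].T           # (d, d - cosupport_size)
    x = null_basis @ rng.standard_normal(d - cosupport_size)
    z = Omega @ x                                # analysis representation
    print("zeros on the co-support:", np.allclose(z[cosup], 0))
    print("non-zeros elsewhere:", np.count_nonzero(np.abs(z) > 1e-8))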

The Boltzmann Machine Model

The work reported in this paper describes a statistical model that takes into account dependencies between the dictionary atoms and shows how this model can be used for sparse signal recovery. We follow the suggestion of several recent works (see the paper for more details) and model the sparsity pattern by a Boltzmann machine (BM), a commonly used graphical model. In this work we address topics like pursuit of the sparse representations and learning of the Boltzmann parameters. The effectiveness of our proposed approach is demonstrated both on synthetic data and image patches. The following freely available package contains all our Matlab code to reproduce the results of the above-mentioned paper.
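
The BM prior itself is a one-liner; the toy example below shows how an off-diagonal interaction makes two atoms prefer to be active together, something an i.i.d. sparsity model cannot express:

    import numpy as np

    def bm_log_prior(s, b, W):
        # Unnormalized log-probability of a sparsity pattern s in {-1,+1}^m
        # under a Boltzmann machine: log p(s) = b's + s'Ws/2 + const.
        # Off-diagonal entries of W encode dependencies between atoms;
        # W = 0 recovers the usual independent sparsity model.
        return b @ s + 0.5 * s @ W @ s

    b = np.array([-1.0, -1.0, -1.0])             # biases favoring sparsity
    W = np.zeros((3, 3))
    W[0, 1] = W[1, 0] = 2.0                      # atoms 0 and 1 co-activate
    on_both = np.array([1, 1, -1])
    on_one = np.array([1, -1, -1])
    print(bm_log_prior(on_both, b, W), ">", bm_log_prior(on_one, b, W))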

Single-Image Super-Resolution

The work reported in this paper describes a single-image super-resolution algorithm based on a pair of dictionaries and sparse representations. This work is a direct extension of an earlier work by Yang, Wright, Huang and Ma from UIUC (see the paper for more details). The following freely available package contains all our Matlab code to reproduce the results of the above-mentioned paper, along with comparisons to Yang's work. This package contains the K-SVD and OMP code, so as to keep it complete. Furthermore, it also contains the software supplied to us by Jianchao Yang for reproducing their results. We would like to thank Jianchao for sharing his code with us and allowing us to include it in this package; we should note that a more up-to-date version of his code has been posted more recently on his web-page.

My Book's Matlab Package

My book Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing (Springer, 2010) is accompanied by a Matlab package that can be downloaded freely. This package contains a long series of functions and scripts that cover most of the algorithms described in the book, and it reproduces most of the figures in it. More information on the book itself can be found on Amazon.

SparseLab

SparseLab is a Matlab software package managed by David L. Donoho and his team. It provides various tools for sparse solution of linear systems, least-squares with sparsity, various pursuit algorithms, and more. It also includes Matlab simulations that reproduce results from the following papers that I coauthored (a small basis-pursuit sketch follows the list):

  • David Donoho and Michael Elad, On the Stability of the Basis Pursuit in the Presence of Noise.
  • David Donoho, Michael Elad, and Vladimir Temlyakov, Stable Recovery of Sparse Overcomplete Representations in the Presence of Noise.
  • Michael Elad, Optimized Projections for Compressed-Sensing.
  • Michael Elad, Why Simple Shrinkage is Still Relevant for Redundant Representations?
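
As promised above, a minimal sketch of the basis-pursuit problem at the heart of these tools, posed as a linear program via scipy (SparseLab itself offers far richer solvers):

    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, b):
        # min ||x||_1 s.t. Ax = b, via the standard LP split x = u - v, u, v >= 0
        m = A.shape[1]
        res = linprog(np.ones(2 * m), A_eq=np.hstack([A, -A]), b_eq=b,
                      bounds=[(0, None)] * (2 * m))
        return res.x[:m] - res.x[m:]

    rng = np.random.default_rng(0)
    A = rng.standard_normal((15, 40))        # 15 random measurements in dimension 40
    x0 = np.zeros(40)
    x0[[3, 17, 29]] = [1.0, -2.0, 0.5]       # a 3-sparse vector
    x_hat = basis_pursuit(A, A @ x0)
    print("max recovery error:", np.abs(x_hat - x0).max())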

K-SVD

In a joint work with Michal Aharon and Freddy Bruckstein, we studied ways to train a dictionary that leads to sparse representations of training signals. The developed algorithm, called K-SVD, along with some demonstrations of its use for denoising, is available as a Matlab toolbox package that was organized by Ron Rubinstein and can be downloaded from his web-page.
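
For readers who want the gist without the toolbox, here is a compact numpy sketch of the K-SVD loop: a simple OMP for the sparse-coding stage, and rank-1 SVD updates for the atoms. The toolbox implementation is far more efficient:

    import numpy as np

    def omp(D, y, k):
        # Orthogonal Matching Pursuit: greedily select k atoms for signal y
        r, s = y.copy(), []
        for _ in range(k):
            s.append(int(np.abs(D.T @ r).argmax()))
            coef = np.linalg.lstsq(D[:, s], y, rcond=None)[0]
            r = y - D[:, s] @ coef
        x = np.zeros(D.shape[1])
        x[s] = coef
        return x

    def ksvd(Y, m, k, iters=20, seed=0):
        # Alternate sparse coding (OMP) with per-atom rank-1 SVD updates
        rng = np.random.default_rng(seed)
        D = rng.standard_normal((Y.shape[0], m))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(iters):
            X = np.stack([omp(D, Y[:, i], k) for i in range(Y.shape[1])], axis=1)
            for j in range(m):
                users = np.nonzero(X[j])[0]      # signals using atom j
                if users.size == 0:
                    continue
                E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
                U, S, Vt = np.linalg.svd(E, full_matrices=False)
                D[:, j] = U[:, 0]                # new atom
                X[j, users] = S[0] * Vt[0]       # new coefficient row
        return D, X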

Super-resolution

A joint work with Prof. Peyman Milanfar and his students, Sina Farsiu and Dirk Robinson, resulted in a MATLAB software package for super-resolution. This package contains various algorithms we developed in our joint work (robust super-resolution, mosaiced-super-resolution, dynamic super-resolution) and more. The software is available here and is managed by Peyman.

Polar-FFT

Our Polar-FFT Matlab package (joint work with Amir Averbuch, Raphy Coifman, David Donoho, and Moshe Israeli) can be downloaded here. This is the first version we have released, which means we will probably need to update it based on the feedback we receive. The current package contains many functions and programs that we used as part of the development of the fast polar Fourier transform, including the software that generated the figures in the paper. I suggest starting with the file A_Content.m to see the various functions supplied. Two things that are still very much missing are the inverse transform and a MEX implementation of the main parts, which we hope to add soon.
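
For reference, the quantity the fast transform computes can be evaluated by brute force; the sketch below samples the 2-D Fourier transform of an image on a polar grid directly. It is slow but serves as a correctness check, and its grid conventions are illustrative and may differ from the package's:

    import numpy as np

    def polar_dft(img, n_theta=32, n_r=32):
        # Direct evaluation of the 2-D Fourier transform on a polar
        # frequency grid: O(N^2 * n_theta * n_r). The point of a fast polar
        # Fourier transform is to obtain essentially these values quickly.
        N = img.shape[0]
        ks = np.arange(N) - N // 2                           # centered spatial indices
        thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
        rs = np.linspace(-np.pi, np.pi, n_r, endpoint=False)  # signed radial frequencies
        out = np.empty((n_theta, n_r), dtype=complex)
        for i, t in enumerate(thetas):
            wx, wy = rs * np.cos(t), rs * np.sin(t)          # frequencies along one ray
            ex = np.exp(-1j * np.outer(ks, wx))              # (N, n_r) in x
            ey = np.exp(-1j * np.outer(ks, wy))              # (N, n_r) in y
            out[i] = np.einsum("xr,yr,yx->r", ex, ey, img)
        return out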