Talks

Deep Learning

Sparse Modeling of Data and its Relation to Deep Learning
November 1, 2019.
Princeton - DeepMath Conference

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model and its features, and then turn to describe two special cases of it – the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep-learning. Alongside this main message of bringing a theoretical backbone to deep-learning, another central message will accompany us throughout the talk: generative models for describing data sources enable a systematic way to design algorithms, while also providing a complete mechanism for a theoretical analysis of these algorithms’ performance. This talk is meant for newcomers to this field – no prior knowledge of sparse approximation is assumed.
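
For orientation, the sparse approximation model referred to throughout these abstracts can be stated compactly; this is the standard textbook formulation, added here for readers' convenience rather than taken from the talk itself. A signal y in R^n is assumed to be a combination of few columns (atoms) of a dictionary D in R^{n x m}, and the pursuit task reads

\[
\hat{\gamma} \;=\; \arg\min_{\gamma}\ \|\gamma\|_0 \quad \text{subject to} \quad \|y - D\gamma\|_2 \le \epsilon ,
\]

where \|\gamma\|_0 counts the nonzeros of \gamma. The CSC and ML-CSC models mentioned above are obtained by imposing convolutional and cascaded structure on D.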

This is an INVITED talk at this event.
Deep Learning – The Revolution that Will Change Our Lives
October 29, 2019.
The 50th Anniversary of the Computer Science Faculty at the Technion

Deep learning is a field that will change our lives – correction – it is already changing our lives. In this short talk, given at the 50th anniversary of the Computer Science faculty, I tell the fascinating story of this field and the upheavals it has gone through over the past sixty years. This talk is intended for a general audience and assumes no prior knowledge.

Sparse Modelling of Data and its Relation to Deep Learning
June 27, 2019.
ETH - FIM - Institute for Mathematical Research: Series of Lectures on Waves and Imaging (III)

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model and its features, and then turn to describe two special cases of it – the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep-learning. Alongside this main message of bringing a theoretical backbone to deep-learning, another central message will accompany us throughout the talk: generative models for describing data sources enable a systematic way to design algorithms, while also providing a complete mechanism for a theoretical analysis of these algorithms’ performance. This talk is meant for newcomers to this field – no prior knowledge of sparse approximation is assumed.

This is a KEYNOTE talk at this event.
Sparse Modeling and Deep Learning
January 9, 2019.
QBI (Quantitative BioImaging Conference) 2019, Rennes, France

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model and its features, and then turn to describe two special cases of it – the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep-learning. Alongside this main message of bringing a theoretical backbone to deep-learning, another central message will accompany us throughout the talk: generative models for describing data sources enable a systematic way to design algorithms, while also providing a complete mechanism for a theoretical analysis of these algorithms’ performance. This talk is meant for newcomers to this field – no prior knowledge of sparse approximation is assumed.

This is a KEYNOTE talk at this event.
Sparse Modelling of Data and its Relation to Deep Learning
November 27, 2018.
EUVIP 2018, Tampere, Finland

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model, and then turn to describe two special cases of it – the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep-learning. This talk is meant for newcomers to these fields – no prior knowledge of sparse approximation is assumed.

This is a KEYNOTE talk at EUVIP 2018.
Sparse Modeling and Deep Learning
July 14, 2018.
ICML 2018, Stockholm, Sweden

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk, we describe a special case of this model – the multi-layered convolutional sparse coding (ML-CSC) construction. As we will carefully show, ML-CSC provides a solid theoretical foundation to the field of deep learning, explaining the architectures in use, their performance limits, and prospects for future alternatives.
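
For concreteness, the ML-CSC construction can be written as a cascade of sparse models (the notation follows the papers underlying this talk; \lambda_i bounds the density of the i-th representation):

\[
x = D_1 \gamma_1, \qquad \gamma_1 = D_2 \gamma_2, \qquad \ldots, \qquad \gamma_{K-1} = D_K \gamma_K, \qquad \|\gamma_i\|_{0,\infty} \le \lambda_i ,
\]

where each D_i is a convolutional dictionary and \|\cdot\|_{0,\infty} measures the sparsity of the densest local stripe of \gamma_i. The abstract's claim is that estimating the \gamma_i from a corrupted x is precisely what the forward pass of a CNN attempts.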

This is an invited talk at an ICML workshop titled "The theory of deep learning", organized by Rene Vidal, Joan Bruna and Raja Giryes. The same talk was given at a symposium on deep learning at ICSEE in Eilat on December 13th, 2018.
Sparse Modelling in Image Processing and Deep Learning
July 9, 2018.
The 10th IEEE Sensor Array and Multichannel (SAM) Signal Processing Workshop, Sheffield, UK

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model and its features, and then turn to describe two special cases of it – the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep-learning. Alongside this main message of bringing a theoretical backbone to deep-learning, another central message will accompany us throughout the talk: generative models for describing data sources enable a systematic way to design algorithms, while also providing a complete mechanism for a theoretical analysis of these algorithms’ performance. This talk is meant for newcomers to this field – no prior knowledge of sparse approximation is assumed.

This is a PLENARY talk that was given at SAM 2018, in Sheffield, UK. This is joint work with Vardan Papyan, Yaniv Romano, and Jeremias Sulam.
Sparse Modeling in Image Processing and Deep Learning
March 6, 2018.
IMVC 2018, Tel-Aviv, Israel

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model, and then turn to describe two special cases of it – the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to … deep-learning. This talk is meant for newcomers to these fields – no prior knowledge of sparse approximation is assumed.

This is a KEYNOTE talk that was given at IMVC 2018, in Tel-Aviv. This talk was also given at a seminar at the Hebrew University of Jerusalem on May 21st, 2018. This is joint work with Vardan Papyan, Yaniv Romano, and Jeremias Sulam.
Sparse Modeling in Image Processing and Deep Learning
February 5-9, 2018.
IPAM 2018, Los Angeles, USA

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we describe two special cases of this model – the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). We show that the projection of signals (a.k.a. pursuit) onto the ML-CSC model leads to various deep convolutional neural network architectures. This connection brings a fresh view to CNN, as we are able to accompany the above by theoretical claims such as uniqueness of the representations throughout the network, and their stable estimation, all guaranteed under simple local sparsity conditions. The take-home message from this talk is this: the ML-CSC model can serve as the theoretical foundation to deep-learning.
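
The simplest instance of this pursuit-to-architecture connection is a one-layer identity: nonnegative soft thresholding of the correlations D^T y coincides with a ReLU layer whose weights are D^T and whose bias plays the role of the threshold. A minimal numpy sketch of that single step (the sizes and values are illustrative, not from the talk):

import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 128                     # signal / representation sizes (illustrative)
D = rng.standard_normal((n, m))    # dense stand-in for a convolutional dictionary
y = rng.standard_normal(n)         # input signal
beta = 0.5                         # threshold, acting as a (negative) bias

soft = np.maximum(D.T @ y - beta, 0.0)        # nonnegative soft thresholding
relu = np.maximum(0.0, D.T @ y + (-beta))     # ReLU layer with bias -beta
assert np.allclose(soft, relu)                # the two are the same operation

Stacking this step layer after layer gives the layered thresholding pursuit for the ML-CSC model, which is exactly the CNN forward pass the abstract refers to.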

This is an invited talk that was given at IPAM (UCLA) during the "New Deep Learning Techniques" program, February 5-9, 2018, in Los Angeles, USA. This is joint work with Vardan Papyan, Yaniv Romano, and Jeremias Sulam.
Sparse Modeling in Image Processing and Deep Learning
September 18, 2017.
ICIP 2017, Beijing, China

Sparse approximation is a well-established theory, with a profound impact on the fields of signal and image processing. In this talk we start by presenting this model and its features, and then turn to describe two special cases of it: the convolutional sparse coding (CSC) and its multi-layered version (ML-CSC). Amazingly, as we will carefully show, ML-CSC provides a solid theoretical foundation to the field of deep-learning. Alongside this main message of bringing a theoretical backbone to deep-learning, another central message will accompany us throughout the talk: generative models for describing data sources enable a systematic way to design algorithms, while also providing a complete mechanism for a theoretical analysis of these algorithms’ performance. This talk is meant for newcomers to this field – no prior knowledge of sparse approximation is assumed.

This is a KEYNOTE talk that was given at ICIP 2017, in Beijing, China. This talk summarizes portions of the PhD work of my three students, Vardan Papyan, Yaniv Romano, and Jeremias Sulam.
From Sparse Representations to Deep Learning
September 4-8, 2017.
Summer School "Signal Processing meets Deep Learning" in Capri

Within the wide field of sparse approximation, convolutional sparse coding (CSC) has gained increasing attention in recent years. This model assumes a structured dictionary built as a union of banded circulant matrices. Most of the attention has been devoted to the practical side of CSC, proposing efficient algorithms for the pursuit problem, and identifying applications that benefit from this model. Interestingly, a systematic theoretical understanding of CSC seems to have been left aside, with the assumption that the existing classical results are sufficient. In this talk we start by presenting a novel analysis of the CSC model and its associated pursuit. Our study is based on the observation that while being global, this model can be characterized and analyzed locally.

We show that uniqueness of the representation, its stability with respect to noise, and successful greedy or convex recovery are all guaranteed assuming that the underlying representation is locally sparse. These new results are much stronger and more informative, compared to those obtained by deploying the classical sparse theory. Armed with these new insights, we proceed by proposing a multi-layer extension of this model, ML-CSC, in which signals are assumed to emerge from a cascade of CSC layers. This, in turn, is shown to be tightly connected to Convolutional Neural Networks (CNN), so much so that the forward pass of the CNN is in fact the thresholding pursuit serving the ML-CSC model. This connection brings a fresh view to CNN, as we are able to attribute to this architecture theoretical claims such as uniqueness of the representations throughout the network, and their stable estimation, all guaranteed under simple local sparsity conditions. Lastly, identifying the weaknesses in the above scheme, we propose an alternative to the forward-pass algorithm, which is both tightly connected to deconvolutional and recurrent neural networks, and has better theoretical guarantees.
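
The "alternative to the forward-pass" mentioned at the end replaces the single thresholding step per layer with a full basis-pursuit solve, e.g. via ISTA, whose unrolled iterations are what give the recurrent-network flavor. A minimal sketch under these assumptions (dense matrices stand in for the convolutional dictionaries; function names and parameters are illustrative, not from the papers):

import numpy as np

def ista(D, y, lam, n_iters=200):
    # min_g 0.5*||y - D g||^2 + lam*||g||_1 via ISTA; each iteration is an
    # affine map followed by a soft threshold -- unrolled, a recurrent net.
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    g = np.zeros(D.shape[1])
    for _ in range(n_iters):
        r = g + D.T @ (y - D @ g) / L          # gradient step on the data term
        g = np.sign(r) * np.maximum(np.abs(r) - lam / L, 0.0)  # soft threshold
    return g

def layered_basis_pursuit(dictionaries, y, lams, n_iters=200):
    # Peel off one representation per CSC layer: estimate gamma_i from gamma_{i-1}.
    g = y
    for D, lam in zip(dictionaries, lams):
        g = ista(D, g, lam, n_iters)
    return g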

This 3-hour talk was given at the Summer School "Signal Processing meets Deep Learning" in Capri. This talk summarizes portions of the PhD work of my three students, Vardan Papyan, Yaniv Romano, and Jeremias Sulam.
A Tale of Signal Modeling Evolution: SparseLand to CSC to CNN
June 1st, 2017
National University of Singapore

Within the wide field of sparse approximation, convolutional sparse coding (CSC) has gained increasing attention in recent years. This model assumes a structured dictionary built as a union of banded circulant matrices. Most of the attention has been devoted to the practical side of CSC, proposing efficient algorithms for the pursuit problem, and identifying applications that benefit from this model. Interestingly, a systematic theoretical understanding of CSC seems to have been left aside, with the assumption that the existing classical results are sufficient. In this talk we start by presenting a novel analysis of the CSC model and its associated pursuit. Our study is based on the observation that while being global, this model can be characterized and analyzed locally. We show that uniqueness of the representation, its stability with respect to noise, and successful greedy or convex recovery are all guaranteed assuming that the underlying representation is locally sparse.

These new results are much stronger and more informative, compared to those obtained by deploying the classical sparse theory. Armed with these new insights, we proceed by proposing a multi-layer extension of this model, ML-CSC, in which signals are assumed to emerge from a cascade of CSC layers. This, in turn, is shown to be tightly connected to Convolutional Neural Networks (CNN), so much so that the forward pass of the CNN is in fact the thresholding pursuit serving the ML-CSC model. This connection brings a fresh view to CNN, as we are able to attribute to this architecture theoretical claims such as uniqueness of the representations throughout the network, and their stable estimation, all guaranteed under simple local sparsity conditions. Lastly, identifying the weaknesses in the above scheme, we propose an alternative to the forward-pass algorithm, which is both tightly connected to deconvolutional and recurrent neural networks, and has better theoretical guarantees.

This talk was given at the workshop on Frame Theory and Sparse Representation for Complex Data, at the Institute for Mathematical Sciences (IMS) - National University of Singapore. This talk summarizes portions of the PhD work of my three students, Vardan Papyan, Yaniv Romano, and Jeremias Sulam.
Style Transfer via Texture Synthesis
March 19th, 2017
Hebrew University

Style transfer is a process of migrating a style from a given image to the content of another, synthesizing a new image which is an artistic mixture of the two. Recent work on this problem adopting Convolutional Neural Networks (CNN) ignited a renewed interest in this field, due to the very impressive results obtained. There exists an alternative path towards handling the style-transfer task, via generalization of texture-synthesis algorithms. I will present a novel such style-transfer algorithm that extends the texture-synthesis work of Kwatra et al. (2005), while aiming to produce stylized images that approach the quality of the CNN-based ones.
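
To give a feel for the texture-synthesis route, here is a deliberately stripped-down grayscale toy of the patch matching-and-averaging loop at the heart of Kwatra-style synthesis, with a crude blending step toward the content image. This is my illustrative reconstruction, not the algorithm of the talk, which is multi-scale, handles color, and adds further fidelity terms; all names and parameters here are hypothetical:

import numpy as np

def extract_patches(img, p, stride):
    # Collect all p-by-p patches of a 2D float image, flattened to rows.
    H, W = img.shape
    return np.array([img[i:i+p, j:j+p].ravel()
                     for i in range(0, H - p + 1, stride)
                     for j in range(0, W - p + 1, stride)])

def toy_style_transfer(content, style, p=8, stride=4, n_iters=5, w_content=0.3):
    # Match each patch of the current estimate to its nearest style patch,
    # average the overlaps, then pull the result back toward the content.
    content = content.astype(float)
    style_patches = extract_patches(style.astype(float), p, stride)
    x = content.copy()
    H, W = x.shape
    for _ in range(n_iters):
        acc = np.zeros_like(x)
        cnt = np.zeros_like(x)
        for i in range(0, H - p + 1, stride):
            for j in range(0, W - p + 1, stride):
                q = x[i:i+p, j:j+p].ravel()
                k = np.argmin(((style_patches - q) ** 2).sum(axis=1))
                acc[i:i+p, j:j+p] += style_patches[k].reshape(p, p)
                cnt[i:i+p, j:j+p] += 1
        x = acc / np.maximum(cnt, 1)                   # averaging (synthesis) step
        x = (1 - w_content) * x + w_content * content  # content-fidelity pull
    return x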

This talk was given at the computer vision seminar at the Hebrew University. This is joint work with Peyman Milanfar (Google Research).