GIACOMO BORACCHI - TEACHING
 

Learning Sparse Representations for Image and Signal Modeling  


A.Y. 2022/2023, PhD Course, Politecnico di Milano


Mission:
The main goal of this course is to provide students with an understanding of the most important aspects of the theory underlying sparse representations and, more generally, of sparsity as a form of regularization in learning problems. Students will have the opportunity to develop and understand the main algorithms for learning sparse models and computing sparse representations. These methods have wide applicability in computer science and will provide a useful background for the students' research.
More information is available on the Course program page.

Detailed Program (tentative):
  • Basics on linear orthonormal representations
    • Introduction: getting familiar with transform-domain representation and sparsity.
    • Background: linear representations, norms, orthonormal bases/projections, sparsity in the strict ell0 sense.
    • Computer Lab: projections w.r.t. an orthonormal basis and computing sparse solutions; denoising is introduced as a running example (see the sketch after this list).
  • Redundant representations and Sparse Coding (Minimum ell0 norm)
    • Sparse Coding w.r.t. redundant dictionaries. Greedy Algorithms: Matching Pursuit, Orthogonal Matching Pursuit.
    • Computer Lab: Greedy Sparse-Coding Algorithms (see the OMP sketch after this list).
    • An overview of theoretical guarantees and convergence results.
  • Convex Relaxation of the Sparse Coding Problem (Minimum ell1 norm)
    • Sparse coding as a convex optimization problem, connections with ell0 solutions.
    • BPDN and the LASSO: the two formulations, and ell1 sparsity as a regularization prior in linear regression.
    • Other norms promoting sparsity: visual intuition. Theoretical guarantees.
    • Minimum ell1 Sparse Coding Algorithms: Iterative Reweighted Least Squares, Proximal Methods, Iterative Soft Thresholding, ADMM.
    • Computer Lab: Minimum ell1 Sparse Coding Algorithms (see the ISTA sketch after this list).
  • Dictionary Learning
    • Dictionary Learning Algorithms: Gradient Descent, MOD, KSVD
    • Task-Driven Dictionary Learning for Supervised learning tasks
    • Computer Lab: Dictionary Learning (see the MOD sketch after this list).
  • Structured Sparsity
    • Joint Sparsity, Group Sparsity, Sparse Coding Algorithms, LASSO and Group LASSO
    • Extended Problems: Matrix Completion / Robust PCA
    • Computer Lab: Structured Sparsity Algorithms (see the group soft-thresholding sketch after this list).
  • Extended sparse models
    • Convolutional Sparsity and its connections with Convolutional Networks.
    • Double Sparsity
  • Sparsity in engineering applications
    • Relevant unsupervised learning problems involving sparsity as a regularization prior: Image Denoising, Inpainting (see the sketch after this list), Super-resolution, Deblurring.
    • Supervised learning problems addressed via Dictionary Learning and Sparse Coding: Classification and Anomaly Detection.
    • Computer Lab: Classification and Anomaly Detection in images
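
The following is a minimal sketch of the denoising running example from the first Computer Lab, in Python/NumPy. It assumes a 1-D orthonormal DCT as the basis; the test signal, noise level, and the 3*sigma threshold are illustrative choices, not the lab's exact setup.

    import numpy as np
    from scipy.fft import dct, idct  # orthonormal 1-D DCT when norm='ortho'

    rng = np.random.default_rng(0)

    # Smooth test signal corrupted by white Gaussian noise
    n, sigma = 256, 0.1
    t = np.linspace(0, 1, n)
    clean = np.sin(2 * np.pi * 3 * t)
    noisy = clean + sigma * rng.standard_normal(n)

    # Analysis: coefficients w.r.t. the orthonormal DCT basis
    coeffs = dct(noisy, norm='ortho')

    # Sparsify by hard thresholding, keeping only the large coefficients
    # (3*sigma is a common rule of thumb for an orthonormal basis)
    coeffs[np.abs(coeffs) < 3 * sigma] = 0.0

    # Synthesis: invert the orthonormal transform
    denoised = idct(coeffs, norm='ortho')
    print(f"noisy MSE {np.mean((noisy - clean)**2):.4f} -> "
          f"denoised MSE {np.mean((denoised - clean)**2):.4f}")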
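
Orthogonal Matching Pursuit, the main greedy minimum-ell0 algorithm in the program, fits in a few lines. A sketch in Python/NumPy; the synthetic dictionary and sparsity level are illustrative.

    import numpy as np

    def omp(D, y, k):
        # Orthogonal Matching Pursuit: greedily select k atoms of D to
        # approximate y. D is (n, m) with unit-norm columns, y is (n,).
        support, x = [], np.zeros(D.shape[1])
        residual = y.copy()
        for _ in range(k):
            # Greedy step: pick the atom most correlated with the residual
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            # Orthogonally project y onto the span of the selected atoms
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        x[support] = coeffs
        return x

    # Tiny synthetic test: recover a 3-sparse vector from random atoms
    rng = np.random.default_rng(1)
    D = rng.standard_normal((20, 50))
    D /= np.linalg.norm(D, axis=0)          # normalize atoms
    x_true = np.zeros(50)
    x_true[rng.choice(50, 3, replace=False)] = rng.standard_normal(3)
    print("support:", np.nonzero(omp(D, D @ x_true, k=3))[0])

Matching Pursuit is the same greedy loop without the orthogonal re-projection: it only updates the coefficient of the newly selected atom.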
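
The proximal iteration behind the minimum-ell1 lab can be sketched just as compactly. This is plain ISTA for the LASSO formulation min_x 0.5*||y - D x||_2^2 + lam*||x||_1; the fixed iteration count is an illustrative choice.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1: componentwise shrinkage
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(D, y, lam, n_iter=500):
        # Iterative Soft Thresholding: a gradient step on the smooth
        # data-fidelity term followed by the ell1 proximal step, with
        # step size 1/L, where L = ||D||_2^2 is the Lipschitz constant
        # of the gradient
        L = np.linalg.norm(D, 2) ** 2
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        return x

FISTA adds a momentum term on top of the same proximal map, and ADMM reaches the same minimizer through variable splitting.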
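
Dictionary learning alternates sparse coding with a dictionary update, and in MOD the update is a single least-squares solve. A sketch under the assumption that the omp() function from the greedy sparse coding example above is in scope:

    import numpy as np

    def mod_dictionary_learning(Y, m, k, n_iter=20, seed=0):
        # Method of Optimal Directions (MOD). Alternate between:
        #   1. sparse coding of each training signal (omp() defined above)
        #   2. the closed-form dictionary update D = Y X^+ (least squares)
        rng = np.random.default_rng(seed)
        n, N = Y.shape
        D = rng.standard_normal((n, m))
        D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
        for _ in range(n_iter):
            X = np.column_stack([omp(D, Y[:, i], k) for i in range(N)])
            D = Y @ np.linalg.pinv(X)                  # MOD update
            D /= np.linalg.norm(D, axis=0) + 1e-12     # renormalize atoms
        return D, X

K-SVD replaces the global least-squares update with a sequence of rank-one updates that refit each atom together with its coefficients.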
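
For structured sparsity, the new ingredient relative to the LASSO is the group-wise proximal operator: substituting the block shrinkage below for soft_threshold() in the ISTA sketch turns it into a Group LASSO solver. The grouping and threshold here are illustrative.

    import numpy as np

    def group_soft_threshold(x, groups, tau):
        # Proximal operator of tau * sum_g ||x_g||_2 (Group LASSO penalty):
        # each group is shrunk radially, so weak groups vanish as a whole
        out = np.zeros_like(x)
        for g in groups:
            norm_g = np.linalg.norm(x[g])
            if norm_g > tau:
                out[g] = (1 - tau / norm_g) * x[g]
        return out

    # The first (weak) group is zeroed entirely; the second is shrunk
    x = np.array([0.1, -0.2, 0.1, 2.0, -1.5, 1.0])
    groups = [np.arange(0, 3), np.arange(3, 6)]
    print(group_soft_threshold(x, groups, tau=0.5))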
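
Among the applications, inpainting shows how the same sparse model transfers across tasks: sparse-code only the observed pixels against the correspondingly masked dictionary, then synthesize the full patch. A sketch, again assuming omp() from above is in scope; the norm bookkeeping compensates for the energy the mask removes from each atom.

    import numpy as np

    def inpaint_patch(D, y, mask, k):
        # D    : (n, m) dictionary with unit-norm atoms
        # y    : (n,) patch; entries where mask is False are missing
        # mask : (n,) boolean, True where the pixel is observed
        Dm = D[mask, :]
        norms = np.linalg.norm(Dm, axis=0) + 1e-12
        x = omp(Dm / norms, y[mask], k)   # code the observed pixels only
        return D @ (x / norms)            # synthesize the full patch
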
Calendar:
  • 12/04/2023, from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor, Room 023
  • 19/04/2023, from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor, Room 023
  • 26/04/2023, from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor, Room 023
  • 03/05/2023, from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor, Room 023
  • 10/05/2023, from 14:00 to 18:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor, Room 023
  • 17/05/2023, from 14:00 to 19:00, Sala Conferenze Emilio Gatti, Building 20, Ground Floor, Room 023
Lectures will be held in person and streamed on Cisco Webex (Giacomo Boracchi's room). In-person attendance is required to fulfill the course requirements; exceptions can be granted, upon request, to students who are abroad for PhD-related reasons.


Teaching Materials: A hard copy of the notes will be provided to all participants. Additional teaching materials will be uploaded to this website before each lecture starts.

Lectures:
  • First Lecture: Background on Sparsity and Transform Domain Signal Processing
  • Second Lecture: Minimum ell0 algorithms
  • Third Lecture: Dictionary Learning algorithms, Inpainting, and the theory behind ell0
  • Fourth Lecture: Minimum ell1 algorithms
  • Fifth Lecture: IRLS, Structured Sparsity, Sparsity in Statistics
  • Sixth Lecture: Applications and Convolutional Sparsity

Final Assessment:
  • Final assessment will be based on class participation and on a final discussion about the course and the assignments. Please work through as many of the assignments and code exercises provided during classes as you can.
  • If you are particularly interested, we can agree on a project on the course topics.
  • For any enquiry or when you are ready for the final discussion, just drop me an email.