GIACOMO BORACCHI - TEACHING
 

Advanced Deep Learning Models and Methods  
AA 2021/2022, PhD Course, Politecnico di Milano


Overview:
The course presents advanced learning problems and how deep learning models have been successfully applied to solve them. Models are considered advanced either because they handle non-conventional data types (e.g., graphs), because of the conditions under which the learner has to operate (lack or shortage of supervision), or because the problem itself is extremely challenging (e.g., generating a natural image).

More information on the Course program page

Organizers:
Giacomo Boracchi, Matteo Matteucci. Politecnico di Milano.

Dates:
From February 7th 2022 to February 25th 2022, 6 seminars of 4 hours each.

Enrollment:
Students from other universities are welcome to attend the lectures but, in general, cannot take the exam nor receive an official attendance certificate from our secretariat. Students who need an official attendance certificate have to register officially following the procedure on the PhD website. In this case, an administrative fee (€32) is requested.

Blended Teaching Modality:
Lectures are held either in person or in my Webex room. Please check the details below, as one lecture will be held entirely online.

Course Logistics: Please check these slides.


Schedule and Abstracts:

Deep Unsupervised Learning in Images
Giacomo Boracchi, Professor at Politecnico di Milano
February 7th, 14:15 - 18:30, Sala Conferenze, DEIB, Building 20
Early deep learning models primarily addressed supervised visual recognition problems such as classification, segmentation, and detection. More recently, there has been a surge of deep learning methods addressing unsupervised vision tasks, including image restoration, enhancement, and anomaly detection. As with their supervised counterparts, deep neural networks (and sometimes even famous pre-trained architectures) turned out to be more effective than traditional model-based solutions. During this lecture I will provide an overview of the anomaly detection problem, describe the most relevant deep learning solutions in the literature, and discuss the most relevant challenges to be addressed by deep learning.
Slides for Anomaly Detection,
[Additional Material] Tutorial on Anomaly Detection,
Slides for Image Restoration,
Video Recording.
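A common family of solutions covered in the lecture scores a sample by its reconstruction error under a model fitted on normal data only. A minimal numerical sketch of this idea, using PCA as a simple stand-in for a learned autoencoder (the data, threshold, and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" training data lies on a 2-D subspace embedded in 3-D.
normal = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.5, 0.0],
                                               [0.0, 0.5, 1.0]])

# Fit a linear "autoencoder" (PCA): keep the top-2 principal directions.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                      # encoder/decoder weights

def reconstruction_error(x):
    """Squared error between x and its projection onto the subspace."""
    z = (x - mean) @ components.T        # encode
    x_hat = z @ components + mean        # decode
    return np.sum((x - x_hat) ** 2, axis=-1)

# Anomaly threshold: a high quantile of the training errors.
threshold = np.quantile(reconstruction_error(normal), 0.99)

outlier = np.array([10.0, -10.0, 10.0])  # far from the normal subspace
print(reconstruction_error(outlier) > threshold)  # True
```

Deep anomaly-detection methods replace the linear projection with a learned nonlinear autoencoder, but the scoring logic is the same.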

Learning with limited supervision
Alessandro Giusti, Professor at IDSIA, Lugano
February 9th, 9:00 - 13:00, Sala Conferenze, DEIB, Building 20
The lecture covers successful approaches to self-supervised learning, a classic paradigm in robotics which consists in the automated acquisition of ground-truth labels by exploiting multiple sensors during the robot's operation. It also covers domain adaptation techniques, which tackle the issue of handling differences between the training and the deployment domains.
Self Supervised Learning Slides part 1,
Self Supervised Learning Slides part 2,
Self Supervised Learning Slides part 3,
Video Recording .

Deep Reinforcement Learning
Alessandro Lazaric, Facebook, Paris
February 10th, 14:15 - 18:30, Online Lecture
Reinforcement learning (RL) focuses on designing agents that are able to learn how to maximize reward in unknown dynamic environments. Unlike in other fields of machine learning, an RL agent needs to learn without direct supervision of the best actions to take: it relies solely on the interaction with the environment and on a (possibly sparse and sporadic) reward signal that implicitly defines the task to solve. Deep learning techniques can be effectively integrated into "standard" RL algorithms to learn representations of the state of the environment that allow for better generalization. Some of these techniques, such as DQN and TRPO, are nowadays at the core of the major successes of RL, such as achieving super-human performance in games (e.g., Atari, StarCraft, Dota, and Go) as well as in simulated and real robotic tasks.
Slides.
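DQN builds on the classic Q-learning update, replacing the tabular value function with a neural network. A minimal tabular sketch of that underlying update on a toy corridor environment (all names and the environment are illustrative):

```python
import numpy as np

# Tabular Q-learning on a 1-D corridor: states 0..4, reward on reaching 4.
# DQN replaces the Q table with a neural network; the TD update is the same.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for _ in range(2000):               # episodes
    s = 0
    for _ in range(20):             # steps per episode
        a = int(rng.integers(n_actions))   # uniform-random behaviour:
        s2, r, done = step(s, a)           # Q-learning is off-policy
        # Temporal-difference update toward the bootstrapped target.
        target = r + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

# Greedy policy per non-terminal state (1 = move right toward the reward).
policy = [int(np.argmax(Q[s])) for s in range(n_states - 1)]
print(policy)
```

The greedy policy extracted from the learned table moves right in every state, i.e. toward the rewarding terminal state.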

Deep Learning on Graphs and Structured Computation Models
Jonathan Masci, NNAISENSE SA, Switzerland
February 16th, 14:15 - 18:30, Sala Conferenze, DEIB, Building 20
The lecture gives a pragmatic introduction to graph convolutions, both in the spectral and the spatial domain, and to the message-passing framework. It covers applications and recent achievements in the field, starting with node and graph classification in the inductive and transductive settings, and progressing to show that popular methods in meta-learning, one-shot and few-shot learning, and structured latent-space models are particular cases of the Structured Computational Model. It closes with an outlook on where the field is going and on new and exciting research directions and industrial applications that are waiting to be revolutionized.

Slides, Video Recording.
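One spatial graph-convolution layer of the kind introduced in the lecture can be sketched in a few lines: each node aggregates the features of its neighbours (the message-passing step) and then applies a shared linear map. This is a GCN-style sketch with symmetric normalization; the graph, features, and weights are toy values:

```python
import numpy as np

# One GCN-style layer: H' = relu( D^-1/2 (A + I) D^-1/2  H  W ).
rng = np.random.default_rng(0)

A = np.array([[0, 1, 0, 0],      # adjacency of a 4-node path graph 0-1-2-3
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))      # node feature matrix (4 nodes, 3 features)
W = rng.normal(size=(3, 2))      # shared weight matrix (random, untrained)

A_hat = A + np.eye(4)                      # add self-loops
d = A_hat.sum(axis=1)                      # node degrees (with self-loops)
A_norm = A_hat / np.sqrt(np.outer(d, d))   # symmetric normalization

H_next = np.maximum(0.0, A_norm @ H @ W)   # aggregate + transform + ReLU
print(H_next.shape)  # (4, 2)
```

Stacking such layers lets information propagate over multi-hop neighbourhoods, which is the basis of the node- and graph-classification applications discussed in the lecture.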

Privacy Preserving Learning
Matteo Matteucci, Politecnico di Milano
February 18th, 13:15 - 17:30, Room 3.0.2
Deep learning models have shown impressive performance, especially when large amounts of data are used to train them. When private data are used for training, concerns arise about the privacy of the training process and the privacy leaks which might occur in the resulting models. The federated learning framework will be presented as a technique to secure distributed training, describing the most common algorithms and approaches as well as possible threats. Regarding information leakage in trained models, we will discuss the framework of differential privacy, the algorithms to implement it in the deep learning domain, and possible attacks on privacy such as model inversion and membership inference. Finally, synthetic data generation will be proposed as a possible solution for both privacy-preserving training and privacy-preserving models.

Intro slides,
Federated Learning slides,
Differential Privacy slides,
Webex recording.
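The core aggregation step of federated learning (FedAvg-style) can be sketched in a few lines: the server combines locally trained client models as a weighted average by local dataset size, so raw data never leaves the clients. The function name and toy values are illustrative:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Server-side FedAvg step: average client models weighted by data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: three clients hold locally trained weight vectors.
clients = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
sizes = [100, 100, 200]          # local dataset sizes

global_w = fedavg(clients, sizes)
print(global_w)  # [0.75 0.75]
```

In a real system each entry of `clients` would be the full parameter set of a neural network after local training, and the averaged model is broadcast back for the next round.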

Variational AutoEncoders with Applications to Anomaly Detection
Luigi Malagò, TINS, RM
February 25th, 13:30 - 18:30, Room 5.1.1
Variational AutoEncoders (VAEs) are generative models which consist of two networks: an encoder which maps the input to the parameters of a probability density function over the latent space, followed by a decoder which maps latent variables to a probability density function over the space of the visible variables. VAEs are usually trained using variational inference approaches, in particular by maximizing a lower bound for the log-likelihood, since training the model directly by optimizing the log-likelihood is not computationally efficient. In this short course, starting from notions of Bayesian statistics we will explain how a VAE can efficiently approximate complex distributions of samples. Next, we will review recent advances in the literature, which allow us to improve the performance over vanilla VAEs. In the second part of this short course, we are going to present some applications where VAEs are used for anomaly detection tasks. In particular we will focus on the detection of anomalies in images as well as in time series.

Slides on Variational AutoEncoders,
Slides on Detection of Tumours in Brain MRIs,
Slides on Anomaly Detection in Heartbeats,
Webex Recording.
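Two ingredients of the lower-bound training described in the abstract can be sketched numerically: the reparameterization trick, which makes sampling differentiable with respect to the encoder outputs, and the closed-form KL term of the lower bound for a diagonal-Gaussian posterior against a standard-normal prior. The parameter values are toy numbers, not the output of a trained encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([0.5, -1.0])          # encoder output: posterior mean
log_var = np.array([0.0, -0.5])     # encoder output: posterior log-variance
sigma = np.exp(0.5 * log_var)

# Reparameterization trick: z = mu + sigma * eps, with noise eps
# drawn independently of the parameters being optimized.
eps = rng.normal(size=mu.shape)
z = mu + sigma * eps                # differentiable latent sample

# Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# The lower bound (ELBO) is E_q[log p(x|z)] - KL: the first term is the
# decoder's reconstruction log-likelihood, estimated with samples z.
print(kl >= 0.0)  # True
```

In anomaly detection with VAEs, a low lower-bound value (or high reconstruction error) on a test sample is then used as the anomaly score.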