
Module description - Deep Learning

Number del
ECTS 4.0
Level Advanced
Content Deep Learning has gained enormous attention in recent years. Significant improvements have been achieved in application areas that were traditionally considered easy for humans but difficult for machines - two prominent examples are computer vision (e.g. for object recognition) and natural language processing (e.g. for language translation or understanding).

The main goal of this module is to gain a good understanding of the basics of Deep Learning and to learn about some typical applications. This includes knowing some well-established Deep Learning architectures, how to implement and train them, and to which problems they can usefully be applied.
Learning outcomes
Multilayer Perceptron
Students know the basic building blocks of a multi-layer perceptron (MLP) and understand what MLPs are useful for. They also understand how networks with a single hidden layer can approximate arbitrary continuous functions (universal approximation) and have an intuition for how multi-layer networks can help extract complex patterns in the data, as well as counter the curse of dimensionality by efficiently representing hierarchical concepts.
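
As an illustration beyond the official module text, a minimal sketch of such a single-hidden-layer MLP, assuming PyTorch (one of the frameworks named under Tools); all layer sizes are arbitrary placeholders:

    import torch.nn as nn

    # Minimal single-hidden-layer MLP. With enough hidden units, such a
    # network can approximate arbitrary continuous functions (universal
    # approximation theorem). Sizes are placeholders for illustration.
    mlp = nn.Sequential(
        nn.Linear(784, 256),  # input -> hidden layer
        nn.ReLU(),            # nonlinearity (source of approximation power)
        nn.Linear(256, 10),   # hidden -> output layer (e.g. 10 classes)
    )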

Training of neural networks
The students have detailed knowledge of the backpropagation algorithm (the workhorse for training neural networks), know about its weaknesses in practical application (e.g. vanishing or exploding gradients), and know measures to mitigate them (e.g. different activation functions, batch normalization, parameter initialization).
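
A hedged sketch of such mitigation measures, again assuming PyTorch; the layer width is a placeholder:

    import torch.nn as nn

    # Common mitigations against vanishing / exploding gradients:
    # explicit (He/Kaiming) parameter initialization, batch
    # normalization, and a non-saturating ReLU instead of sigmoid/tanh.
    layer = nn.Linear(256, 256)
    nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")  # He init
    nn.init.zeros_(layer.bias)

    block = nn.Sequential(
        layer,
        nn.BatchNorm1d(256),  # keeps activation statistics stable
        nn.ReLU(),            # gradient is 1 for positive inputs
    )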

They know how various regularization techniques work (e.g. dropout, data augmentation), how to apply them, and what they are used for. Furthermore, they know the most important variants of plain gradient descent that improve convergence behavior (e.g. Adam).
Finally, students know how to use and interpret learning curves and performance metrics to assess training success.
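
Putting these pieces together, a self-contained toy-data PyTorch training loop showing dropout as regularization, Adam as the optimizer, and per-epoch loss logging for a learning curve; nothing here is prescribed by the module itself:

    import torch
    import torch.nn as nn

    # Toy data so the sketch runs standalone; in practice a DataLoader
    # over a real dataset would be used here.
    X = torch.randn(512, 20)
    y = torch.randint(0, 2, (512,))

    model = nn.Sequential(
        nn.Linear(20, 64),
        nn.ReLU(),
        nn.Dropout(p=0.5),   # regularization: randomly zeroes activations
        nn.Linear(64, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # adaptive SGD variant
    loss_fn = nn.CrossEntropyLoss()

    losses = []  # per-epoch losses, plotted later as a learning curve
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)  # forward pass
        loss.backward()              # backpropagation computes all gradients
        optimizer.step()             # Adam parameter update
        losses.append(loss.item())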

CNN
Students know how the elementary building blocks of Convolutional Neural Networks (CNNs) work and how they can be usefully combined. They know the most important ImageNet architectures and how these can be used in practice via transfer learning. They know how the information encoded in trained networks can be made visible, e.g. with the help of activation maximization or saliency maps. Furthermore, they know the concept of autoencoders and what they can be used for (e.g. pre-training or layer-wise training). They will be able to implement some example applications on their own.
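
As one concrete, merely illustrative pattern for transfer learning with a pretrained ImageNet architecture, assuming PyTorch with torchvision >= 0.13; the 10-class head is a placeholder:

    import torch.nn as nn
    from torchvision import models

    # Load a ResNet-18 pretrained on ImageNet and freeze it, so it acts
    # as a fixed feature extractor for a new task (transfer learning).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classification layer; only this new head is
    # trained. The 10 output classes are an arbitrary placeholder.
    model.fc = nn.Linear(model.fc.in_features, 10)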

RNN
The students know the most important characteristics of Recurrent Neural Networks (RNNs) and what they can be used for (as classifiers, for sequence-to-sequence tasks, as encoders/decoders). They understand the weaknesses of 'simple' RNNs and how these can be partially overcome with the help of the gated long-term memory of GRUs and LSTMs. They will be able to implement some example applications on their own.
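
A minimal sketch, assuming PyTorch, of an RNN used as a classifier: an LSTM reads the sequence and its last hidden state is classified; all sizes are placeholders:

    import torch
    import torch.nn as nn

    # LSTM-based sequence classifier: the final hidden state summarizes
    # the whole input sequence. All sizes are illustrative placeholders.
    class SequenceClassifier(nn.Module):
        def __init__(self, input_size=32, hidden_size=64, num_classes=5):
            super().__init__()
            self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, num_classes)

        def forward(self, x):              # x: (batch, seq_len, input_size)
            _, (h_n, _) = self.lstm(x)     # h_n: (num_layers, batch, hidden)
            return self.head(h_n[-1])      # classify from last hidden state

    logits = SequenceClassifier()(torch.randn(8, 15, 32))  # shape (8, 5)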

Tools
The students learn to use one of the most important deep learning frameworks (currently TensorFlow or PyTorch). With this framework, they can implement typical application examples (including data loading, model building, and training). Finally, they also know how to execute code implemented in these frameworks efficiently, in particular on the GPU.

Furthermore, the students know the common ways to evaluate models, to monitor their training, and to analyze how they work.
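
For instance, moving computation to the GPU and switching off gradient tracking for evaluation looks as follows in PyTorch (a sketch with placeholder shapes, not module-mandated code):

    import torch

    # Select the GPU if available; computation runs on whichever device
    # the model and tensors live on.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(10, 2).to(device)
    x = torch.randn(4, 10, device=device)

    with torch.no_grad():                    # no gradients needed at evaluation
        predictions = model(x).argmax(dim=1)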


Evaluation Mark
Built on the following competences

  • Foundation in Machine Learning
  • Foundation in Linear Algebra
  • Foundation in Calculus
  • Advanced Calculus


Module type Portfolio Module