The goal of the course is to familiarize students with deep learning for vision, text, reinforcement learning, and multimodal combinations thereof. For this course, deep learning means the training and application of neural networks as prediction models for various combinations of input and output modalities. The course will include coding a subset of approaches for vision (chosen so as to avoid overlap with the computer vision course); approaches for sequential data such as 1D CNNs, temporal causal networks, and recurrent neural networks; multimodal approaches; attention models; explainable AI; and generative adversarial networks. The coding will cover the whole chain from data loading through training and parameter tuning to performance evaluation. The course will also focus on the practical aspects required to make training work on smaller datasets, such as transfer learning, the various forms of data augmentation, different optimizers, and learning rate tuning.
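The chain described above, from data loading through training to evaluation, can be illustrated in miniature. The following is a hypothetical, toolbox-free sketch (all function names and data are invented for illustration) that trains a single linear neuron by gradient descent on synthetic data:

```python
import math
import random

# --- data "loader": builds (features, label) pairs from a tiny synthetic dataset ---
random.seed(0)

def make_dataset(n):
    data = []
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        y = 1.0 if x[0] + 2 * x[1] > 0 else 0.0  # synthetic ground-truth rule
        data.append((x, y))
    return data

def batches(data, batch_size=20):
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# --- model: a single linear neuron with sigmoid output ---
w, b = [0.0, 0.0], 0.0

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# --- training loop: gradient descent on the cross-entropy loss ---
train, test = make_dataset(160), make_dataset(40)
lr = 0.5  # learning rate, a tunable hyperparameter
for epoch in range(50):
    for batch in batches(train):
        gw, gb = [0.0, 0.0], 0.0
        for x, y in batch:
            err = predict(x) - y  # d(loss)/d(z) for sigmoid + cross-entropy
            gw = [g + err * xi for g, xi in zip(gw, x)]
            gb += err
        w = [wi - lr * g / len(batch) for wi, g in zip(w, gw)]
        b -= lr * gb / len(batch)

# --- evaluation: accuracy on held-out data ---
accuracy = sum((predict(x) > 0.5) == (y > 0.5) for x, y in test) / len(test)
print(f"test accuracy: {accuracy:.2f}")
```

In the course itself this chain would be built with a deep learning toolbox rather than by hand; the sketch only shows the structure shared by all such pipelines.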
Pre-requisite or Co-requisite
By the end of the course, students will be able to:
- Explain the concept of (discriminative) learning from data, generalization and overfitting.
- Explain how the decision boundary of a linear neuron depends on its parameters in a general vector space.
- Explain what backpropagation is used for.
- List suitable predictor outputs and suitable loss functions for different prediction problems.
- Explain the basic ideas behind convolutional and recurrent neural networks.
- Describe methods of data augmentation.
- Use deep learning toolboxes for data loading, training, and performance evaluation of deep neural networks.
- Construct data loaders for custom datasets of various types with various types of ground truth annotations, such as images, sequential data (e.g. text), and multimodal data.
- Set up neural networks for vision, apply training algorithms to them, and evaluate the performance of transfer learning tasks with two state-of-the-art deep learning toolboxes.
- Set up neural networks for sequence classification, apply training algorithms to them, and evaluate the performance of transfer learning tasks with one state-of-the-art deep learning toolbox.
- Be able to explain the functioning principle of a generative adversarial neural network.
- Assess the suitability of deep learning models and loss functions based on the intended prediction outputs for several types of prediction tasks (ad: LO suitable loss functions).
- Compute the terms used in backpropagation for a given neural network topology (ad: LO backprop).
- Sketch the set of points for which a linear unit has constant output, and sketch the directions in which the function values of a linear unit change fastest (ad: LO decision boundary of a linear unit).
- For a given model and layer, give arguments for when to use fully connected and when to use convolutional layers.
- For a given problem, give arguments for when to use feedforward and when to use recurrent neural networks (ad: LO convolutional NNs, recurrent NNs).
- Create code for data loaders for custom datasets with ground truth annotations for image and text data.
- Create code for neural networks for vision that loads data, applies training algorithms, and evaluates the prediction performance with two state-of-the-art deep learning toolboxes.
- Create code for neural networks for sequence classification that loads data, applies training algorithms, and evaluates the prediction performance of transfer learning tasks with one state-of-the-art deep learning toolbox.
- Compute train and test loss curves from a training process, and examine the amount of overfitting when comparing different pairs of train/test loss curves (ad: LO generalization and overfitting).
- Employ data augmentation methods and transfer learning in code, and evaluate their impact on performance (ad: LO data augmentation).
- Produce example code for a generative deep neural network, and give possible use cases for GANs (ad: LO generative adversarial neural network).
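As an illustration of the backpropagation outcome above, the error terms for a minimal one-hidden-unit network can be computed by hand with the chain rule. The following toy sketch (hypothetical names, not course material) does exactly that and can be checked against a finite-difference approximation:

```python
import math

def forward_backward(x, y, w1, w2):
    """Tiny network: x -> tanh(w1*x) -> w2*h -> squared-error loss.
    Returns the loss and the gradients d(loss)/d(w1), d(loss)/d(w2)."""
    # forward pass, keeping intermediate values needed by backprop
    z = w1 * x
    h = math.tanh(z)
    y_hat = w2 * h
    loss = 0.5 * (y_hat - y) ** 2

    # backward pass: apply the chain rule layer by layer
    d_yhat = y_hat - y        # d(loss)/d(y_hat)
    d_w2 = d_yhat * h         # d(loss)/d(w2)
    d_h = d_yhat * w2         # error propagated back to the hidden unit
    d_z = d_h * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    d_w1 = d_z * x            # d(loss)/d(w1)
    return loss, d_w1, d_w2

loss, d_w1, d_w2 = forward_backward(x=0.5, y=1.0, w1=0.8, w2=1.5)
```

The same pattern (store forward intermediates, multiply local derivatives backwards) scales to arbitrary network topologies, which is what the toolboxes automate.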
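The data augmentation outcome can likewise be previewed with two of the most common image augmentations, horizontal flipping and random cropping. This is a hypothetical stdlib-only sketch on a nested-list "image"; in practice the toolboxes provide these transforms:

```python
import random

def horizontal_flip(image):
    """Mirror an image (a list of rows) left-to-right."""
    return [row[::-1] for row in image]

def random_crop(image, size, rng):
    """Cut a random size x size patch out of the image."""
    h, w = len(image), len(image[0])
    top = rng.randrange(h - size + 1)
    left = rng.randrange(w - size + 1)
    return [row[left:left + size] for row in image[top:top + size]]

# a hypothetical 4x4 grayscale "image"
img = [[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]]

rng = random.Random(0)
flipped = horizontal_flip(img)          # first row becomes [3, 2, 1, 0]
patch = random_crop(img, size=3, rng=rng)
```

Because both transforms leave the class label unchanged, each original training image yields several distinct training examples, which is what makes augmentation valuable on smaller datasets.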
- Introduction, Review of ML
- Syntactic Tagging, Word Senses and Embeddings
- Language Modeling
- Chunking (Shallow Syntactic Parsing)
- Information Extraction
- Syntactic Parsing
- Semantic Role Labeling (Shallow Semantic Parsing)
- Semantic Parsing
- Sentiment Analysis
- Text Generation
- Machine Translation