This repository contains my solutions for the TDDE70 Deep Learning course (Linköping University, Spring 2024), including an intro notebook and four labs.
- Intro: PyTorch basics (tensors, GPU, autograd, linear regression)
- Lab 0: PyTorch & NN fundamentals
- Lab 1: Autoencoders & U‑Net
- Lab 2: Denoising Diffusion Probabilistic Models (DDPM)
- Lab 3: Graph Neural Networks (CGCNN)
## Intro: PyTorch Basics
Get started with:
- Tensors & GPU support
- Autograd & computational graphs
- Building & training a linear regression model
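The linear-regression part can be sketched roughly as follows: fit `y = 2x + 1` on synthetic data using autograd and manual gradient-descent updates (the data, learning rate, and step count here are illustrative, not the notebook's exact values):

```python
import torch

# Synthetic data: y = 2x + 1 with a little Gaussian noise.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 2 * x + 1 + 0.05 * torch.randn_like(x)

# Parameters tracked by autograd.
w = torch.zeros(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.1
for _ in range(200):
    pred = w * x + b                    # forward pass
    loss = ((pred - y) ** 2).mean()     # MSE loss
    loss.backward()                     # populate w.grad, b.grad
    with torch.no_grad():               # update without building a graph
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()
        b.grad.zero_()
```

After training, `w` and `b` should land close to the true slope 2 and intercept 1.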
## Lab 0: PyTorch & NN Fundamentals
- Custom Modules: Define `nn.Module` subclasses and fully-connected layers
- Data Loading: Convert MNIST to tensors, use `DataLoader`
- Simple CNN: Conv layers with batch-norm & dropout
- Training & Eval: Optimizers, training loops, accuracy metrics
- Robustness: Test on rotated MNIST digits
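A minimal sketch of the kind of CNN this lab builds — two conv blocks with batch norm, then dropout and a linear head (channel counts and layer sizes are illustrative assumptions, not the lab's exact architecture):

```python
import torch
from torch import nn

class SimpleCNN(nn.Module):
    """Small MNIST classifier sketch: conv + batch-norm blocks, dropout head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.25),
            nn.Linear(32 * 7 * 7, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Shape check on a fake batch of 8 MNIST-sized images.
model = SimpleCNN()
logits = model(torch.randn(8, 1, 28, 28))
```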
## Lab 1: Autoencoders & U-Net
- Data Prep: Custom `Dataset` classes for denoising & segmentation (GTAV)
- Model Design: `DoubleConv`, `Down`, `Up`, `UpSkip` blocks; Autoencoder & U-Net
- Training: Denoising (MSE) & segmentation (weighted CE) with `Trainer` classes
- Enhancements: Skip connections & EMA weight averaging
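The U-Net building blocks might look roughly like this — a sketch assuming conventional `DoubleConv`/`Down`/`UpSkip` semantics; the channel counts, nearest-neighbour upsampling, and batch-norm placement are illustrative choices, not necessarily the lab's:

```python
import torch
from torch import nn

class DoubleConv(nn.Module):
    """Two 3x3 convs, each followed by batch norm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(),
        )
    def forward(self, x):
        return self.block(x)

class Down(nn.Module):
    """Halve spatial resolution, then DoubleConv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(nn.MaxPool2d(2), DoubleConv(in_ch, out_ch))
    def forward(self, x):
        return self.block(x)

class UpSkip(nn.Module):
    """Double spatial resolution, concatenate the skip tensor, then DoubleConv."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = DoubleConv(in_ch, out_ch)
    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))

# Shape check: one down/up level with a skip connection on a 64x64 feature map.
x = torch.randn(2, 16, 64, 64)
d = Down(16, 32)(x)             # -> (2, 32, 32, 32)
u = UpSkip(32 + 16, 16)(d, x)   # -> (2, 16, 64, 64)
```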
## Lab 2: Denoising Diffusion Probabilistic Models (DDPM)
- Theory: DDPM forward/backward processes, noise schedule
- Implementation: MLP denoiser with positional embeddings & noise utilities
- Training & Sampling: T=50 steps, L₂ loss, visualize samples
- Architecture: U‑Net with timestep & label embeddings, self‑attention
- cDDPM: T=1000 diffusion steps on 32×32 MNIST
- Sampling: Generate digits 0–9 conditioned on labels
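The forward process has the closed form `x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps`, which can be sketched like this; the linear beta schedule and its endpoints are a common DDPM default and an assumption here, not necessarily the lab's schedule:

```python
import torch

torch.manual_seed(0)
T = 50                                   # diffusion steps, as in the lab's first part
betas = torch.linspace(1e-4, 0.02, T)    # linear noise schedule (assumed default)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def q_sample(x0, t, noise):
    """Sample x_t from q(x_t | x_0) in closed form."""
    ab = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast per-sample
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise

# Noise a fake batch at increasing timesteps.
x0 = torch.randn(4, 1, 28, 28)
noise = torch.randn_like(x0)
xt = q_sample(x0, torch.tensor([0, 10, 25, 49]), noise)
```

At `t = 0` the sample is nearly clean, and `alpha_bars` decreases monotonically, so later timesteps are progressively noisier.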
## Lab 3: Graph Neural Networks (CGCNN)
- PyG Basics: `Data` & `DataLoader` for graphs
- MPNN Equations: Derive CGCNN's message & update functions
- CGCNNLayer: Gated message passing with BatchNorm
- Full Model: Stack layers, global mean pooling, MLP head
- Training: Compare invariant (distance) vs non‑invariant (vector) edge features
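The gated message-passing step can be sketched in plain PyTorch without a `torch_geometric` dependency: for each edge `(i, j)` form `z = [h_i, h_j, e_ij]` and aggregate `sigmoid(W_f z) * softplus(W_s z)` into the destination node. The dimensions, `index_add_` aggregation, and residual-plus-batch-norm arrangement below are illustrative assumptions, not the lab's exact layer:

```python
import torch
from torch import nn

class CGCNNLayer(nn.Module):
    """CGCNN-style gated message passing (plain-PyTorch sketch, no PyG)."""
    def __init__(self, node_dim, edge_dim):
        super().__init__()
        z_dim = 2 * node_dim + edge_dim
        self.w_f = nn.Linear(z_dim, node_dim)   # gate ("filter") weights
        self.w_s = nn.Linear(z_dim, node_dim)   # message ("core") weights
        self.bn = nn.BatchNorm1d(node_dim)

    def forward(self, h, edge_index, edge_attr):
        src, dst = edge_index                   # messages flow src -> dst
        z = torch.cat([h[dst], h[src], edge_attr], dim=-1)
        msg = torch.sigmoid(self.w_f(z)) * nn.functional.softplus(self.w_s(z))
        agg = torch.zeros_like(h).index_add_(0, dst, msg)  # sum messages per node
        return h + self.bn(agg)                 # residual update

# Tiny graph: 3 nodes, 4 directed edges, a scalar distance as edge feature.
h = torch.randn(3, 8)
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
edge_attr = torch.rand(4, 1)
out = CGCNNLayer(8, 1)(h, edge_index, edge_attr)
```

Swapping the scalar distance in `edge_attr` for a raw displacement vector is what breaks rotational invariance in the lab's comparison.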