| Day | Topic | Goal |
|---|---|---|
| 1 | What is Deep Learning? | Introduction, history, real-world examples, first-neuron intuition, forward propagation of a single neuron |
| 2 | Neurons, Weights, Bias & Activations | Deep dive into neuron structure, weights, bias, activation functions, visualizations |
| 3 | Forward Propagation | Manual calculations, NumPy implementation, ReLU, Sigmoid, Tanh |
| 4 | Tiny Neural Network | Build a 2-layer network from scratch in NumPy (see the sketch after this table) |
| 5 | Loss Functions | MSE, Cross-Entropy, small examples |
| 6 | Optimizers | Gradient Descent intuition, learning rate, update rules, simple code |
| 7 | Mini Exercise | Manual neuron calculations, small network experiments |
| 8 | Phase Summary | Notebook wrap-up, code + visuals |
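
The sketch below ties Days 3–6 together: a two-layer forward pass in NumPy with ReLU, Sigmoid, and an MSE loss. The layer sizes and random inputs are arbitrary illustration choices, not part of the plan.

```python
import numpy as np

# Toy sizes (3 inputs -> 4 hidden -> 1 output), chosen only for illustration.
rng = np.random.default_rng(0)
x = rng.normal(size=3)                      # one input sample
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

h = relu(W1 @ x + b1)                       # Days 2-3: weights, bias, activation
y_hat = sigmoid(W2 @ h + b2)                # Day 4: second layer, output in (0, 1)

y = np.array([1.0])                         # made-up target
mse = np.mean((y_hat - y) ** 2)             # Day 5: mean squared error
print(f"prediction={y_hat[0]:.3f}  mse={mse:.3f}")
```

Everything from Day 6 onward is about updating `W1, b1, W2, b2` from the loss instead of leaving them random.
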
| Day | Topic | Goal |
|---|---|---|
| 9 | Gradient & Derivative Intuition | Calculus review, small examples |
| 10 | Backpropagation Basics | Manual chain rule calculations |
| 11 | Backpropagation in NumPy | Tiny network training step-by-step |
| 12 | Full Forward + Backprop Example | NumPy mini training loop |
| 13 | PyTorch Introduction | Tensors, basic operations, GPU usage |
| 14 | PyTorch Neural Network | Build first simple model |
| 15 | Training Loop in PyTorch | Forward + loss + backward + optimizer step (see the sketch after this table) |
| 16 | Overfitting vs Underfitting | Visualize loss curves, concept explanation |
| 17 | Validation & Test Split | Data handling in PyTorch, metrics |
| 18 | Hyperparameters | Learning rate, epochs, batch size, grid search intuition |
| 19 | Phase Summary | Notebook wrap-up, code + visuals |
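
Day 15's training loop as a minimal sketch. The model, the random regression data, and the hyperparameters (`lr=0.1`, 100 epochs) are placeholder choices; the four-step pattern is what carries over.

```python
import torch
from torch import nn

# Placeholder data: 64 random samples, 3 features -> 1 regression target.
X = torch.randn(64, 3)
y = torch.randn(64, 1)

model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # Day 18: lr is a hyperparameter

for epoch in range(100):
    optimizer.zero_grad()            # clear stale gradients
    loss = loss_fn(model(X), y)      # forward pass + loss
    loss.backward()                  # backpropagation (Days 10-12, automated)
    optimizer.step()                 # gradient-descent update (Day 6)
```

Every later phase reuses this skeleton; only the model, data, and loss function change.
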
| Day | Topic | Goal |
|---|---|---|
| 20 | CNN Introduction | Filters, stride, padding, convolution example |
| 21 | CNN Layers | Pooling, flatten, fully connected layers |
| 22 | CNN in PyTorch | Build a simple CNN for MNIST (see the sketch after this table) |
| 23 | CNN Training | Forward + backward + optimizer, visualize filters |
| 24 | RNN Introduction | Sequence data, hidden state, unrolling |
| 25 | LSTM & GRU | Why LSTMs outperform vanilla RNNs, gate-by-gate explanation |
| 26 | Simple RNN in PyTorch | Manual sequence prediction |
| 27 | LSTM Example | Text sequence prediction |
| 28 | NLP Preprocessing | Tokenization, embedding, padding |
| 29 | RNN Mini Project | Predict sentiment on a small dataset |
| 30 | CNN + RNN Comparison | When to use which, pros/cons |
| 31 | Regularization in CNN/RNN | Dropout, batch norm, visualization |
| 32 | Hyperparameters for CNN/RNN | Learning rate, optimizer tuning |
| 33 | Phase Summary | Notebook wrap-up with multiple small examples |
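
A minimal Day 22-style CNN, shaped for MNIST's 1×28×28 inputs. The channel counts (8 and 16) are arbitrary; the shape bookkeeping in the comments is the point.

```python
import torch
from torch import nn

class SimpleCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # Day 20: filters, stride, padding
            nn.ReLU(),
            nn.MaxPool2d(2),                             # Day 21: pooling, 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),                                # Day 21: flatten before the FC layer
            nn.Linear(16 * 7 * 7, 10),                   # 10 digit classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(4, 1, 28, 28)       # a fake batch of MNIST-sized images
print(SimpleCNN()(x).shape)         # torch.Size([4, 10])
```

Checking the output shape on a fake batch is a cheap sanity test before plugging the model into the Phase 2 training loop.
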
| Day | Focus | Goal |
|---|---|---|
| 34 | CNN Regularization | Dropout placement, BatchNorm behavior (train vs eval), overfitting control |
| 35 | CNN Weight Initialization | Xavier vs He, dead ReLU problem, empirical comparisons |
| 36 | CNN Optimizers | SGD vs Adam vs AdamW vs RMSprop, convergence tradeoffs |
| 37 | CNN Learning Rate Scheduling | StepLR, ReduceLROnPlateau, CosineAnnealingLR, LR intuition |
| 38 | CNN Data Augmentation | Geometric & color transforms, over/under-augmentation risks |
| 39 | CNN Multi-Class Classification | Softmax, CrossEntropyLoss, class imbalance, top-k accuracy |
| 40 | CNN Multi-Label Classification | BCEWithLogitsLoss, threshold tuning, precision/recall tradeoffs |
| 41 | CNN Evaluation & Debugging | Confusion matrix, ROC/PR curves, error analysis |
| 42 | CNN Early Stopping & Checkpointing | Validation-driven stopping, best-model saving (see the sketch after this table) |
| 43 | CNN Hyperparameter Tuning | Batch size–LR coupling, weight decay, controlled experiments |
| 44 | CNN Advanced Mini Project | Apply all CNN techniques end-to-end (no shortcuts) |
| 45 | CNN Mastery Validation | Rebuild CNN from scratch, justify every design decision |
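
How Days 37 and 42 fit together, as a sketch. The tiny linear stand-in model, the random batches, and the `best.pt` filename are placeholders to keep it self-contained; in the real project the validation loss comes from a held-out loader (Day 17).

```python
import torch
from torch import nn

# Stand-in model and random batches; a real run would swap in the Day 22 CNN
# and proper train/validation DataLoaders.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=2)

X_tr, y_tr = torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,))
X_va, y_va = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))

best_val, bad_epochs, patience = float("inf"), 0, 5
for epoch in range(50):
    optimizer.zero_grad()
    loss_fn(model(X_tr), y_tr).backward()
    optimizer.step()

    model.eval()                                    # matters once Dropout/BatchNorm appear
    with torch.no_grad():
        val_loss = loss_fn(model(X_va), y_va).item()
    model.train()

    scheduler.step(val_loss)                        # Day 37: ReduceLROnPlateau watches val loss
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best.pt")   # Day 42: keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                  # Day 42: stop when validation stalls
            break
```

The `model.eval()`/`model.train()` toggling is a no-op for this linear stand-in, but it becomes essential once Dropout or BatchNorm (Day 34) enter the model, so the habit is worth building here.
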
| Day | Focus | Goal |
|---|---|---|
| 46 | RNN Training Pathologies | Vanishing/exploding gradients, why vanilla RNNs fail |
| 47 | Gradient Clipping & Masking | Clip-by-norm, padding, masking variable-length sequences (see the sketch after this table) |
| 48 | LSTM Deep Dive | Gate mechanics, memory flow, stability intuition |
| 49 | GRU vs LSTM | Speed vs capacity, convergence behavior, use cases |
| 50 | RNN Regularization | Dropout in RNN/LSTM, recurrent dropout realities |
| 51 | RNN Optimizers & Scheduling | Adam instability, LR sensitivity, practical tuning |
| 52 | Sequence Padding & Batching | Packed sequences, performance implications |
| 53 | RNN Evaluation | Token vs sequence accuracy, exposure bias |
| 54 | RNN Hyperparameter Tuning | Hidden size, layers, sequence length tradeoffs |
| 55 | RNN Mini Project | Text or sequence task with clean training & evaluation |
| 56 | RNN Refinement | Debug instability, improve generalization |
| 57 | RNN Mastery Validation | Build RNN/LSTM from scratch and defend every choice |
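
A sketch covering Days 47 and 52 together: pad a ragged batch, pack it so the LSTM skips the padding, then clip gradients by norm before the update. The token IDs, vocabulary size, and labels below are made up.

```python
import torch
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_sequence

# Made-up token IDs, vocab size, and labels; a real task feeds tokenizer output.
seqs = [torch.randint(1, 100, (n,)) for n in (7, 5, 3)]    # three ragged sequences
lengths = torch.tensor([len(s) for s in seqs])
padded = pad_sequence(seqs, batch_first=True)              # Day 52: pad to a rectangle with 0s

embed = nn.Embedding(100, 16, padding_idx=0)               # padding id 0 stays a zero vector
lstm = nn.LSTM(16, 32, batch_first=True)
head = nn.Linear(32, 2)

packed = pack_padded_sequence(embed(padded), lengths, batch_first=True, enforce_sorted=True)
_, (h_n, _) = lstm(packed)                                 # padded steps never enter the LSTM
logits = head(h_n[-1])                                     # final hidden state of the top layer

loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 0]))
loss.backward()

params = list(embed.parameters()) + list(lstm.parameters()) + list(head.parameters())
torch.nn.utils.clip_grad_norm_(params, max_norm=1.0)       # Day 47: clip by norm
# ...then optimizer.step() as usual.
```

Note that `clip_grad_norm_` must run between `loss.backward()` and the optimizer step; after the step, any oversized gradients have already been applied.
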