- 3. Feature Engineering
- 4. Regression
- 5. Gradient Descent
- 6. Regularization
- 7. Logistic Regression
- 8. Decision Tree
- 9. Voting Ensemble Learning
- 10. Bagging Ensemble Learning
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| What is Feature Engineering | - | - | 🎥 |
| Column Transformer | How to transform columns | 👨‍💻 | 🎥 |
| Sklearn without Pipeline | Why avoiding pipelines can cause problems | 👨‍💻 | 🎥 |
| Sklearn with Pipeline | How to implement sklearn pipelines effectively | 👨‍💻 | 🎥 |
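As a quick reference for the pipeline rows above, here is a minimal sketch of composing a `ColumnTransformer` inside a `Pipeline` (the column names and toy data are made up for illustration):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy frame with one numeric and one categorical column (illustrative only)
df = pd.DataFrame({
    "age": [25, 32, None, 47],
    "city": ["NY", "SF", "NY", "LA"],
})

# Route each column type through its own preprocessing steps
preprocess = ColumnTransformer([
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="mean")),
        ("scale", StandardScaler()),
    ]), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

X = preprocess.fit_transform(df)  # 4 rows: 1 scaled numeric + 3 one-hot columns
```

Doing the impute/scale/encode steps inside one object is what keeps train and test data on the same footing — the pitfall the "without Pipeline" notebook demonstrates.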
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Ordinal Encoding | Ordinal categorical data preprocessing using OrdinalEncoder() | 👨‍💻 | 🎥 |
| One Hot Encoding | Nominal categorical data preprocessing using OneHotEncoder() | 👨‍💻 | 🎥 |
| Function Transformer | Log, reciprocal transformation using FunctionTransformer() | 👨‍💻 | 🎥 |
| Power Transformer | Square, square root transformation using PowerTransformer() | 👨‍💻 | 🎥 |
| Binarization | Preprocessing with Binarizer() | 👨‍💻 | 🎥 |
| Binning | Preprocessing with KBinsDiscretizer() | 👨‍💻 | 🎥 |
| Handling Mixed Variables | Processing datasets with both numerical & categorical features | 👨‍💻 | 🎥 |
| Handling Date & Time | How to work with time and date columns | 👨‍💻 | 🎥 |
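The ordinal-vs-nominal distinction in the first two rows is the key decision; a minimal sketch (categories are illustrative):

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

# Ordinal data has a natural order, so map it to 0, 1, 2, ...
sizes = np.array([["S"], ["M"], ["L"], ["M"]])
ordinal = OrdinalEncoder(categories=[["S", "M", "L"]])
encoded = ordinal.fit_transform(sizes).ravel()  # [0. 1. 2. 1.]

# Nominal data has no order, so expand into one binary column per category
colors = np.array([["red"], ["green"], ["red"]])
dummies = OneHotEncoder().fit_transform(colors).toarray()  # one column per colour
```

Passing an explicit `categories` list to `OrdinalEncoder` is what encodes the S < M < L ordering; without it, categories would be sorted alphabetically.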
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Standardization | Preprocessing using StandardScaler() | 👨‍💻 | 🎥 |
| Normalization | Preprocessing using MinMaxScaler() | 👨‍💻 | 🎥 |
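The two scalers side by side, on a single illustrative column:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])

# Standardization: subtract the mean, divide by the standard deviation
z = StandardScaler().fit_transform(X)   # result has mean 0, variance 1

# Normalization: rescale values into the [0, 1] range
m = MinMaxScaler().fit_transform(X)     # result has min 0, max 1
```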
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Complete Case Analysis | Remove NaN values | 👨‍💻 | 🎥 |
| Arbitrary Value Imputation (Numerical) | Impute with arbitrary value using SimpleImputer() | 👨‍💻 | 🎥 |
| Mean/Median Imputation (Numerical) | Impute with mean/median using SimpleImputer() | 👨‍💻 | 🎥 |
| Missing Category Imputation (Categorical) | Fill missing with a label using SimpleImputer() | 👨‍💻 | 🎥 |
| Frequent Value Imputation (Categorical) | Replace missing with most frequent value | 👨‍💻 | 🎥 |
| Missing Indicator | Add binary flag for missing values (MissingIndicator()) | 👨‍💻 | 🎥 |
| Auto Imputer Parameter Tuning | Use GridSearchCV() to optimize imputer settings | 👨‍💻 | 🎥 |
| Random Sample Imputation | Fill missing values with random samples | 👨‍💻 | 🎥 |
| KNN Imputer | Use K-Nearest Neighbors to fill missing values | 👨‍💻 | 🎥 |
| Iterative Imputer | MICE-style multivariate imputation | 👨‍💻 | 🎥 |
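A small sketch contrasting univariate (`SimpleImputer`) and multivariate (`KNNImputer`) imputation from the table above (toy matrix is illustrative):

```python
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 4.0],
              [3.0, np.nan]])

# Mean imputation: each NaN becomes its column's mean (col 0 -> 2.0, col 1 -> 3.0)
mean_filled = SimpleImputer(strategy="mean").fit_transform(X)

# KNN imputation: each NaN becomes a value borrowed from the most similar row(s)
knn_filled = KNNImputer(n_neighbors=1).fit_transform(X)
```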
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| What are Outliers | Introduction to outliers and their impact | 👨‍💻 | 🎥 |
| Outlier Removal using Z-Score | Removing outliers using Z-Score | 👨‍💻 | 🎥 |
| Outlier Removal using IQR | Removing outliers using Interquartile Range (IQR) | 👨‍💻 | 🎥 |
| Outlier Removal using Percentiles | Removing outliers using Percentiles | 👨‍💻 | 🎥 |
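The IQR method from the table, in a few lines of pandas (the sample series is illustrative):

```python
import pandas as pd

s = pd.Series([10, 12, 11, 13, 12, 95])  # 95 is an obvious outlier

# IQR fences: anything outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is treated as an outlier
q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

trimmed = s[(s >= lower) & (s <= upper)]  # keeps everything except 95
```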
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Feature Construction and Splitting | Extract useful data and split features | 👨‍💻 | 🎥 |
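Feature splitting typically means carving one raw column into several useful ones; a hypothetical example (the `name` column and its format are made up):

```python
import pandas as pd

df = pd.DataFrame({"name": ["Smith, Mr. John", "Doe, Mrs. Jane"]})

# Split one text column into two new features
df["last_name"] = df["name"].str.split(",").str[0]
df["title"] = df["name"].str.split(", ").str[1].str.split(".").str[0]
# title column: ["Mr", "Mrs"]
```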
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Curse of Dimensionality | Introduction to the "curse" of high dimensions | 👨‍💻 | 🎥 |
| PCA Geometric Intuition | Geometric understanding of PCA (Principal Component Analysis) | 👨‍💻 | 🎥 |
| PCA Problem Formulation & Solution | Formulating and solving PCA problems | 👨‍💻 | 🎥 |
| PCA Step by Step Implementation | Implementing PCA step by step | 👨‍💻 | 🎥 |
| PCA + KNN (MNIST Dataset) | Apply PCA and KNN on the MNIST dataset | 👨‍💻 | 🎥 |
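The PCA + KNN idea scales down to sklearn's built-in digits dataset (used here instead of full MNIST so the sketch runs offline):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)  # 64 pixel features per 8x8 image
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Compress 64 dimensions down to 16 principal components, then classify
model = Pipeline([
    ("pca", PCA(n_components=16)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)  # accuracy stays high despite 4x compression
```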
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Simple LR from Scratch | Code implementation from scratch | 👨‍💻 | 🎥 |
| Sklearn LR | Using LinearRegression() from sklearn | 👨‍💻 | 🎥 |
| Regression Metrics | Understanding R² score, MSE, RMSE | 👨‍💻 | 🎥 |
| Geometric Intuition | Understanding the geometric intuition of MLR | 👨‍💻 | 🎥 |
| Multiple LR from Scratch | Code implementation from scratch | 👨‍💻 | 🎥 |
| Mathematical Formulation of Sklearn LR | The math behind LinearRegression() | 👨‍💻 | 🎥 |
| Polynomial LR | Preprocessing and using PolynomialFeatures() | 👨‍💻 | 🎥 |
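A compact sketch of plain vs polynomial linear regression (the noise-free toy data is illustrative, so the fit is exact):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# y = 2x + 1 with no noise, so plain LR recovers slope and intercept exactly
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2 * X.ravel() + 1

lr = LinearRegression().fit(X, y)  # lr.coef_ -> [2.], lr.intercept_ -> 1.0

# For curved data (y = x^2), expand features first, then fit a linear model
y_sq = X.ravel() ** 2
poly_lr = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
poly_lr.fit(X, y_sq)
```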
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Gradient Descent | Basic Introduction to Gradient Descent | 👨‍💻 | 🎥 |
| Batch Simple GD | Implementing Simple Batch GD from Scratch | 👨‍💻 | 🎥 |
| Batch GD | Implementing Batch Gradient Descent from Scratch | 👨‍💻 | 🎥 |
| Stochastic GD | Implementing Stochastic Gradient Descent from Scratch | 👨‍💻 | 🎥 |
| Mini Batch GD | Implementing Mini-Batch Gradient Descent from Scratch | 👨‍💻 | 🎥 |
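The "batch" flavour from the table in a dozen lines: every update uses the full dataset's gradient (toy data and learning rate are illustrative):

```python
import numpy as np

# Batch gradient descent for simple linear regression: y_hat = m*x + b
X = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * X + 1  # true slope m=2, intercept b=1

m, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    y_hat = m * X + b
    # Gradients of MSE with respect to m and b, averaged over the full batch
    dm = -2 * np.mean(X * (y - y_hat))
    db = -2 * np.mean(y - y_hat)
    m -= lr * dm
    b -= lr * db
# m and b converge to the true values 2 and 1
```

Stochastic GD replaces `np.mean` over all rows with a single random row per update; mini-batch sits in between.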
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Bias-Variance Trade-off | Understanding Underfitting & Overfitting | - | 🎥 |
| Ridge Regression Geometric Intuition (Part 1) | Introduction to Regularized Linear Models | 👨‍💻 | 🎥 |
| Ridge Regression Mathematical Formulation (Part 2) | Scratch implementation for slope (m) and intercept (b) | 👨‍💻 | 🎥 |
| Ridge Regression Mathematical Formulation (Part 2) | Full Scratch Implementation | 👨‍💻 | 🎥 |
| Ridge Regression (Part 3) | Gradient Descent Implementation | 👨‍💻 | 🎥 |
| 5 Key Points about Ridge Regression | Q&A, Effects, and Insights | 👨‍💻 | 🎥 |
| Lasso Regression | Full Implementation | 👨‍💻 | 🎥 |
| Why Lasso Regression Creates Sparsity | Understanding the Sparsity Effect | 👨‍💻 | 🎥 |
| ElasticNet Regression | Comparison and Effects | 👨‍💻 | 🎥 |
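The three regularizers from the table compared on synthetic data, including the Lasso sparsity effect (data-generating setup is illustrative):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only the first 2 of 10 features actually matter
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

ridge = Ridge(alpha=1.0).fit(X, y)  # L2: shrinks all coefficients toward 0
lasso = Lasso(alpha=0.5).fit(X, y)  # L1: drives useless coefficients to exactly 0
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)  # mix of L1 and L2

n_zero = int(np.sum(lasso.coef_ == 0))  # sparsity: most Lasso coefficients are zero
```

The exact-zero coefficients are why Lasso doubles as a feature selector, which is the point of the "Creates Sparsity" lecture.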
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| LR 1 - Perceptron Trick | Why use it; transformations and the region concept | - | 🎥 |
| LR 2 - Perceptron Trick Code | Math-to-algorithm conversion | 👨‍💻 | 🎥 |
| LR 3 - Sigmoid Function | How the sigmoid function helps to find the error line | 👨‍💻 | 🎥 |
| LR 4 - Math Behind Optimal Line | Maximum likelihood, binary cross-entropy, gradient descent | - | 🎥 |
| Extra - Derivative of Sigmoid | Helps derive the matrix form from the loss function | - | 🎥 |
| LR 5 - Logistic Regression (Gradient Descent) | Scratch implementation | 👨‍💻 | 🎥 |
| LR 6 - Multinomial Logistic Regression | Softmax regression | 👨‍💻 | 🎥 |
| LR 7 - Non-Linear Regression | Polynomial features | 👨‍💻 | 🎥 |
| LR 8 - Hyperparameters | Sklearn documentation and hyperparameter tuning | - | 🎥 |
| P1 Classification Metrics | Accuracy, confusion matrix, Type I & II errors, binary vs. multi-class | 👨‍💻 | 🎥 |
| P2 Classification Metrics Binary | Precision, recall & F1 score (binary) | 👨‍💻 | 🎥 |
| P3 Classification Metrics Multi-Class | Precision, recall & F1 score (multi-class) | 👨‍💻 | 🎥 |
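A compact sketch tying logistic regression to the classification metrics listed above (breast-cancer dataset chosen purely as a runnable binary example):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Scale first so the solver converges cleanly, then fit logistic regression
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

acc = accuracy_score(y_test, pred)
prec = precision_score(y_test, pred)  # TP / (TP + FP)
rec = recall_score(y_test, pred)      # TP / (TP + FN)
f1 = f1_score(y_test, pred)           # harmonic mean of precision and recall
cm = confusion_matrix(y_test, pred)   # rows: true class, columns: predicted class
```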
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| D1 - Decision Tree Geometric Intuition | Entropy, Gini Impurity, Information Gain | - | 🎥 |
| D2 - Hyperparameters | Overfitting and Underfitting | 👨‍💻 | 🎥 |
| D3 - Regression Trees | Numerical Points | 👨‍💻 | 🎥 |
| D4 - Awesome Decision Tree | dtreeviz Library | 👨‍💻 | 🎥 |
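A minimal sketch connecting D1 and D2: the impurity criterion and the depth hyperparameter that controls overfitting (iris is just a convenient built-in dataset):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# criterion selects the impurity measure (entropy -> information gain);
# max_depth caps tree growth to guard against overfitting
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=42)
tree.fit(X, y)

acc = tree.score(X, y)
depth = tree.get_depth()  # never exceeds the max_depth cap
```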
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| Intro to Ensemble Learning | Ensemble techniques in ML | - | 🎥 |
| VE1 - Voting Ensemble | Code overview | - | 🎥 |
| VE2 - Voting Classifier | Hard vs Soft voting | 👨‍💻 | 🎥 |
| VE3 - Voting Ensemble Regression | Ensemble for regression tasks | 👨‍💻 | 🎥 |
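The hard-vs-soft distinction from VE2 in code (base estimators chosen for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
estimators = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier()),
    ("dt", DecisionTreeClassifier(random_state=42)),
]

# Hard voting: majority class label across estimators
hard = VotingClassifier(estimators, voting="hard").fit(X, y)
# Soft voting: average the predicted probabilities (needs predict_proba)
soft = VotingClassifier(estimators, voting="soft").fit(X, y)
```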
| Topic | What You'll Learn | Notebook | Lecture |
|---|---|---|---|
| BE1 - Introduction | Basics of bagging | 👨‍💻 | 🎥 |
| BE2 - Bagging Classifiers | Bagging for classification | 👨‍💻 | 🎥 |
| BE3 - Bagging Regressor | Bagging for regression | 👨‍💻 | 🎥 |
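A minimal bagging sketch: many trees trained on bootstrap samples, with their votes aggregated (the synthetic dataset and parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=42)

# 50 trees, each fit on a bootstrap sample of 80% of the rows;
# predictions are aggregated by majority vote
bag = BaggingClassifier(
    DecisionTreeClassifier(),
    n_estimators=50,
    max_samples=0.8,
    random_state=42,
)
bag.fit(X, y)
acc = bag.score(X, y)
```

Swapping in `BaggingRegressor` with a regression base estimator gives the BE3 variant, where predictions are averaged instead of voted.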
