Welcome to my 30-Day Machine Learning & Deep Learning Challenge repository! This repository is a structured learning journey designed to take you from foundational concepts to advanced deep learning architectures in just 30 days. The goal is not only to understand the theory but also to implement every concept practically in Python, and be able to explain it clearly, even to a beginner.
The purpose of this repository is to provide a complete, hands-on guide to Machine Learning (ML) and Deep Learning (DL) for learners of all levels. By following this challenge, you will:
- Understand the fundamental mathematics behind ML/DL concepts.
- Learn how to implement core algorithms from scratch in Python.
- Build a strong intuition for concepts through analogies and examples.
- Practice explaining topics in your own words to ensure deep comprehension.
- Have a reference that can be revisited, modified, and shared with others for educational purposes.
This repository is designed with volunteering and learning in mind, so anyone can follow along, experiment, and improve their ML/DL skills.
The repository is organized into 30 folders, one for each day of the challenge. Each folder follows the naming convention:

`day_<number>_<topic>`

Example: `day_01_loss_function`
Inside each folder, you will find:
- Python script (`.py`)
  - Contains full working examples of the topic.
  - Includes step-by-step implementation, code comments, and outputs.
- README.md (optional for each day)
- Explains the topic in plain English.
- Includes analogies, mathematical explanations, and mini-exercises.
- Designed so you can explain the concept to a 10-year-old after studying it.
```
ml_fundamentals_challenge/
├── day_01_loss_function/
│   ├── loss_function.py
│   └── README.md
├── day_02_gradient_descent/
│   ├── gradient_descent.py
│   └── README.md
├── day_03_regularization/
│   ├── regularization.py
│   └── README.md
...
├── day_30_final_project/
│   ├── final_project.py
│   └── README.md
└── README.md
```
Each day follows a clear and structured path:
- Theory
  - Full explanation of the concept with formulas.
  - Analogies to make abstract ideas intuitive.
- Practical Implementation
  - Python code with detailed comments.
  - Example outputs to see results in action.
- Verification & Reflection
  - Mini exercises to reinforce learning.
  - Encouragement to explain the topic to others to solidify understanding.
| Week | Days | Focus Area | Key Topics |
|---|---|---|---|
| Week 1 | Days 1-7 | Deep Understanding of Loss & Gradient Descent | Loss Functions, Gradient Descent, Learning Rate, Momentum, Regularization |
| Week 2 | Days 8-14 | ML Foundations | Linear/Logistic Regression, Metrics, Decision Trees, Ensembles, Feature Engineering |
| Week 3 | Days 15-21 | Deep Learning Fundamentals | Perceptron, Neural Networks, Forward/Backward Propagation, Optimization |
| Week 4 | Days 22-30 | Advanced Deep Learning | CNNs, RNNs, LSTM, Transformers, Final Project |
Goal: Understand how models learn from the inside: loss, gradients, optimization steps.
| Day | Topic | Theory | Practice | Goal | 1-Minute LinkedIn Video |
|---|---|---|---|---|---|
| Day 1 | Loss Function Mathematics | MSE, Cross-Entropy, formulas, meaning | Implement MSE and Cross-Entropy with numpy | Explain loss functions to a 10-year-old | https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-machinelearning-activity-7362854560731734016-pKAE/ |
| Day 2 | Introduction to Gradient Descent | Derivative as direction of smallest change | Implement gradient descent for one variable | Understand "rolling down the hill" analogy | https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-machinelearning-activity-7367575410915565568-1euG/ |
| Day 3 | Multidimensional Gradient Descent | Gradients for vectors and matrices | Implement gradient descent for linear regression | Calculate gradient step manually | https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-dl-activity-7368267543695740930-X2Wh/ |
| Day 4 | Learning Rate, Momentum, RMSProp | Why step size regulation matters | Add momentum to gradient descent | Master optimization techniques | https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-machinelearning-activity-7368644528934797313-AKas/ |
| Day 5 | Regularization | L1, L2, Elastic Net, Dropout | Add L2 regularization to linear regression | Understand "penalty for complexity" | https://www.linkedin.com/posts/serhii-kravchenko1_ai-machinelearning-deeplearning-activity-7369021320895963137-Tdqd/ |
| Day 6 | Practice: Gradient Descent + Regularization | Combine concepts | Build model on synthetic data | Experiment with hyperparameters | https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-artificialintelligence-activity-7369390120652808194-B7w8/ |
| Day 7 | Week 1 Explanation | Review and solidify | Explain all concepts in your own words | Deep comprehension check | https://www.linkedin.com/posts/serhii-kravchenko1_ai-artificialintelligence-machinelearning-activity-7369739519358672896-Ac2G/ |
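To give a flavour of the Week 1 material, here is a minimal NumPy sketch in the spirit of Days 1-4 (not taken from the daily scripts; data and hyperparameters are made up for illustration). It uses the MSE loss from Day 1 to drive the gradient-descent-with-momentum loop from Days 2 and 4:

```python
import numpy as np

# Mean Squared Error: average squared difference between targets and predictions
def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

# Fit a single weight w in y = w * x with gradient descent + momentum
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                                 # true weight is 2.0

w, velocity = 0.0, 0.0
lr, beta = 0.05, 0.9                        # learning rate, momentum coefficient

for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)     # d(MSE)/dw
    velocity = beta * velocity - lr * grad  # momentum remembers past gradients
    w += velocity

print(f"learned w = {w:.4f}, loss = {mse(y, w * x):.6f}")  # w approaches 2.0
```

Setting `beta` to 0 recovers plain gradient descent, which is a useful experiment to try when you reach Day 4.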
Goal: Build a foundation for classical algorithms.
| Day | Topic | Theory | Practice | Goal | 1-Minute LinkedIn Video |
|---|---|---|---|---|---|
| Day 8 | Linear Regression | Formulas, MSE, gradient descent vs normal equation | Linear regression on Boston dataset | Master linear relationships | https://www.linkedin.com/posts/serhii-kravchenko1_ai-machinelearning-deeplearning-activity-7370093692860395520-1TJZ/ |
| Day 9 | Logistic Regression | Sigmoid, cross-entropy, gradient descent | Implement from scratch in Python | Understand classification basics | https://www.linkedin.com/posts/serhii-kravchenko1_ai-machinelearning-deeplearning-activity-7371156573056028672-r71Y/ |
| Day 10 | Classification Metrics | Accuracy, Precision, Recall, F1-score, ROC-AUC | Apply sklearn on simple classification | Evaluate model performance | https://www.linkedin.com/posts/serhii-kravchenko1_happy-day-10-of-our-ml-dl-challenge-activity-7371543490503053312-w63y/ |
| Day 11 | Decision Trees | Space partitioning, entropy, Gini | Build decision tree with sklearn | Understand tree-based decisions | https://www.linkedin.com/posts/serhii-kravchenko1_day-11-of-our-ml-challenge-decision-trees-activity-7373332438728589312-Xn8m/ |
| Day 12 | Ensembles: Random Forest, Gradient Boosting | Bagging vs Boosting concepts | Apply Random Forest on dataset | Master ensemble methods | https://www.linkedin.com/posts/serhii-kravchenko1_day-12-of-our-mldl-challenge-mastering-activity-7374071778450788352-5B7j/ |
| Day 13 | Feature Engineering & Scaling | Normalization, standardization, one-hot encoding | Preprocess real dataset | Prepare data for models | https://www.linkedin.com/posts/serhii-kravchenko1_day-13-of-our-ml-challenge-feature-engineering-activity-7374784863599599616-V38G/ |
| Day 14 | Week 2 Explanation | Review ML algorithms and preprocessing | Explain all concepts in your own words | Solidify ML fundamentals | https://www.linkedin.com/posts/serhii-kravchenko1_day-14-of-our-mldl-challenge-week-2-review-activity-7375539431258370048-qe8a/ |
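As a preview of Week 2, the sketch below (illustrative only, not the repository's actual Day 9 code) trains logistic regression from scratch with gradient descent, as on Day 9, then evaluates it with the scikit-learn metrics covered on Day 10:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny synthetic binary-classification problem: one feature, two classes
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
y = np.array([0] * 100 + [1] * 100)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    p = sigmoid(w * x + b)            # predicted probability of class 1
    w -= lr * np.mean((p - y) * x)    # gradient of cross-entropy wrt w
    b -= lr * np.mean(p - y)          # gradient of cross-entropy wrt b

y_pred = (sigmoid(w * x + b) >= 0.5).astype(int)
print("accuracy :", accuracy_score(y, y_pred))
print("precision:", precision_score(y, y_pred))
print("recall   :", recall_score(y, y_pred))
print("f1       :", f1_score(y, y_pred))
```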
Goal: Understand neural network structure and backpropagation.
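Week 3 builds up to writing forward and backward propagation by hand. The sketch below is a minimal, self-contained example of that idea: repeated forward and backward passes through a tiny two-layer network (shapes, data, and hyperparameters are arbitrary, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))             # 4 samples, 3 input features
y = np.array([[0.], [1.], [1.], [0.]])  # binary targets

W1 = rng.normal(size=(3, 5)) * 0.5      # input -> hidden weights
W2 = rng.normal(size=(5, 1)) * 0.5      # hidden -> output weights

for _ in range(500):
    # Forward pass
    h = np.tanh(X @ W1)                 # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2)))     # output probabilities (sigmoid)

    # Backward pass for binary cross-entropy loss
    dlogits = (p - y) / len(X)          # gradient at the output pre-activation
    dW2 = h.T @ dlogits
    dh = dlogits @ W2.T
    dW1 = X.T @ (dh * (1 - h ** 2))     # tanh'(a) = 1 - tanh(a)^2

    # Plain gradient descent step
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1

loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(f"final training loss: {loss:.4f}")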
Goal: Understand CNNs, RNNs, LSTMs, Transformers, and Attention.
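For Week 4, here is a minimal PyTorch sketch of the kind of CNN these days introduce. The layer sizes are arbitrary examples, not the final project's architecture:

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal convolutional network for 28x28 grayscale images."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))             # flatten to (batch, 32*7*7)

model = TinyCNN()
logits = model(torch.randn(8, 1, 28, 28))  # batch of 8 fake images
print(logits.shape)                        # torch.Size([8, 10])
```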
- Clone the repository:

```bash
git clone https://github.com/Serhii2009/ml_fundamentals_challenge
cd ml_fundamentals_challenge
```

- Go through each folder day by day:

```bash
cd day_01_loss_function
# Open the Python script and study it
python loss_function.py
```

- Read the README.md in each folder for explanations, analogies, and exercises.
- Practice by modifying the code, experimenting with parameters, and solving exercises.
- Explain each topic in your own words (even to a 10-year-old!); this is a key step for deep understanding.
- All code is written in Python 3, using NumPy, pandas, scikit-learn, and PyTorch/Keras for deep learning examples.
- Each day builds on the previous, so it's recommended to follow the sequence from Day 1 to Day 30.
- This repository is designed for self-learning, teaching, and collaboration.
By completing this 30-day challenge:
- You will have a solid understanding of ML and DL fundamentals.
- You will be able to implement algorithms from scratch and understand their inner workings.
- You will gain confidence to explain concepts clearly to others.
- You will have a structured portfolio of practical ML/DL projects.
Learning by doing, reflecting, and teaching is the fastest path to mastering Machine Learning and Deep Learning.
If you follow this repository day by day and truly practice each topic, you will understand the math, the code, and the intuition behind every core concept.
This repository is licensed under the MIT License; see the LICENSE file for details.