# 🔥 30-Day Machine Learning & Deep Learning Challenge

Welcome to my 30-Day Machine Learning & Deep Learning Challenge repository! This repository is a structured learning journey designed to take you from foundational concepts to advanced deep learning architectures in just 30 days. The goal is not only to understand the theory, but also to implement every concept practically in Python and to be able to explain it clearly, even to a beginner.


## 🎯 Purpose

The purpose of this repository is to provide a complete, hands-on guide to Machine Learning (ML) and Deep Learning (DL) for learners of all levels. By following this challenge, you will:

  • Understand the fundamental mathematics behind ML/DL concepts.
  • Learn how to implement core algorithms from scratch in Python.
  • Build a strong intuition for concepts through analogies and examples.
  • Practice explaining topics in your own words to ensure deep comprehension.
  • Have a reference that can be revisited, modified, and shared with others for educational purposes.

This repository is designed with volunteering and learning in mind, so anyone can follow along, experiment, and improve their ML/DL skills.


## 📚 Repository Structure

The repository is organized into 30 folders, one for each day of the challenge, named:

  • day_<number>_<topic>
    Example: day_01_loss_function

Inside each folder, you will find:

  1. Python script (.py)
    • Contains full working examples of the topic.
    • Includes step-by-step implementation, code comments, and outputs.
  2. README.md (optional for each day)
    • Explains the topic in plain English.
    • Includes analogies, mathematical explanations, and mini-exercises.
    • Designed so you could explain the concept to a 10-year-old after studying.

### 🗂 Example Folder Structure

```
ml_fundamentals_challenge/
├── day_01_loss_function/
│   ├── loss_function.py
│   └── README.md
├── day_02_gradient_descent/
│   ├── gradient_descent.py
│   └── README.md
├── day_03_regularization/
│   ├── regularization.py
│   └── README.md
...
├── day_30_final_project/
│   ├── final_project.py
│   └── README.md
└── README.md
```

## 🔑 Learning Approach

Each day follows a clear and structured path:

  1. Theory
    • Full explanation of the concept with formulas.
    • Analogies to make abstract ideas intuitive.
  2. Practical Implementation
    • Python code with detailed comments.
    • Example outputs to see results in action.
  3. Verification & Reflection
    • Mini exercises to reinforce learning.
    • Encouragement to explain the topic to others to solidify understanding.

## 📈 30-Day Learning Roadmap

| Week | Days | Focus Area | Key Topics |
|------|------|------------|------------|
| Week 1 | Days 1-7 | Deep Understanding of Loss & Gradient Descent | Loss Functions, Gradient Descent, Learning Rate, Momentum, Regularization |
| Week 2 | Days 8-14 | ML Foundations | Linear/Logistic Regression, Metrics, Decision Trees, Ensembles, Feature Engineering |
| Week 3 | Days 15-21 | Deep Learning Fundamentals | Perceptron, Neural Networks, Forward/Backward Propagation, Optimization |
| Week 4 | Days 22-30 | Advanced Deep Learning | CNNs, RNNs, LSTM, Transformers, Final Project |

### Detailed Day-by-Day Plan

#### Week 1: Deep Understanding of Loss & Gradient Descent

Goal: Understand how models learn from the inside: loss, gradients, optimization steps.

| Day | Topic | Theory | Practice | Goal | 1-Minute LinkedIn Video |
|-----|-------|--------|----------|------|-------------------------|
| Day 1 | Loss Function Mathematics | MSE, Cross-Entropy, formulas, meaning | Implement MSE and Cross-Entropy with NumPy | Explain loss functions to a 10-year-old | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-machinelearning-activity-7362854560731734016-pKAE/) |
| Day 2 | Introduction to Gradient Descent | Derivative as the direction of steepest change | Implement gradient descent for one variable | Understand the "rolling down the hill" analogy | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-machinelearning-activity-7367575410915565568-1euG/) |
| Day 3 | Multidimensional Gradient Descent | Gradients for vectors and matrices | Implement gradient descent for linear regression | Calculate a gradient step manually | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-dl-activity-7368267543695740930-X2Wh/) |
| Day 4 | Learning Rate, Momentum, RMSProp | Why step size regulation matters | Add momentum to gradient descent | Master optimization techniques | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-machinelearning-activity-7368644528934797313-AKas/) |
| Day 5 | Regularization | L1, L2, Elastic Net, Dropout | Add L2 regularization to linear regression | Understand "penalty for complexity" | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-machinelearning-deeplearning-activity-7369021320895963137-Tdqd/) |
| Day 6 | Practice: Gradient Descent + Regularization | Combine concepts | Build a model on synthetic data | Experiment with hyperparameters | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-ml-artificialintelligence-activity-7369390120652808194-B7w8/) |
| Day 7 | Week 1 Explanation | Review and solidify | Explain all concepts in your own words | Deep comprehension check | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-artificialintelligence-machinelearning-activity-7369739519358672896-Ac2G/) |
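
The Day 1 exercise above can be sketched in a few lines of NumPy. This is a minimal illustration of MSE and binary cross-entropy for intuition, not the repository's actual `loss_function.py`:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: average of the squared differences
    return np.mean((y_true - y_pred) ** 2)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # Binary cross-entropy; clipping avoids log(0) for extreme predictions
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.1, 0.8])
print(f"MSE: {mse(y_true, y_pred):.4f}")            # 0.0200
print(f"Cross-entropy: {cross_entropy(y_true, y_pred):.4f}")  # 0.1446
```

Note how confident correct predictions (0.9 for class 1) contribute little to either loss, while the less confident 0.8 dominates the cross-entropy term.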

#### Week 2: ML Foundations

Goal: Build a foundation for classical algorithms.

| Day | Topic | Theory | Practice | Goal | 1-Minute LinkedIn Video |
|-----|-------|--------|----------|------|-------------------------|
| Day 8 | Linear Regression | Formulas, MSE, gradient descent vs normal equation | Linear regression on the Boston dataset | Master linear relationships | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-machinelearning-deeplearning-activity-7370093692860395520-1TJZ/) |
| Day 9 | Logistic Regression | Sigmoid, cross-entropy, gradient descent | Implement from scratch in Python | Understand classification basics | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_ai-machinelearning-deeplearning-activity-7371156573056028672-r71Y/) |
| Day 10 | Classification Metrics | Accuracy, Precision, Recall, F1-score, ROC-AUC | Apply sklearn on a simple classification task | Evaluate model performance | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_happy-day-10-of-our-ml-dl-challenge-activity-7371543490503053312-w63y/) |
| Day 11 | Decision Trees | Space partitioning, entropy, Gini | Build a decision tree with sklearn | Understand tree-based decisions | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_day-11-of-our-ml-challenge-decision-trees-activity-7373332438728589312-Xn8m/) |
| Day 12 | Ensembles: Random Forest, Gradient Boosting | Bagging vs Boosting concepts | Apply Random Forest on a dataset | Master ensemble methods | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_day-12-of-our-mldl-challenge-mastering-activity-7374071778450788352-5B7j/) |
| Day 13 | Feature Engineering & Scaling | Normalization, standardization, one-hot encoding | Preprocess a real dataset | Prepare data for models | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_day-13-of-our-ml-challenge-feature-engineering-activity-7374784863599599616-V38G/) |
| Day 14 | Week 2 Explanation | Review ML algorithms and preprocessing | Explain all concepts in your own words | Solidify ML fundamentals | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_day-14-of-our-mldl-challenge-week-2-review-activity-7375539431258370048-qe8a/) |
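
To illustrate the Day 9 exercise, here is a minimal from-scratch logistic regression trained with batch gradient descent on toy data (a sketch for intuition, not the repository's own implementation):

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=1000):
    # Batch gradient descent on the cross-entropy loss
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)            # predicted probabilities
        grad_w = X.T @ (p - y) / len(y)   # gradient of cross-entropy w.r.t. w
        grad_b = np.mean(p - y)           # gradient w.r.t. the bias
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy separable data: label 1 when the single feature is positive
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = fit_logistic(X, y)
preds = (sigmoid(X @ w + b) >= 0.5).astype(int)
print(preds)  # [0 0 1 1]
```

The gradient `(p - y)` is the same "prediction minus target" shape that appears in linear regression; only the sigmoid and the loss differ.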

#### Week 3: Deep Learning Fundamentals

Goal: Understand neural network structure and backpropagation.

| Day | Topic | Theory | Practice | Goal | 1-Minute LinkedIn Video |
|-----|-------|--------|----------|------|-------------------------|
| Day 15 | Perceptron | Formula, linear combination, activation | Implement a single perceptron | Understand "decision with a weighted brain" | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_day-15-of-our-mldl-challenge-perceptron-activity-7376610160917553152-L33t/) |
| Day 16 | Activation Functions | Sigmoid, ReLU, Tanh, LeakyReLU | Visualize the functions on graphs | Master non-linearity | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_day-16-of-our-deep-learning-challenge-activity-7378428512896065536-AwkL/) |
| Day 17 | Forward Propagation | Signal flow from input to output | Implement forward propagation with NumPy | Understand information flow | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_day-17-of-the-30-day-mldl-challenge-activity-7379484342286458880--qzJ/) |
| Day 18 | Backpropagation | Chain rule, gradients for each layer | Calculate gradients manually for one layer | Master the "backward wave for corrections" | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-%F0%9D%9F%8F%F0%9D%9F%96-%F0%9D%90%81%F0%9D%90%9A%F0%9D%90%9C%F0%9D%90%A4%F0%9D%90%A9%F0%9D%90%AB%F0%9D%90%A8%F0%9D%90%A9%F0%9D%90%9A%F0%9D%90%A0%F0%9D%90%9A%F0%9D%90%AD-activity-7383473254843297792-zN_l/) |
| Day 19 | Optimization | SGD, Momentum, Adam | Implement Adam for a simple NN | Advanced optimization techniques | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_day-19-of-our-mldl-challenge-%F0%9D%90%8D%F0%9D%90%9E%F0%9D%90%AE%F0%9D%90%AB%F0%9D%90%9A%F0%9D%90%A5-activity-7386012096595066880-zyyT/) |
| Day 20 | Practice: NN on MNIST | Combine all concepts | Build a 1-2 layer NN with NumPy | End-to-end neural network | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-20-%F0%9D%90%A8%F0%9D%90%9F-%F0%9D%90%A8%F0%9D%90%AE%F0%9D%90%AB-%F0%9D%90%8C%F0%9D%90%8B%F0%9D%90%83%F0%9D%90%8B-%F0%9D%90%82%F0%9D%90%A1%F0%9D%90%9A-activity-7387467264029265920-yAJZ/) |
| Day 21 | Week 3 Explanation | Review NN, forward/backward propagation | Explain all concepts in your own words | Deep learning comprehension | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-21-%F0%9D%90%96%F0%9D%90%9E%F0%9D%90%9E%F0%9D%90%A4-3-%F0%9D%90%92%F0%9D%90%AE%F0%9D%90%A6%F0%9D%90%A6%F0%9D%90%9A%F0%9D%90%AB%F0%9D%90%B2-%F0%9D%90%A8%F0%9D%90%9F-activity-7388570578460364800-YxPp/) |
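
The Day 17 forward-propagation exercise can be sketched with plain NumPy. This minimal two-layer pass (input to hidden with ReLU, then hidden to output) is an illustration only; the layer sizes and random weights are made up for the example:

```python
import numpy as np

def relu(x):
    # Non-linearity: pass positives through, zero out negatives
    return np.maximum(0.0, x)

def forward(x, W1, b1, W2, b2):
    # Two-layer forward pass: input -> hidden (ReLU) -> output logits
    h = relu(W1 @ x + b1)
    return W2 @ h + b2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # 3 inputs -> 4 hidden units
b1 = np.zeros(4)
W2 = rng.normal(size=(2, 4))  # 4 hidden units -> 2 outputs
b2 = np.zeros(2)

x = np.array([1.0, -0.5, 0.2])
out = forward(x, W1, b1, W2, b2)
print(out.shape)  # (2,)
```

Backpropagation (Day 18) then runs this computation in reverse, applying the chain rule layer by layer to assign blame for the loss to every weight.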

#### Week 4: Advanced DL & Modern Architectures

Goal: Understand CNNs, RNNs, LSTM, Transformers, and Attention.

| Day | Topic | Theory | Practice | Goal | 1-Minute LinkedIn Video |
|-----|-------|--------|----------|------|-------------------------|
| Day 22 | CNN Basics | Convolution, Pooling, Flatten | Simple CNN on MNIST with PyTorch/Keras | Understand spatial processing | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-22-%F0%9D%90%A8%F0%9D%90%9F-%F0%9D%90%A8%F0%9D%90%AE%F0%9D%90%AB-%F0%9D%90%83%F0%9D%90%8B%F0%9D%90%8C%F0%9D%90%8B-%F0%9D%90%82%F0%9D%90%A1%F0%9D%90%9A-activity-7390046464795906050-PK25/) |
| Day 23 | Advanced CNN | Padding, stride, filter size | Visualize feature maps | Master convolutional operations | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-23-%F0%9D%90%A8%F0%9D%90%9F-%F0%9D%90%AD%F0%9D%90%A1%F0%9D%90%9E-30-%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-%F0%9D%90%83%F0%9D%90%8B%F0%9D%90%8C%F0%9D%90%8B-activity-7392570159795941376-lNaV/) |
| Day 24 | RNN Basics | Sequential data, hidden state | Simple RNN on a synthetic sequence | Process sequential information | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-24-%F0%9D%90%A8%F0%9D%90%9F-%F0%9D%90%A8%F0%9D%90%AE%F0%9D%90%AB-%F0%9D%90%8C%F0%9D%90%8B%F0%9D%90%83%F0%9D%90%8B-%F0%9D%90%82%F0%9D%90%A1%F0%9D%90%9A-activity-7394390996068806656-hIT1/) |
| Day 25 | LSTM/GRU | Gates, memory cell | Implement an LSTM with PyTorch/Keras | Handle long-term dependencies | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-25-%F0%9D%90%A8%F0%9D%90%9F-%F0%9D%90%A8%F0%9D%90%AE%F0%9D%90%AB-%F0%9D%90%83%F0%9D%90%8B%F0%9D%90%8C%F0%9D%90%8B-%F0%9D%90%82%F0%9D%90%A1%F0%9D%90%9A-activity-7396231676575662080-m949/) |
| Day 26 | Attention & Transformer | Self-attention, query/key/value | Simple transformer scheme | "Each word looks at others for context" | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-26-%F0%9D%90%A8%F0%9D%90%9F-%F0%9D%90%A8%F0%9D%90%AE%F0%9D%90%AB-%F0%9D%90%8C%F0%9D%90%8B%F0%9D%90%83%F0%9D%90%8B-%F0%9D%90%82%F0%9D%90%A1%F0%9D%90%9A-activity-7399468266399621120-UAH3/) |
| Day 27 | Optimization Tricks | BatchNorm, Dropout, LR scheduler, gradient clipping | Add them to a CNN/RNN | Master training techniques | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-27-%F0%9D%90%A8%F0%9D%90%9F-%F0%9D%90%A8%F0%9D%90%AE%F0%9D%90%AB-30-%F0%9D%90%9D%F0%9D%90%9A%F0%9D%90%B2-%F0%9D%90%8C%F0%9D%90%8B%F0%9D%90%83%F0%9D%90%8B-activity-7402384778739478528-wtW1/) |
| Day 28 | Week 4 Summary | Summary of CNN, Advanced CNN, RNN, LSTM/GRU, Attention, Transformer, Optimization Tricks | Write a structured recap of all Week 4 concepts | Solidify understanding of modern DL architectures | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_github-serhii2009mlfundamentalschallenge-activity-7404915088186585088-Pjj0/) |
| Day 29 | Project: Emotion Detection | CNN for facial emotion recognition; data preprocessing; classification pipeline | Build an Emotion Detection CNN using FER2013 (or a similar dataset): train, evaluate, visualize metrics | Apply deep learning to a real-world vision task | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-29-%F0%9D%90%A8%F0%9D%90%9F-%F0%9D%90%A8%F0%9D%90%AE%F0%9D%90%AB-30-%F0%9D%90%9D%F0%9D%90%9A%F0%9D%90%B2-%F0%9D%90%8C%F0%9D%90%8B%F0%9D%90%83%F0%9D%90%8B-activity-7407085094924042240-GuEn/) |
| Day 30 | Celebration & Reflection | A meaningful wrap-up of the 30-day journey: growth, lessons, mindset, and what comes next | Write a reflective README and record a final video to celebrate the journey | Celebrate the journey and mark the beginning of what's next | [Video](https://www.linkedin.com/posts/serhii-kravchenko1_%F0%9D%90%83%F0%9D%90%9A%F0%9D%90%B2-30-%F0%9D%90%8E%F0%9D%90%9F%F0%9D%90%9F%F0%9D%90%A2%F0%9D%90%9C%F0%9D%90%A2%F0%9D%90%9A%F0%9D%90%A5%F0%9D%90%A5%F0%9D%90%B2-%F0%9D%90%8E%F0%9D%90%AE%F0%9D%90%AB-activity-7410329423587049472-3FAk/) |
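
The Day 26 idea that "each word looks at others for context" can be sketched in NumPy as single-head scaled dot-product attention. This is an illustration only: the random weight matrices stand in for learned projections, and the dimensions are made up for the example:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Single-head self-attention: every token attends to every token
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])         # scaled dot-product similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over tokens (rows sum to 1)
    return weights @ V                             # context-mixed token representations

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # 5 tokens, 8-dimensional embeddings
Wq = rng.normal(size=(8, 8))
Wk = rng.normal(size=(8, 8))
Wv = rng.normal(size=(8, 8))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8): same shape in and out, but each row now mixes context
```

A real transformer stacks many such heads, adds residual connections and layer normalization, and learns `Wq`, `Wk`, `Wv` by backpropagation.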

## 🧩 How to Use This Repository

  1. Clone the repository:

```bash
git clone https://github.com/Serhii2009/ml_fundamentals_challenge
cd ml_fundamentals_challenge
```

  2. Go through each folder day by day:

```bash
cd day_01_loss_function
# Open the Python script and study it
python loss_function.py
```

  3. Read the README.md in each folder for explanations, analogies, and exercises.

  4. Practice by modifying the code, experimenting with parameters, and solving exercises.

  5. Explain each topic in your own words (even to a 10-year-old!), as this is a key step toward deep understanding.


## 💡 Notes

  • All code is written in Python 3, using NumPy, pandas, scikit-learn, and PyTorch/Keras for deep learning examples.
  • Each day builds on the previous, so it's recommended to follow the sequence from Day 1 to Day 30.
  • This repository is designed for self-learning, teaching, and collaboration.

## 🎓 Outcome

By completing this 30-day challenge:

  • You will have a solid understanding of ML and DL fundamentals.
  • You will be able to implement algorithms from scratch and understand their inner workings.
  • You will gain confidence to explain concepts clearly to others.
  • You will have a structured portfolio of practical ML/DL projects.

Learning by doing, reflecting, and teaching is the fastest path to mastering Machine Learning and Deep Learning.

If you follow this repository day by day and truly practice each topic, you will understand the math, the code, and the intuition behind every core concept.


πŸ“ License

This repository is licensed under the MIT License – see the LICENSE file for details.