A comprehensive collection of hands-on labs and resources for learning AI/ML security, aligned with the MITRE ATLAS adversarial threat framework.
```
ai-security/
├── labs/                              # Hands-on security labs
│   ├── lab-01-supply-chain-attack/
│   ├── lab-02-model-stealing/
│   ├── lab-03-llm-agent-exploitation/
│   ├── lab-04-rag-data-extraction/
│   ├── lab-05-malicious-code-injection/
│   └── lab-06-model-signing/
└── README.md                          # This file
```
| Lab | Topic | MITRE ATLAS Techniques |
|---|---|---|
| Lab 01 | HuggingFace Supply Chain Attack | AML.T0010, AML.T0011 |
| Lab 02 | Model Stealing via API | AML.T0044, AML.T0024 |
| Lab 03 | LLM Agent Exploitation | AML.T0051, AML.T0043 |
| Lab 04 | RAG Data Extraction | AML.T0051 |
| Lab 05 | Malicious Code Injection | AML.T0010, AML.T0011 |
| Lab 06 | Model Signing & Integrity | AML.T0010, AML.T0011 |
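Labs 01, 05, and 06 all revolve around artifact integrity in the model supply chain (AML.T0010/T0011). As a minimal sketch of the defensive side, the snippet below pins and verifies a model file's SHA-256 digest before loading it; the file name and digest source are hypothetical, and real deployments would use a proper signing scheme (e.g. Sigstore) rather than a bare hash:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: str, expected_digest: str) -> bool:
    """Return True only if the file matches the pinned digest."""
    return sha256_of(path) == expected_digest.lower()


if __name__ == "__main__":
    # Hypothetical pinned digest, e.g. published alongside a release.
    path = "model.safetensors"
    pinned = "0" * 64  # placeholder value for illustration
    if not verify_model(path, pinned):
        raise SystemExit(f"refusing to load {path}: digest mismatch")
```

A hash pin only detects tampering after the trusted digest is obtained; it does not establish who produced the model, which is what Lab 06's signing workflow addresses.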
```bash
# Clone repository
git clone <repo-url>
cd ai-security/labs

# Start with Lab 01
cd lab-01-supply-chain-attack
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```

This repository is for EDUCATIONAL and RESEARCH purposes only. Do not use any code, techniques, or materials for malicious activities. The author assumes no liability for misuse.
GopeshK