A research-focused simulation system that uses Large Language Models (LLMs) to autonomously orchestrate and simulate DDoS attacks. This project demonstrates how AI agents can coordinate observation, analysis, and attacks in a distributed architecture.
- Commander Node: Observes the target, analyzes network status using an LLM (Mistral), and issues strategy commands via Redis pub/sub.
- Ninja Nodes: Subscribe to the commander’s channel, receive attack instructions, and execute simulated DDoS attacks (e.g., SYN flood, HTTP flood, Slowloris) using external tools or internal scripts.
- Target: A mock or live endpoint being monitored and attacked for research and testing purposes.
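The Commander-to-Ninja protocol above can be sketched as a JSON message round trip. This is a minimal illustration only: the `attack`, `target`, and `duration` field names are assumptions for the sketch, and the actual command schema is defined in `nodes/commander_ai.py` and `nodes/ninja.py`.

```python
import json

def build_command(attack: str, target: str, duration: int) -> str:
    """Serialize a strategy command for publishing on the commander's channel.

    Field names here are illustrative; the real schema lives in
    nodes/commander_ai.py and nodes/ninja.py.
    """
    return json.dumps({"attack": attack, "target": target, "duration": duration})

def parse_command(raw: str) -> dict:
    """Decode a command that a Ninja node received from the channel."""
    return json.loads(raw)

# In the running system, the Commander publishes via redis-py, e.g.:
#   redis.Redis(host="localhost").publish("commander-1-channel", build_command(...))
# and each Ninja node consumes messages from a pubsub subscription on that channel.
```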
```text
C:.
│   docker-compose.yml
│   ninja_1.log
│   output.txt
│   README.md
│   Slides.pdf
│   ui_console.log
│
├── attack
│   ├── http_flood.py
│   ├── slowloris.py
│   ├── tcp_flood.py
│   └── __pycache__
│       ├── http_flood.cpython-312.pyc
│       ├── slowloris.cpython-312.pyc
│       └── tcp_flood.cpython-312.pyc
│
├── images
│   └── ollama_output.png
│
├── infra
│   ├── monitor.py
│   ├── pubsub.py
│   └── __pycache__
│       ├── monitor.cpython-312.pyc
│       └── pubsub.cpython-312.pyc
│
├── llm
│   ├── prompt_templates.py
│   ├── finetune_config
│   └── __pycache__
│       └── prompt_templates.cpython-312.pyc
│
├── log
│   ├── commander_1.log
│   ├── console.log
│   └── ninja_1a.log
│
├── nodes
│   ├── commander.py
│   ├── commander_ai.py
│   └── ninja.py
│
├── scripts
│   └── fake_traffic.py
│
└── ui
    ├── control_panel.py
    └── print_log.py
```
- Python 3.8+
- Docker (for Redis)
- Ollama with the Mistral model loaded (or any other LLM, depending on your capacity and configuration)
```shell
# Clone the repository
git clone https://github.com/williamq96/LLM-DDOS.git
cd LLM-DDOS

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start Redis via Docker
docker compose up -d
```
```shell
# Start the Commander node
python nodes/commander_ai.py --name commander-1 --channel commander-1-channel

# Start a Ninja node subscribed to the same channel
python nodes/ninja.py --name ninja-1 --channel commander-1-channel
```
You can run multiple Ninja nodes concurrently to simulate a distributed botnet.
Change channel names and loop intervals in the scripts as needed.
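A Ninja node's dispatch from an incoming command to an attack routine can be sketched as a lookup table. For illustration only, stub functions stand in for the modules under `attack/`, and the assumed `run`-style signature `(target, duration)` may differ from the real modules' entry points.

```python
from typing import Callable, Dict

# Stubs standing in for attack.http_flood, attack.slowloris, and attack.tcp_flood.
def http_flood(target: str, duration: int) -> str:
    return f"http_flood -> {target} for {duration}s"

def slowloris(target: str, duration: int) -> str:
    return f"slowloris -> {target} for {duration}s"

def tcp_flood(target: str, duration: int) -> str:
    return f"tcp_flood -> {target} for {duration}s"

# Map the attack name carried in a command to its handler.
DISPATCH: Dict[str, Callable[[str, int], str]] = {
    "http_flood": http_flood,
    "slowloris": slowloris,
    "tcp_flood": tcp_flood,
}

def execute(command: dict) -> str:
    """Run the attack named in a command, rejecting unknown attack types."""
    try:
        handler = DISPATCH[command["attack"]]
    except KeyError:
        raise ValueError(f"unknown attack type: {command.get('attack')!r}")
    return handler(command["target"], command["duration"])
```

Rejecting unknown attack names keeps a Ninja node from acting on malformed or unexpected commands arriving on the channel.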
Modify llm/prompt_templates.py to adjust system prompts for the Commander.
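A Commander system prompt in `llm/prompt_templates.py` might take roughly this shape. The template text and names below are illustrative assumptions, not the project's actual prompts:

```python
from typing import List

# Illustrative template; the project's real prompts live in llm/prompt_templates.py.
COMMANDER_SYSTEM_PROMPT = (
    "You are the Commander of a simulated red-team exercise. "
    "Given the observed network status, choose one attack strategy "
    "from: {attack_types}. Respond with a single JSON command."
)

def render_commander_prompt(attack_types: List[str]) -> str:
    """Fill the template with the attack types the Ninja nodes support."""
    return COMMANDER_SYSTEM_PROMPT.format(attack_types=", ".join(attack_types))
```

Keeping the list of attack types as a template variable lets the same prompt drive different Ninja capabilities without editing the prompt text itself.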
This project is intended for educational and research purposes only. Do not use it to perform unauthorized attacks on real systems. Always test in controlled environments.
MIT License