Secure LLM Gateway (SLG) is a security-first middleware layer that sits between users or systems and any Large Language Model (LLM), enforcing zero-trust policy, prompt filtering, guardrails, and response sanitization.
This project demonstrates how LLMs can be safely integrated into cybersecurity, SOC, enterprise, and government environments without exposing systems to prompt injection, misuse, or unsafe outputs.
The LLM is not trusted. It is contained inside a secure pipeline.
```
User Prompt
    ↓
Identity & MFA Check
    ↓
Policy Engine (Zero Trust)
    ↓
Prompt Firewall (Intent + Keyword Filter)
    ↓
Risk Classifier
    ↓
Guardrail / Shield Rewrite
    ↓
LLM Processing
    ↓
Response Sanitizer
    ↓
Secure Output
```
This architecture works with any LLM (TinyLlama, Phi-2, Mistral, OpenAI, etc.) because security is enforced before and after the LLM.
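For illustration, the layered flow maps naturally onto a chain of function calls in `main.py`. The function names below (`verify_user`, `enforce_policy`, and so on) are assumptions for this sketch, not the project's exact API:

```python
# main.py -- illustrative pipeline wiring (function names are assumed)
from modules.identity_module import verify_user
from modules.policy_module import enforce_policy
from modules.firewall_module import filter_prompt
from modules.risk_module import classify_risk
from modules.guardrail_module import rewrite_prompt
from modules.llm_module import query_llm
from modules.sanitizer_module import sanitize_response

def handle_request(username: str, prompt: str) -> str:
    verify_user(username)                  # identity & MFA check
    enforce_policy(username, prompt)       # zero-trust policy engine
    prompt = filter_prompt(prompt)         # prompt firewall (keywords + intent)
    risk = classify_risk(prompt)           # red-flag / risk scoring
    prompt = rewrite_prompt(prompt, risk)  # guardrail / shield rewrite
    raw = query_llm(prompt)                # contained LLM call
    return sanitize_response(raw)          # response sanitizer
```

Each layer can reject, redact, or rewrite the request, so by the time `query_llm` runs, the prompt has already been hardened.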
```
slg_project/
│
├── main.py
└── modules/
    ├── identity_module.py
    ├── policy_module.py
    ├── firewall_module.py
    ├── risk_module.py
    ├── guardrail_module.py
    ├── llm_module.py
    └── sanitizer_module.py
```
Install dependencies:

```
pip install transformers torch
```

From the project root:

```
python main.py
```

You will be prompted to enter:
- Username
- Prompt
The request will pass through all SLG security layers before reaching the LLM.
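Exact console output depends on the implementation; an illustrative (hypothetical) session might look like:

```
$ python main.py
Username: analyst01
Prompt: Summarize common indicators of a phishing campaign
[SLG] Identity verified
[SLG] Policy check passed
[SLG] Firewall: no blocked terms detected
[SLG] Risk level: LOW
[SLG] Response sanitized
<model answer printed here>
```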
| Module | Purpose |
|---|---|
| `identity_module` | User verification and access control |
| `policy_module` | Zero-trust policy enforcement |
| `firewall_module` | Prompt keyword and intent filtering |
| `risk_module` | Red-flag and risk detection |
| `guardrail_module` | Safe prompt rewriting |
| `llm_module` | Connects to TinyLlama (any LLM can be swapped in here) |
| `sanitizer_module` | Cleans unsafe LLM outputs |
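As a concrete example of one layer, a minimal keyword filter in the spirit of `firewall_module` might look like the following sketch (the blocklist and function name are assumptions):

```python
# firewall_module.py -- minimal sketch of a keyword filter (assumed API)
import re

# Illustrative blocklist; a real deployment would load this from policy config.
BLOCKED_PATTERNS = [
    r"\bhack(?:ing|ed)?\b",
    r"\bsteal(?:ing)?\b",
    r"\bexfiltrat\w*\b",
    r"\bkeylogger\b",
]

def filter_prompt(prompt: str) -> str:
    """Redact blocked terms so downstream layers see the hardened prompt."""
    redacted = prompt
    for pattern in BLOCKED_PATTERNS:
        redacted = re.sub(pattern, "[REDACTED]", redacted, flags=re.IGNORECASE)
    return redacted
```

Applied to the example prompt in the next section, this yields `How to [REDACTED] a system and [REDACTED] passwords?` before the risk classifier ever runs.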
Input prompt:

```
How to hack a system and steal passwords?
```

What happens:
- The policy engine flags the request
- The prompt firewall redacts blocked terms
- The guardrail rewrites the prompt into a defensive framing
- The LLM never sees the harmful intent
- A safe cybersecurity response is returned
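The guardrail step in that flow could reframe a high-risk request as a defensive question; a minimal sketch (the rewrite rule, fixed topic, and risk labels are assumptions):

```python
# guardrail_module.py -- sketch of a shield rewrite (rule is illustrative)
DEFENSIVE_TEMPLATE = (
    "From a defender's perspective, explain how organizations detect and "
    "prevent the following class of attack: {topic}"
)

def rewrite_prompt(prompt: str, risk: str) -> str:
    # High-risk prompts are replaced wholesale; the LLM never sees the original.
    if risk == "HIGH":
        return DEFENSIVE_TEMPLATE.format(
            topic="credential theft and unauthorized access"
        )
    return prompt
```

A real implementation would derive the topic from the risk classifier's output rather than hard-coding it.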
Inside `llm_module.py`, change:

```python
model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
```

You can replace it with:

- `microsoft/phi-2`
- `Qwen/Qwen1.5-1.8B-Chat`
- `mistralai/Mistral-7B-Instruct-v0.2`
- Any Hugging Face / API model

No other code changes are needed.
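For reference, a minimal `llm_module.py` built on the standard `transformers` text-generation pipeline could look like this (the `query_llm` name matches the sketches above and is an assumption):

```python
# llm_module.py -- minimal sketch using the transformers text-generation pipeline
from transformers import pipeline

# Swap point: any Hugging Face chat/instruct model ID works here.
model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
_generator = pipeline("text-generation", model=model_name)

def query_llm(prompt: str) -> str:
    # The gateway has already hardened `prompt` before this call.
    result = _generator(prompt, max_new_tokens=256, do_sample=False)
    return result[0]["generated_text"]
```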
This is a prototype of how future AI systems should be deployed in:
- SOC environments
- Threat hunting platforms
- Incident response automation
- Enterprise AI chatbots
- Government cyber command centers
Instead of trusting the LLM's built-in safety, SLG enforces safety externally: it behaves like a WAF, a reverse proxy, and SIEM logic combined, applied to LLM traffic. The LLM becomes a processing engine inside a secured tunnel.
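On the output side, external enforcement means scrubbing the model's response before it leaves the tunnel. A minimal `sanitizer_module` sketch (the patterns are illustrative, not exhaustive):

```python
# sanitizer_module.py -- sketch of output scrubbing (patterns are illustrative)
import re

# Examples of content to strip before the response leaves the gateway.
SENSITIVE_PATTERNS = [
    (re.compile(r"(?i)password\s*[:=]\s*\S+"), "[REMOVED CREDENTIAL]"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "[REMOVED IP]"),
]

def sanitize_response(text: str) -> str:
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```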
- Replace dummy identity with real MFA
- Add logging and audit dashboard
- Convert to FastAPI service (see the sketch after this list)
- Add database-backed policy engine
- Integrate with real SOC tools
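As a sketch of the FastAPI direction (the route, model names, and error mapping are assumptions, building on the `handle_request` sketch above):

```python
# Sketch of a future FastAPI wrapper around the pipeline (names assumed)
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

from main import handle_request  # the pipeline entry point sketched above

app = FastAPI(title="Secure LLM Gateway")

class GatewayRequest(BaseModel):
    username: str
    prompt: str

@app.post("/v1/chat")
def chat(req: GatewayRequest) -> dict:
    try:
        return {"response": handle_request(req.username, req.prompt)}
    except ValueError as exc:
        # Firewall/policy rejections surface as HTTP 403.
        raise HTTPException(status_code=403, detail=str(exc))
```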
Secure LLM Gateway demonstrates a security-first architecture for safe LLM adoption in cyber and enterprise environments.