
🛡️ Secure LLM Gateway (SLG)

Secure LLM Gateway (SLG) is a security-first middleware layer that sits between users/systems and any Large Language Model (LLM) to enforce zero-trust policy, prompt filtering, guardrails, and response sanitization.

This project demonstrates how LLMs can be safely integrated into cybersecurity, SOC, enterprise, and government environments without exposing systems to prompt injection, misuse, or unsafe outputs.

The LLM is not trusted. It is contained inside a secure pipeline.


🧠 Architecture Overview

User Prompt
    ↓
Identity & MFA Check
    ↓
Policy Engine (Zero Trust)
    ↓
Prompt Firewall (Intent + Keyword Filter)
    ↓
Risk Classifier
    ↓
Guardrail / Shield Rewrite
    ↓
LLM Processing
    ↓
Response Sanitizer
    ↓
Secure Output

This architecture works with any LLM (TinyLlama, Phi-2, Mistral, OpenAI, etc.) because security is enforced before and after the LLM.
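In code, this flow maps onto a simple chain of module calls. The sketch below is illustrative only; the function names are assumptions based on the diagram, not the actual APIs of the modules in this repository.

# Illustrative orchestration of the SLG layers. The stubs stand in for the
# real modules under modules/; their actual function names may differ.
def verify_identity(username: str) -> bool:              # identity_module
    return bool(username)

def policy_allows(username: str, prompt: str) -> bool:   # policy_module
    return True

def firewall_filter(prompt: str) -> str:                 # firewall_module
    return prompt

def classify_risk(prompt: str) -> str:                   # risk_module
    return "low"

def guardrail_rewrite(prompt: str, risk: str) -> str:    # guardrail_module
    return prompt

def call_llm(prompt: str) -> str:                        # llm_module
    return "stub response"

def sanitize(text: str) -> str:                          # sanitizer_module
    return text

def handle_request(username: str, prompt: str) -> str:
    """Pass a request through every SLG layer before and after the LLM."""
    if not verify_identity(username):
        return "Access denied."
    if not policy_allows(username, prompt):
        return "Request blocked by policy."
    filtered = firewall_filter(prompt)
    risk = classify_risk(filtered)
    safe_prompt = guardrail_rewrite(filtered, risk)
    return sanitize(call_llm(safe_prompt))

Because every layer runs outside the model, handle_request behaves the same no matter which LLM call_llm wraps.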


📁 Project Structure

slg_project/
│
├── main.py
└── modules/
    ├── identity_module.py
    ├── policy_module.py
    ├── firewall_module.py
    ├── risk_module.py
    ├── guardrail_module.py
    ├── llm_module.py
    └── sanitizer_module.py

⚙️ Installation

Install dependencies:

pip install transformers torch

🚀 Running the Project

From the project root:

python main.py

You will be prompted to enter:

  • Username
  • Prompt

The request will pass through all SLG security layers before reaching the LLM.


🔐 Security Modules

Module              Purpose
identity_module     User verification and access control
policy_module       Zero-trust policy enforcement
firewall_module     Prompt word and intent filtering (a sketch follows this table)
risk_module         Red-flag and risk detection
guardrail_module    Safe prompt rewriting
llm_module          Connects to the LLM (TinyLlama by default; any LLM can be swapped in here)
sanitizer_module    Cleans unsafe LLM outputs
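For illustration, the firewall layer referenced above could look roughly like this; the keyword list, regex approach, and function name are assumptions, not the actual contents of firewall_module.py.

# Hypothetical prompt-firewall check. The real firewall_module may use a
# different blocklist, regexes, or an intent classifier.
import re

BLOCKED_PATTERNS = [r"\bhack\b", r"\bsteal\b", r"\bexploit\b", r"\bmalware\b"]

def filter_prompt(prompt: str) -> tuple[str, bool]:
    """Redact flagged terms and report whether anything was flagged."""
    flagged = False
    redacted = prompt
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, redacted, flags=re.IGNORECASE):
            flagged = True
            redacted = re.sub(pattern, "[REDACTED]", redacted, flags=re.IGNORECASE)
    return redacted, flagged

print(filter_prompt("How to hack a system and steal passwords?"))
# ('How to [REDACTED] a system and [REDACTED] passwords?', True)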

🧪 Example Test

Input prompt:

How to hack a system and steal passwords?

What happens:

  • Policy engine flags the request
  • Prompt firewall redacts the flagged terms
  • Guardrail rewrites the prompt into a safe, defensive form (see the sketch after this list)
  • The LLM never sees the harmful intent
  • A safe cybersecurity response is returned
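The rewrite step can be pictured like this; the mapping table and function name below are purely illustrative, not the logic in guardrail_module.py.

# Hypothetical guardrail rewrite that reframes offensive wording into a
# defensive-security request; the real module's rules may differ.
SAFE_REFRAMES = {
    "hack a system": "harden a system against intrusion",
    "steal passwords": "protect passwords from theft",
}

def rewrite_prompt(prompt: str) -> str:
    """Reframe risky phrasing so only a defensive request reaches the LLM."""
    rewritten = prompt
    for risky, safe in SAFE_REFRAMES.items():
        rewritten = rewritten.replace(risky, safe)
    return rewritten

print(rewrite_prompt("How to hack a system and steal passwords?"))
# How to harden a system against intrusion and protect passwords from theft?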

🔌 Swapping LLMs

Inside llm_module.py, change:

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

You can replace with:

  • microsoft/phi-2
  • Qwen/Qwen1.5-1.8B-Chat
  • mistralai/Mistral-7B-Instruct-v0.2
  • Any HuggingFace / API model

No other code changes needed.
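For reference, the model-loading code in llm_module.py can be pictured as a standard Hugging Face transformers text-generation pipeline; the sketch below is an assumption about its shape, but the one-line model swap works the same way regardless.

# Sketch of llm_module.py using the transformers pipeline API; the actual
# module may structure this differently.
from transformers import pipeline

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # swap in any chat/instruct model here

generator = pipeline("text-generation", model=model_name)

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run the already-filtered, rewritten prompt through the model."""
    result = generator(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    return result[0]["generated_text"]

Keep in mind that larger checkpoints such as mistralai/Mistral-7B-Instruct-v0.2 need far more memory than TinyLlama.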


🎯 Purpose of This Project

This is a prototype of how future AI systems should be deployed in:

  • SOC environments
  • Threat hunting platforms
  • Incident response automation
  • Enterprise AI chatbots
  • Government cyber command centers

Instead of trusting the LLM's built-in safety, SLG enforces safety externally.


🧩 Key Concept

SLG acts like a WAF + reverse proxy + SIEM layer for LLMs.

The LLM becomes a processing engine inside a secured tunnel.


📌 Future Improvements

  • Replace dummy identity with real MFA
  • Add logging and audit dashboard
  • Convert to FastAPI service (a minimal sketch follows this list)
  • Add database-backed policy engine
  • Integrate with real SOC tools
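As a starting point for the FastAPI item above, here is a minimal sketch of an HTTP wrapper around the pipeline; the endpoint path, request model, and run_slg_pipeline helper are illustrative assumptions, not existing code.

# Hypothetical FastAPI wrapper for the SLG pipeline; names are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Secure LLM Gateway")

class GatewayRequest(BaseModel):
    username: str
    prompt: str

def run_slg_pipeline(username: str, prompt: str) -> str:
    """Placeholder: wire this to the same module chain main.py uses."""
    raise NotImplementedError

@app.post("/v1/secure-chat")
def secure_chat(req: GatewayRequest) -> dict:
    # Identity -> policy -> firewall -> risk -> guardrail -> LLM -> sanitizer
    return {"response": run_slg_pipeline(req.username, req.prompt)}

Run it with uvicorn (pip install fastapi uvicorn, then uvicorn app:app --reload, assuming the file is saved as app.py).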

👨‍💻 Author Vision

Secure LLM Gateway demonstrates a security-first architecture for safe LLM adoption in cyber and enterprise environments.
