A secure low code honeypot framework, leveraging AI for System Virtualization.
-
Updated
Jan 26, 2026 - Go
A secure low code honeypot framework, leveraging AI for System Virtualization.
AI Security Platform: Defense (227 engines) + Offense (39K+ payloads) | 🎓 Academy: 159 lessons + 8 labs | RLM-Toolkit | OWASP LLM/ASI Top 10 | Red Team toolkit for AI
🤖 Test and secure AI systems with advanced techniques for Large Language Models, including jailbreaks and automated vulnerability scanners.
Formal safety framework for AI agents. Pluggable LLM reasoning constrained by mathematically proven budget, invariant, and termination guarantee. 7 theorems enforced by construction, not by prompting. Includes Bayesian belief tracking, causal dependency graphs, sandboxed attestors, environment reconciliation, and a 155-test adversarial suite.
An experiment in backdooring a shell safety classifier by planting a hidden trigger in its training data.
TypeScript/JavaScript SDK for AI Agent Security - Drop-in security for LangChain, CrewAI, AutoGPT and custom agents
Add a description, image, and links to the agentic-ai-security topic page so that developers can more easily learn about it.
To associate your repository with the agentic-ai-security topic, visit your repo's landing page and select "manage topics."