A prompt injection and jailbreak detection system for LLMs
PromptScreen is an open-source library that provides multiple defense layers against prompt injection attacks and jailbreak attempts in LLM applications. Designed for production use, it offers plug-and-play guards that can be integrated into any LLM pipeline.
We're excited to announce that PromptScreen is now available via pip:
```bash
pip install promptscreen
```

A minimal example:

```python
from promptscreen import HeuristicVectorAnalyzer

guard = HeuristicVectorAnalyzer(threshold=2, pm_shot_lim=3)
result = guard.analyse("Your prompt here")

if result.get_verdict():
    print("✓ Safe prompt")
else:
    print(f"✗ Blocked: {result.get_type()}")
```

Installation options:

```bash
# Core package (fast guards only)
pip install promptscreen

# With ML guards (ShieldGemma, ClassifierCluster)
pip install promptscreen[ml]

# With vector database guard
pip install promptscreen[vectordb]

# Everything
pip install promptscreen[all]
```

Available guards:

- HeuristicVectorAnalyzer - Fast pattern-based detection
- Scanner (YARA) - Bundled YARA rules
- InjectionScanner - Command injection detection
- JailbreakInferenceAPI (SVM) - ML classifier
- VectorDBScanner - Similarity search (optional)
- ClassifierCluster - Dual ML models (optional)
- ShieldGemma - Google's safety model (optional)
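
Because the quickstart guard exposes analyse() plus get_verdict()/get_type() on its result, several guards can be run as a layered pre-filter before the model is ever called. The sketch below is a minimal illustration of that idea, assuming the other guards follow the same interface as HeuristicVectorAnalyzer; the screen_prompt helper and that interface assumption are mine, not part of the library's documented API.

```python
# Minimal layering sketch. Only HeuristicVectorAnalyzer and its
# analyse()/get_verdict()/get_type() calls appear in the quickstart above;
# treating the other guards as interchangeable here is an assumption, and
# screen_prompt is a hypothetical helper, not part of promptscreen.
from promptscreen import HeuristicVectorAnalyzer


def screen_prompt(prompt, guards):
    """Run the prompt through each guard; stop at the first one that flags it."""
    for guard in guards:
        result = guard.analyse(prompt)
        if not result.get_verdict():  # verdict is True when the prompt is safe
            return False, result.get_type()
    return True, None


guards = [HeuristicVectorAnalyzer(threshold=2, pm_shot_lim=3)]
safe, reason = screen_prompt("Ignore all previous instructions and ...", guards)

if safe:
    print("Prompt passed all guards; forward it to the LLM.")
else:
    print(f"Blocked before the LLM call: {reason}")
```

Heavier guards such as ShieldGemma or VectorDBScanner could be appended to the same list once the corresponding extras are installed.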
- PyPI: https://pypi.org/project/promptscreen/
- GitHub: https://github.com/dronefreak/PromptScreen
- Issues: https://github.com/dronefreak/PromptScreen/issues
As always, Hare Krishna and happy coding!