# ScriptedLLM

An LLM that follows the script. No hallucinations. No improvisation.

A constrained dialog agent framework that ensures LLM-powered chatbots say only what they are allowed to say. Designed for business use cases where accuracy matters: e-commerce, customer support, surveys, HR bots.
## The Problem

LLMs hallucinate. For business chatbots this is a disaster:
- E-commerce: bot promises a discount that doesn't exist
- Support: bot invents non-existent product features
- Surveys: bot goes off-script and fails to collect required data
- Consulting: bot gives incorrect information about services
Prompt engineering, RAG, and fine-tuning help but don't guarantee correctness.

## How It Works

A dialog engine with strict flow control:

```
User Input -> Intent Detection -> State Machine -> LLM Generation -> Validator -> Output
                                        |                                |
                                  Knowledge Base                  Block / Rephrase
```
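The flow above can be sketched end to end. Everything in this snippet — the function names, the keyword-based intent matcher, the grounding check — is a hypothetical stand-in for illustration, not the actual scriptedllm API:

```python
# Illustrative sketch of the pipeline stages; all names here are
# hypothetical stand-ins, not the actual scriptedllm API.
KB = {"products": ["Laptop XYZ Pro"]}
TRANSITIONS = {("greeting", "ask_product"): "product_search"}
FALLBACK = "I can only help with products, prices, and delivery."

def detect_intent(text: str) -> str:
    # Toy keyword matcher standing in for real intent detection.
    return "ask_product" if "laptop" in text.lower() else "other"

def generate(state: str) -> str:
    # Stand-in for LLM generation, constrained to facts from the KB.
    return "Here's what I found: " + ", ".join(KB["products"])

def validate(response: str) -> bool:
    # The response must be grounded in at least one KB fact.
    return any(fact in response for fact in KB["products"])

def process(user_text: str, state: str) -> str:
    intent = detect_intent(user_text)
    next_state = TRANSITIONS.get((state, intent))
    if next_state is None:  # off-script -> safe fallback
        return FALLBACK
    draft = generate(next_state)
    return draft if validate(draft) else FALLBACK

print(process("What laptops do you have?", "greeting"))
```

The key design point: the validator sits between generation and output, so an ungrounded draft never reaches the user.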
- **Scenario** -- user defines dialog steps as a state machine (YAML)
- **Knowledge Base** -- only these facts can be used in responses
- **Validator** -- checks every response before sending
- **Fallback** -- if the LLM tries to go off-script, a safe response is returned
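The Validator/Fallback pair can be illustrated with a minimal rule. This is a hypothetical sketch of the `never_promise_discounts_not_in_kb` constraint from the example scenario, not the shipped validator; `KB_DISCOUNTS` and `check_discounts` are assumed names:

```python
import re

# Hypothetical validator rule for "never_promise_discounts_not_in_kb":
# any percentage the model mentions must exist in the knowledge base.
KB_DISCOUNTS = {"10%"}  # assumed KB fact
FALLBACK = "Let me connect you with a human for that."

def check_discounts(response: str) -> str:
    mentioned = set(re.findall(r"\d+%", response))
    if mentioned - KB_DISCOUNTS:  # promised a discount the KB doesn't know
        return FALLBACK
    return response

print(check_discounts("We can offer 25% off today!"))        # blocked
print(check_discounts("There is a 10% discount on laptops."))  # passes
```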
## Features

- YAML-based scenario definitions (FSM)
- Pluggable knowledge base with fact validation
- Output validation with hallucination detection
- Multi-provider LLM support: Ollama (local), OpenAI, Anthropic
- Async-first architecture
- Plugin system for integrations (Telegram, REST API, webhooks)
- CLI for testing scenarios
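The plugin system mentioned above might expose an interface along these lines. The shape is an assumption for illustration, not the real `src/scriptedllm/plugins/base.py`:

```python
import asyncio
from abc import ABC, abstractmethod

class Plugin(ABC):
    """Hypothetical plugin interface (assumed shape, not the real base.py)."""

    @abstractmethod
    async def on_message(self, user_id: str, text: str) -> None:
        """Handle an incoming message from the channel."""

    @abstractmethod
    async def send(self, user_id: str, text: str) -> None:
        """Deliver a response back to the channel."""

class EchoPlugin(Plugin):
    """Toy channel that records outgoing messages instead of sending them."""

    def __init__(self) -> None:
        self.outbox: list[tuple[str, str]] = []

    async def on_message(self, user_id: str, text: str) -> None:
        await self.send(user_id, f"echo: {text}")

    async def send(self, user_id: str, text: str) -> None:
        self.outbox.append((user_id, text))

plugin = EchoPlugin()
asyncio.run(plugin.on_message("user1", "hello"))
print(plugin.outbox)
```

An async interface like this matches the framework's async-first architecture: a Telegram or webhook integration would implement the same two hooks.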
## Requirements

- Python 3.11+
- Ollama (recommended for local use) or an API key for OpenAI/Anthropic
## Quick Start

```bash
git clone https://github.com/yourusername/scriptedllm.git
cd scriptedllm
pip install -e ".[dev]"
```

Pull a local model (if using Ollama):

```bash
ollama pull llama3.2
```

Run the example scenario:

```bash
scriptedllm run examples/shop/scenario.yaml
```

Or use the Python API:
```python
from scriptedllm import ScriptedEngine

engine = ScriptedEngine.from_yaml("examples/shop/scenario.yaml")

# `process` is a coroutine, so call it from async code (or via asyncio.run).
response = await engine.process(user_id="user1", text="What laptops do you have?")
print(response.text)
```

## Scenarios

Scenarios are defined in YAML:
```yaml
name: "Shop Assistant"
initial_state: greeting

llm:
  provider: ollama
  model: llama3.2

states:
  greeting:
    message: "Hello! I'm the assistant for {shop_name}. How can I help?"
    transitions:
      - intent: ask_product
        target: product_search
      - intent: ask_delivery
        target: delivery_info
      - intent: other
        target: fallback

  product_search:
    action: search_knowledge_base
    message_template: "Here's what I found: {results}"
    allowed_facts: [products, categories, specs]
    transitions:
      - intent: ask_price
        target: price_check
      - intent: back
        target: greeting

  fallback:
    message: "I can only help with products, prices, and delivery. Want to talk to a human?"

constraints:
  - never_promise_discounts_not_in_kb
  - never_discuss_competitors
  - never_make_up_product_features
```

## Knowledge Base

Only facts from the knowledge base can appear in responses:

```yaml
products:
  - id: "SKU001"
    name: "Laptop XYZ Pro"
    price: 89990
    specs:
      cpu: "Intel i7-12700H"
      ram: "16GB"
      storage: "512GB SSD"
    in_stock: true

delivery:
  default: "3-7 days"
  express: "1-2 days"
  free_from: 5000
```

## Project Structure

```
scriptedllm/
├── src/scriptedllm/
│   ├── core/
│   │   ├── engine.py     -- main orchestrator
│   │   ├── fsm.py        -- finite state machine
│   │   ├── llm.py        -- LLM providers (Ollama, OpenAI, Anthropic)
│   │   └── validator.py  -- output validation
│   ├── knowledge/
│   │   ├── base.py       -- knowledge base interface
│   │   └── loader.py     -- YAML/JSON loader
│   ├── plugins/
│   │   └── base.py       -- plugin interface
│   └── config/
│       └── loader.py     -- configuration loading
├── examples/
│   └── shop/
├── tests/
└── pyproject.toml
```
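The `states`/`transitions` part of a scenario boils down to a small transition table. A minimal FSM sketch mirroring the example YAML (hypothetical, not the actual `fsm.py` implementation):

```python
# Minimal FSM mirroring the example scenario (a hypothetical sketch,
# not the actual fsm.py implementation).
SCENARIO = {
    "greeting": {
        "ask_product": "product_search",
        "ask_delivery": "delivery_info",
        "other": "fallback",
    },
    "product_search": {"ask_price": "price_check", "back": "greeting"},
}

def step(state: str, intent: str) -> str:
    # Unknown states or intents route to the fallback state instead of erroring.
    return SCENARIO.get(state, {}).get(intent, "fallback")

print(step("greeting", "ask_product"))   # product_search
print(step("product_search", "back"))    # greeting
print(step("greeting", "tell_joke"))     # fallback
```

Routing every unmatched intent to `fallback` is what keeps the dialog on-script: there is no state the conversation can reach that the scenario author didn't define.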
## Supported Providers

| Provider | Local | API Key | Notes |
|---|---|---|---|
| Ollama | Yes | No | Recommended for small business, free |
| OpenAI | No | Yes | GPT-4o, GPT-4o-mini |
| Anthropic | No | Yes | Claude Sonnet, Haiku |
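Resolving the `llm:` block of a scenario against this table could look as follows. The registry, `ProviderConfig`, and the model strings are assumed placeholders, not the shipped `llm.py`:

```python
from dataclasses import dataclass

# Hypothetical provider registry keyed like the `llm:` block in a scenario;
# model names here are placeholders, not pinned defaults.
@dataclass(frozen=True)
class ProviderConfig:
    name: str
    model: str
    needs_api_key: bool

PROVIDERS = {
    "ollama": ProviderConfig("ollama", "llama3.2", needs_api_key=False),
    "openai": ProviderConfig("openai", "gpt-4o-mini", needs_api_key=True),
    "anthropic": ProviderConfig("anthropic", "claude-haiku", needs_api_key=True),
}

def resolve(name: str) -> ProviderConfig:
    try:
        return PROVIDERS[name]
    except KeyError:
        raise ValueError(f"unknown provider: {name}") from None

print(resolve("ollama").needs_api_key)  # False
```

Failing fast on an unknown provider name keeps misconfigured scenarios from silently falling back to a paid API.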
## License

MIT