
ScriptedLLM

LLM that follows the script. No hallucinations. No improvisation.

A constrained dialog agent framework that ensures LLM-powered chatbots only say what they are allowed to say. Designed for business use cases where accuracy matters: e-commerce, customer support, surveys, HR bots.


The Problem

LLMs hallucinate. For business chatbots this is a disaster:

  • E-commerce: bot promises a discount that doesn't exist
  • Support: bot invents non-existent product features
  • Surveys: bot goes off-script and fails to collect required data
  • Consulting: bot gives incorrect information about services

Prompt engineering, RAG, and fine-tuning help but don't guarantee correctness.

The Solution

A dialog engine with strict flow control:

User Input -> Intent Detection -> State Machine -> LLM Generation -> Validator -> Output
                                       |                                |
                                 Knowledge Base                    Block/Rephrase
  1. Scenario -- user defines dialog steps as a state machine (YAML)
  2. Knowledge Base -- only these facts can be used in responses
  3. Validator -- checks every response before sending
  4. Fallback -- if LLM tries to go off-script, a safe response is returned
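The control flow above can be sketched in a few lines. All names here are illustrative stand-ins, not the actual ScriptedLLM API:

```python
def process_turn(text, state, detect_intent, generate, validate, fallback_text):
    """Run one dialog turn through the constrained pipeline."""
    intent = detect_intent(text)                               # 1. intent detection
    next_state = state["transitions"].get(intent, "fallback")  # 2. state machine
    draft = generate(next_state, text)                         # 3. LLM generation
    if validate(draft):                                        # 4. validator gate
        return next_state, draft
    return next_state, fallback_text                           # 5. safe fallback

# Toy wiring to show the control flow; real components replace the lambdas.
state = {"transitions": {"ask_product": "product_search"}}
_, reply = process_turn(
    "What laptops do you have?",
    state,
    detect_intent=lambda t: "ask_product",
    generate=lambda s, t: "We stock the Laptop XYZ Pro.",
    validate=lambda d: "XYZ Pro" in d,  # stand-in for a KB fact check
    fallback_text="I can only help with products, prices, and delivery.",
)
print(reply)  # We stock the Laptop XYZ Pro.
```

The key property is that the validator sits between generation and output: a draft that fails the check never reaches the user.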

Features

  • YAML-based scenario definitions (FSM)
  • Pluggable knowledge base with fact validation
  • Output validation with hallucination detection
  • Multi-provider LLM support: Ollama (local), OpenAI, Anthropic
  • Async-first architecture
  • Plugin system for integrations (Telegram, REST API, webhooks)
  • CLI for testing scenarios

Quick Start

Requirements

  • Python 3.11+
  • Ollama (recommended for local use) or an API key for OpenAI/Anthropic

Installation

git clone https://github.com/gotogrub/ScriptedLLM.git
cd ScriptedLLM
pip install -e ".[dev]"

Pull a model (Ollama)

ollama pull llama3.2

Run the example

scriptedllm run examples/shop/scenario.yaml

Or use the Python API:

import asyncio

from scriptedllm import ScriptedEngine

async def main():
    engine = ScriptedEngine.from_yaml("examples/shop/scenario.yaml")
    response = await engine.process(user_id="user1", text="What laptops do you have?")
    print(response.text)

asyncio.run(main())

Scenario DSL

Scenarios are defined in YAML:

name: "Shop Assistant"
initial_state: greeting

llm:
  provider: ollama
  model: llama3.2

states:
  greeting:
    message: "Hello! I'm the assistant for {shop_name}. How can I help?"
    transitions:
      - intent: ask_product
        target: product_search
      - intent: ask_delivery
        target: delivery_info
      - intent: other
        target: fallback

  product_search:
    action: search_knowledge_base
    message_template: "Here's what I found: {results}"
    allowed_facts: [products, categories, specs]
    transitions:
      - intent: ask_price
        target: price_check
      - intent: back
        target: greeting

  fallback:
    message: "I can only help with products, prices, and delivery. Want to talk to a human?"

constraints:
  - never_promise_discounts_not_in_kb
  - never_discuss_competitors
  - never_make_up_product_features
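Once parsed, a scenario like the one above is essentially nested dicts. This illustrative snippet (not the engine's real internals) shows how intent-based transitions could resolve, with unknown intents routed to the fallback state:

```python
# Hand-built equivalent of the parsed YAML scenario above (trimmed).
scenario = {
    "initial_state": "greeting",
    "states": {
        "greeting": {
            "transitions": [
                {"intent": "ask_product", "target": "product_search"},
                {"intent": "ask_delivery", "target": "delivery_info"},
                {"intent": "other", "target": "fallback"},
            ],
        },
        "product_search": {
            "transitions": [
                {"intent": "ask_price", "target": "price_check"},
                {"intent": "back", "target": "greeting"},
            ],
        },
        "fallback": {"transitions": []},
    },
}

def next_state(scenario, current, intent):
    """Resolve an intent to the next FSM state; unknown intents fall back."""
    for t in scenario["states"][current]["transitions"]:
        if t["intent"] == intent:
            return t["target"]
    return "fallback"

print(next_state(scenario, "greeting", "ask_product"))    # product_search
print(next_state(scenario, "greeting", "tell_me_a_joke")) # fallback
```

Because every reachable state is declared up front, the bot can never wander into a conversation branch the author didn't write.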

Knowledge Base

products:
  - id: "SKU001"
    name: "Laptop XYZ Pro"
    price: 89990
    specs:
      cpu: "Intel i7-12700H"
      ram: "16GB"
      storage: "512GB SSD"
    in_stock: true

delivery:
  default: "3-7 days"
  express: "1-2 days"
  free_from: 5000
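One way the validator could ground responses in this knowledge base: any number quoted in a reply must actually exist in the KB. The real validator is presumably more sophisticated; this sketch only shows the principle, using the data above:

```python
import re

# The knowledge base above, as parsed data (trimmed).
kb = {
    "products": [{"id": "SKU001", "name": "Laptop XYZ Pro", "price": 89990}],
    "delivery": {"default": "3-7 days", "express": "1-2 days", "free_from": 5000},
}

def prices_are_grounded(response: str) -> bool:
    """Every number quoted in a response must appear in the KB."""
    known = {p["price"] for p in kb["products"]} | {kb["delivery"]["free_from"]}
    quoted = {int(n) for n in re.findall(r"\d+", response)}
    return quoted <= known  # subset check: no invented figures allowed

print(prices_are_grounded("Laptop XYZ Pro costs 89990."))      # True
print(prices_are_grounded("Special offer: only 49990 today!")) # False
```

A response that fails this check would be blocked or rephrased, which is exactly how the "never_promise_discounts_not_in_kb" constraint can be enforced mechanically rather than by prompt wording.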

Architecture

scriptedllm/
├── src/scriptedllm/
│   ├── core/
│   │   ├── engine.py        -- main orchestrator
│   │   ├── fsm.py           -- finite state machine
│   │   ├── llm.py           -- LLM providers (Ollama, OpenAI, Anthropic)
│   │   └── validator.py     -- output validation
│   ├── knowledge/
│   │   ├── base.py          -- knowledge base interface
│   │   └── loader.py        -- YAML/JSON loader
│   ├── plugins/
│   │   └── base.py          -- plugin interface
│   └── config/
│       └── loader.py        -- configuration loading
├── examples/
│   └── shop/
├── tests/
└── pyproject.toml

LLM Providers

Provider    Local   API key   Notes
Ollama      Yes     No        Recommended for small business; free
OpenAI      No      Yes       GPT-4o, GPT-4o-mini
Anthropic   No      Yes       Claude Sonnet, Haiku

License

MIT
