A recursive, multi-agent LLM framework for Go. Build hierarchical AI agent systems with tool use, streaming, conversation memory, and a production-ready HTTP server — all with a single recursive config struct.
Built on top of langchaingo — huge thanks to the langchaingo team for making LLM integration in Go possible.
- Recursive Agent Hierarchy — Orchestrator delegates to sub-agents, which can have their own sub-agents. Same config struct at every level.
- Multi-Provider LLM Support — OpenAI, Anthropic (Claude), Google (Gemini), XAI (Grok), Groq, Ollama. Mix providers across agents.
- ReAct Execution Loop — Agents reason, act (call tools), observe results, and iterate until they reach an answer.
- Streaming Events — Real-time `ReactEvent` stream: iteration starts, LLM thinking, tool calls, final answers.
- Built-in VFS Tools — Every agent gets `ls`, `read_file`, `write_file`, `edit_file`, `grep`, and `glob` out of the box.
- Skills System — Load domain knowledge from Markdown files (with YAML frontmatter) and inject it into agent prompts.
- Middleware Pipeline — Pluggable middleware for logging, Anthropic compatibility, todo lists, context summarization, or custom logic.
- Thread-based Conversations — Persistent conversation history with automatic checkpoint chains.
- Agent Protocol HTTP Server — LangGraph Studio-compatible REST API with SSE streaming, background runs, and cancellation.
- Pluggable Storage — In-memory (default) or MongoDB. Implement the `Store` interface for your own backend.
- Custom Tools — Implement the standard `langchaingo/tools.Tool` interface. Agents discover and call them automatically.
```bash
go get github.com/denizumutdereli/go-deepagent
```

```go
package main
import (
"context"
"fmt"
"os"
"github.com/denizumutdereli/go-deepagent/pkg/agent"
)
func main() {
app, err := agent.New(agent.AgentConfig{
Name: "assistant",
Model: "gpt-4.1",
Prompt: "You are a helpful assistant.",
}, os.Getenv("OPENAI_API_KEY"))
if err != nil {
panic(err)
}
result, err := app.Process(context.Background(), "What is the capital of France?")
if err != nil {
panic(err)
}
fmt.Println(result)
}
```

```text
┌─────────────────────────────────────────────────────┐
│                  Your Application                   │
├─────────────────┬─────────────────┬─────────────────┤
│   pkg/agent     │   pkg/server    │   pkg/store     │
│  Agent Engine   │  HTTP Server    │    Storage      │
│                 │ (Agent Protocol)│ (Memory/Mongo)  │
│ - App           │ - SSE Stream    │                 │
│ - ReAct Loop    │ - Runs          │  pkg/protocol   │
│ - Tools         │ - Threads       │     Types       │
│ - Skills        │ - Events        │                 │
│ - Middleware    │                 │                 │
└─────────────────┴─────────────────┴─────────────────┘
```
| Package | Description |
|---|---|
| `pkg/agent` | Core agent engine — config, ReAct loop, tools, skills, middleware |
| `pkg/server` | Agent Protocol HTTP server with SSE streaming |
| `pkg/store` | Storage backends (in-memory, MongoDB) |
| `pkg/protocol` | Shared types (Thread, Message, Checkpoint, Run) |
The entire system is configured with a single recursive struct:
```go
type AgentConfig struct {
// Identity
Name string // unique name for this agent
Description string // shown to parent agent for routing
// Prompt (required)
Prompt string
// Model — supports "provider:model" format
Provider string // "openai", "anthropic", "google", "xai", "groq", "ollama"
Model string // "gpt-4.1", "anthropic:claude-sonnet-4-20250514", "xai:grok-4-1-fast-reasoning"
APIKey string // falls back to env vars if empty
BaseURL string // custom API endpoint
Temperature float64 // 0.0 - 1.0
// Capabilities
Tools []tools.Tool // langchaingo tool interface
Skills []Skill // domain knowledge definitions
MaxIter int // max ReAct iterations (default: 25)
// Recursive — sub-agents use the SAME struct
SubAgents []AgentConfig
// Infrastructure
Middleware []Middleware
Backend Backend // VFS backend
Store Store // conversation storage
}
```

Specify providers explicitly or use the `"provider:model"` shorthand:

```go
// Auto-detected as OpenAI
Model: "gpt-4.1"
// Explicit provider
Model: "anthropic:claude-sonnet-4-20250514"
// XAI Grok
Model: "xai:grok-4-1-fast-reasoning"
// Google Gemini
Model: "google:gemini-2.5-flash"
// Ollama (local)
Model: "ollama:llama3",
BaseURL: "http://localhost:11434",API keys are resolved in this order:
AgentConfig.APIKeyfield- Fallback API key passed to
agent.New() - Environment variables:
OPENAI_API_KEY,ANTHROPIC_API_KEY,GOOGLE_API_KEY,XAI_API_KEY,GROQ_API_KEY
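A minimal sketch of that resolution in practice (`MY_ANTHROPIC_KEY` and `fallbackKey` are placeholders, not names the library defines):

```go
app, err := agent.New(agent.AgentConfig{
	Name:   "assistant",
	Model:  "anthropic:claude-sonnet-4-20250514",
	APIKey: os.Getenv("MY_ANTHROPIC_KEY"), // 1. explicit field wins when set
	Prompt: "You are a helpful assistant.",
}, fallbackKey) // 2. used for any agent whose APIKey is empty
// 3. if both are empty, ANTHROPIC_API_KEY is read from the environment
```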
Build hierarchical agent systems where the orchestrator automatically routes tasks to specialized sub-agents:
```go
app, err := agent.New(agent.AgentConfig{
Name: "orchestrator",
Model: "gpt-4.1",
Prompt: "Route tasks to the best sub-agent.",
SubAgents: []agent.AgentConfig{
{
Name: "researcher",
Description: "Searches the web for information",
Model: "xai:grok-4-1-fast-reasoning",
APIKey: xaiKey,
Tools: []tools.Tool{searchTool},
Prompt: "You are a web researcher...",
},
{
Name: "coder",
Description: "Writes and analyzes code",
Model: "anthropic:claude-sonnet-4-20250514",
APIKey: anthropicKey,
Prompt: "You are a software engineer...",
},
{
Name: "casual",
Description: "Handles casual conversation",
Model: "gpt-4.1-mini",
Prompt: "You are a friendly assistant...",
},
},
}, openaiKey)
```

The orchestrator automatically gets a `task` tool that delegates to sub-agents based on their descriptions.
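Because `SubAgents` is itself a `[]AgentConfig`, hierarchies nest to any depth — a sketch with illustrative names and prompts:

```go
cfg := agent.AgentConfig{
	Name:   "orchestrator",
	Model:  "gpt-4.1",
	Prompt: "Route tasks to the best sub-agent.",
	SubAgents: []agent.AgentConfig{{
		Name:        "research-lead",
		Description: "Coordinates research tasks",
		Model:       "gpt-4.1",
		Prompt:      "Break research tasks down and delegate.",
		// Same struct one level down — this sub-agent has its own sub-agents.
		SubAgents: []agent.AgentConfig{
			{Name: "web", Description: "Searches the web", Model: "xai:grok-4-1-fast-reasoning", Prompt: "You are a web researcher."},
			{Name: "code", Description: "Analyzes code", Model: "anthropic:claude-sonnet-4-20250514", Prompt: "You are a software engineer."},
		},
	}},
}
```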
Get real-time visibility into agent execution:
```go
eventCh := make(chan agent.ReactEvent, 100)
go func() {
for evt := range eventCh {
switch evt.Type {
case agent.EventIterationStart:
fmt.Printf("⟳ [%s] iteration %d\n", evt.Agent, evt.Iteration)
case agent.EventLLMResponse:
fmt.Printf("💭 [%s] %s\n", evt.Agent, evt.Content)
case agent.EventToolStart:
fmt.Printf("🔧 [%s] calling %s\n", evt.Agent, evt.ToolName)
case agent.EventToolEnd:
fmt.Printf("✓ [%s] %s done\n", evt.Agent, evt.ToolName)
case agent.EventFinalAnswer:
fmt.Printf("✅ [%s] %s\n", evt.Agent, evt.Content)
}
}
}()
result, err := app.SendWithEvents(ctx, threadID, "Analyze this data", eventCh)
close(eventCh)
```

| Event | Description |
|---|---|
| `EventIterationStart` | New ReAct iteration beginning |
| `EventLLMResponse` | LLM thinking/reasoning output (before tool calls) |
| `EventToolStart` | Tool invocation starting |
| `EventToolEnd` | Tool invocation completed with result |
| `EventFinalAnswer` | Agent has reached its final answer |
Maintain conversation history across multiple interactions:
```go
// Create a thread
threadID, err := app.CreateThread(ctx, agent.ThreadConfig{
UserID: "user-123",
})
// Send messages — history is managed automatically
result1, _ := app.Send(ctx, threadID, "What is Go?")
result2, _ := app.Send(ctx, threadID, "How does it handle concurrency?") // remembers context
// Read thread history
thread, _ := app.GetThread(ctx, threadID)
for _, msg := range thread.Messages {
fmt.Printf("[%s] %s\n", msg.Role, msg.Content)
}
// List checkpoints (snapshots after each interaction)
checkpoints, _ := app.ListCheckpoints(ctx, threadID, 10)
// List all threads
threads, _ := app.ListThreads(ctx, "user-123", 20, 0)
```

Implement the standard `langchaingo/tools.Tool` interface:

```go
type WebSearchTool struct {
apiKey string
}
func (t *WebSearchTool) Name() string { return "web_search" }
func (t *WebSearchTool) Description() string { return "Search the web. Input: JSON {\"query\": \"search terms\"}" }
func (t *WebSearchTool) Call(ctx context.Context, input string) (string, error) {
var args struct {
Query string `json:"query"`
}
	if err := json.Unmarshal([]byte(input), &args); err != nil {
		return "", fmt.Errorf("invalid input: %w", err)
	}
// ... perform search ...
return results, nil
}
```

Load domain knowledge from Markdown files with YAML frontmatter:

```markdown
---
name: security-audit
description: Smart contract security methodology
---
# Security Audit Process
1. Check for reentrancy vulnerabilities
2. Verify access control patterns
3. Review arithmetic operations for overflow
...
```

```go
// Load from file
skill, err := agent.SkillFromFile("skills/security-audit.md")
// Load from embedded filesystem
skill, err := agent.SkillFromEmbed(embedFS, "skills/security-audit.md")
// Create inline
skill := agent.NewSkill("math", "Math helper", "You can solve equations...")
// Use in agent config
cfg := agent.AgentConfig{
Skills: []agent.Skill{skill},
// ...
}
```

Plug into the agent execution pipeline:

```go
// Built-in middleware
agent.LoggingMiddleware() // Log all LLM calls
agent.AnthropicSanitizeMiddleware() // Auto-injected for Claude models
agent.TodoListMiddleware() // Adds todo list tool
agent.SummarizationMiddleware(cfg) // Context window management
// Custom middleware
func MyMiddleware() agent.Middleware {
return agent.Middleware{
Name: "my-middleware",
OnInvoke: func(ctx *agent.InvokeContext, next func()) {
// Before LLM call
fmt.Printf("Agent %s, iteration %d\n", ctx.AgentName, ctx.Iteration)
next() // Continue chain
// After LLM call
fmt.Printf("Output: %s\n", ctx.Output)
},
}
}
```

Start a production-ready HTTP server compatible with LangGraph Studio:

```go
import "github.com/denizumutdereli/go-deepagent/pkg/server"
srv, err := server.New(server.ServerConfig{
App: app,
Port: "8080",
Runner: server.RunnerConfig{
MaxConcurrent: 50,
RunTimeout: 5 * time.Minute,
ShutdownTimeout: 30 * time.Second,
},
})
srv.Start()
```

```text
GET    /api/health                Health check

POST   /threads                   Create thread
POST   /threads/search            Search threads
GET    /threads/{id}              Get thread + messages
DELETE /threads/{id}              Delete thread
GET    /threads/{id}/history      Checkpoint history

POST   /threads/{id}/runs         Background run
POST   /threads/{id}/runs/stream  Run + SSE stream
POST   /threads/{id}/runs/wait    Run + wait for result

POST   /runs                      Stateless background run
POST   /runs/stream               Stateless run + SSE stream
POST   /runs/wait                 Stateless run + wait
GET    /runs/{id}                 Get run status
GET    /runs/{id}/stream          Reconnect to SSE stream
GET    /runs/{id}/wait            Wait for completion
POST   /runs/{id}/cancel          Cancel run
```
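A quick smoke test against a running server, using only the standard library (port 8080 matches the config above):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:8080/api/health")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // expect 200 from a healthy server
}
```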
Mount the Agent Protocol routes on a chi router you control — for example, to add auth middleware or custom endpoints:

```go
r := chi.NewRouter()
r.Use(myAuthMiddleware)
r.Get("/custom", myHandler)
srv, _ := server.New(server.ServerConfig{
App: app,
Router: r, // Agent Protocol routes are mounted on your router
})
```

Customize how events are serialized onto the SSE stream:

```go
srv, _ := server.New(server.ServerConfig{
App: app,
SSEFormatter: func(evt agent.ReactEvent) (string, []byte) {
// Custom SSE event formatting
		data, _ := json.Marshal(evt)
		return string(evt.Type), data
},
})
```

The in-memory store is used automatically when no `Store` is provided:

```go
// Automatically used when no Store is provided
app, _ := agent.New(cfg, apiKey)
```

For persistence, use the MongoDB store:

```go
import "github.com/denizumutdereli/go-deepagent/pkg/store"

ms, err := store.ConnectMongo(ctx, "mongodb://localhost:27017", "mydb")
app, _ := agent.New(agent.AgentConfig{
Store: ms,
// ...
}, apiKey)
```

Implement the `agent.Store` interface for your own backend:

```go
type Store interface {
CreateThread(ctx context.Context, t *protocol.Thread) error
GetThread(ctx context.Context, id string) (*protocol.Thread, error)
SearchThreads(ctx context.Context, userID string, limit, offset int) ([]*protocol.Thread, error)
SetThreadStatus(ctx context.Context, id string, status protocol.ThreadStatus) error
AppendMessage(ctx context.Context, threadID string, msg protocol.Message) error
DeleteThread(ctx context.Context, id string) error
SaveCheckpoint(ctx context.Context, cp *protocol.Checkpoint) error
GetLatestCheckpoint(ctx context.Context, threadID string) (*protocol.Checkpoint, error)
ListCheckpoints(ctx context.Context, threadID string, limit int, before string) ([]*protocol.Checkpoint, error)
}
```

An interactive CLI with X/Twitter research (via XAI Grok), web search (via Tavily), and casual chat — plus a `--serve` mode for the HTTP API.

```bash
cd examples/researcher
cp .env.example .env # fill in your keys
go run .
```

Features:
- 3 sub-agents: researcher (XAI), websearch (Tavily), casual (GPT)
- Thread management: `new`, `thread`, `threads`, `checkpoints`
- Server mode: `go run . --serve 8080`
- MongoDB support: `go run . --store mongodb://localhost:27017/researcher`
A blockchain security auditor with real on-chain tools — Etherscan API, Alchemy RPC, ABI decoding — across multiple chains.
```bash
cd examples/onchain-auditor
cp .env.example .env # fill in your keys
go run .
```

Features:
- 4 sub-agents: security-auditor (Gemini), token-analyst (GPT), tx-investigator (GPT), chain-scanner (XAI)
- Multi-chain support: Ethereum, Polygon, BSC, Arbitrum, Optimism, Base, Avalanche
- Real on-chain tools: contract source, token transfers, balances, event logs, ABI decoding
- Security audit skills: SWC attack vectors, DeFi patterns, audit methodology
Every agent automatically receives these filesystem tools:
| Tool | Description |
|---|---|
| `ls` | List directory contents |
| `read_file` | Read file contents |
| `write_file` | Write content to a file |
| `edit_file` | Find-and-replace edit |
| `grep` | Search file contents |
| `glob` | Find files by pattern |
The VFS backend is pluggable — the default is in-memory (`afero.MemMapFs`), but you can use the OS filesystem or any `afero.Fs` implementation.
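For example, to let agents work against a real directory, you could back the VFS with afero's OS filesystem. A sketch, assuming a constructor like `agent.NewAferoBackend` exists (hypothetical name — check `pkg/agent` for the actual API):

```go
package main

import (
	"github.com/spf13/afero"

	"github.com/denizumutdereli/go-deepagent/pkg/agent"
)

func newConfig() agent.AgentConfig {
	// Sandbox the agent's VFS tools to ./workspace on the real filesystem.
	fs := afero.NewBasePathFs(afero.NewOsFs(), "./workspace")

	return agent.AgentConfig{
		Name:    "assistant",
		Model:   "gpt-4.1",
		Prompt:  "You are a helpful assistant.",
		Backend: agent.NewAferoBackend(fs), // hypothetical constructor name
	}
}
```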
```bash
# Run all tests
go test ./...
# Run with verbose output
go test -v ./pkg/agent/...
# E2E tests (require API keys in .env.test)
cp .env.example .env.test
# Fill in your keys
go test -v ./pkg/agent/ -run TestE2E
```

```bash
# Start MongoDB (for persistent storage)
docker-compose up -d
# Stop
docker-compose down
```

- langchaingo — The Go LLM framework that makes this possible. Thank you for the excellent work on bringing LLM tooling to the Go ecosystem.
Contributions are welcome! Please see CONTRIBUTING.md for guidelines.