Open-source AI agent framework with a visual workflow editor, self-hosted inference, and one-click deployment
Website · X (Twitter) · Telegram
Obelisk Core is an open-source framework for building, running, and deploying AI agents. Design workflows visually, connect to a self-hosted LLM, and deploy autonomous agents, all from your own hardware.
Status: Beta (v0.2.0-beta)
Obelisk Core has three components that work together:
┌────────────────────────────────────┐
│  Visual Workflow Editor            │  ←  Browser UI (Next.js)
│  Design agent workflows with       │     Build, test, and deploy
│  drag-and-drop nodes               │     workflows visually
└────────────────┬───────────────────┘
                 │ executes
┌────────────────▼───────────────────┐
│  TypeScript Execution Engine       │  ←  Agent Runtime (Node.js)
│  Runs workflows as autonomous      │     Nodes: inference, Telegram,
│  agents in Docker containers       │     memory, scheduling, Clanker, etc.
└────────────────┬───────────────────┘
                 │ calls
┌────────────────▼───────────────────┐
│  Python Inference Service          │  ←  LLM Server (FastAPI + PyTorch)
│  Self-hosted Qwen3 model with      │     Runs on GPU, serves via
│  thinking mode and API auth        │     REST API with auth
└────────────────────────────────────┘
- UI – A visual node editor (like ComfyUI) where you wire up agent workflows
- Execution Engine – TypeScript runtime that processes workflows node by node and runs agents in Docker containers
- Inference Service – Python FastAPI server that loads and serves a local LLM (Qwen3-0.6B) on your GPU
- Visual Workflow Editor – Drag-and-drop node-based editor to design agent logic
- Self-Hosted LLM – Qwen3-0.6B with thinking mode, no external API calls required
- Autonomous Agents – Deploy workflows as long-running Docker containers
- Telegram Integration – Listener and sender nodes for building Telegram bots
- Conversation Memory – Persistent memory with automatic summarization
- Binary Intent – Yes/no decision nodes for conditional workflow logic
- Wallet Authentication – Privy-based wallet connect for managing deployed agents
- Clanker / Blockchain – Blockchain Config, Clanker Launch Summary (recent launches plus stats for the LLM), Wallet, Clanker Buy/Sell (V4 swaps via CabalSwapper), and Action Router nodes. An onSwap trigger (last_swap.json) drives a second loop, On Swap Trigger → Bag Checker (profit/stop-loss) → Clanker Sell, while bag state (clanker_bags.json) tracks holdings and targets.
- Scheduling – Cron-like scheduling nodes for periodic tasks
- One-Click Deploy – Deploy agents from the UI with environment variable injection
- Node.js 20+ and npm
- Python 3.10–3.12 with a CUDA-capable GPU (for the inference service)
- Docker (for running deployed agents)
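A quick way to sanity-check the toolchain before you start (standard version flags only; nvidia-smi matters only if the inference service will run on your GPU):

node --version      # expect v20 or newer
python3 --version   # expect 3.10–3.12
docker --version
nvidia-smi          # confirms the GPU and driver are visible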
Clone the repository:

git clone https://github.com/ohnodev/obelisk-core.git
cd obelisk-core

The inference service hosts the LLM and serves it via a REST API.
# Create Python venv and install dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Configure (optional – defaults work for local dev)
cp .env.example .env
# Edit .env if you want to set an API key or change the port
# Start the inference service
python3 -m uvicorn src.inference.server:app --host 127.0.0.1 --port 7780

The first run downloads the Qwen3-0.6B model (~600MB). Once running, test it:
curl http://localhost:7780/health
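If CUDA isn't available, or you just want to try the service without a GPU, you can force CPU inference. A minimal sketch, assuming the service reads INFERENCE_DEVICE from the environment as listed in the Configuration table below:

# Force CPU inference (slower, but needs no CUDA)
INFERENCE_DEVICE=cpu python3 -m uvicorn src.inference.server:app --host 127.0.0.1 --port 7780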
Next, build the TypeScript execution engine:

cd ts
npm install
npm run build
cd ..
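The engine ships with a Vitest suite under ts/tests/. Assuming vitest is wired up as a dev dependency of the ts package, you can run it directly:

cd ts
npx vitest run   # runs the test suite in ts/tests/
cd ..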
With the engine built, install and start the UI:

cd ui
npm install
npm run dev

Open http://localhost:3000 in your browser. You should see the visual workflow editor.
- The default workflow is pre-loaded – it includes a Telegram bot setup
- Click Queue Prompt (▶) to execute the workflow
- The output appears in the output nodes on the canvas
We provide a pm2-manager.sh script that manages both services:
# Start everything
./pm2-manager.sh start
# Restart services (clears logs)
./pm2-manager.sh restart
# Stop everything
./pm2-manager.sh stop
# View status
./pm2-manager.sh status
# View logs
./pm2-manager.sh logs

PM2 keeps the inference service and execution engine running, auto-restarts them on crashes, and manages log files.
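Because the script drives pm2 under the hood, the regular pm2 CLI also works for direct inspection, provided pm2 is on your PATH (the process names depend on how pm2-manager.sh registers them):

pm2 list              # show all managed processes and their status
pm2 logs --lines 100  # tail recent output from every process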
Agents are workflows packaged into Docker containers that run autonomously.
Build the agent image:

docker build -t obelisk-agent:latest -f docker/Dockerfile .

To deploy from the UI:

- Connect your wallet in the UI toolbar
- Design your workflow (or use the default)
- Click Deploy – the UI sends the workflow to your deployment service
- The agent runs in a Docker container on your machine
- Manage running agents at /deployments
You can also run an agent manually with Docker:

docker run -d \
--name my-agent \
-e WORKFLOW_JSON='<your workflow JSON>' \
-e AGENT_ID=agent-001 \
-e AGENT_NAME="My Bot" \
-e INFERENCE_SERVICE_URL=http://host.docker.internal:7780 \
-e TELEGRAM_BOT_TOKEN=your_token \
obelisk-agent:latest

See docker/README.md for full details on environment variables, resource limits, and Docker Compose.
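Whether it was deployed from the UI or started manually, each agent is an ordinary container, so the usual Docker commands apply:

docker ps                  # deployed agents show up as running containers
docker logs -f my-agent    # follow an agent's output
docker stop my-agent       # stop the agent
docker rm my-agent         # remove the container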
| Node | Description |
|---|---|
| Text | Static text input/output |
| Inference | Calls the LLM via the inference service |
| Inference Config | Configures model parameters (temperature, max tokens, thinking mode) |
| Binary Intent | Yes/no classification for conditional logic |
| Telegram Listener | Polls for incoming Telegram messages |
| TG Send Message | Sends messages via Telegram Bot API (supports quote-reply) |
| Memory Creator | Creates conversation summaries |
| Memory Selector | Retrieves relevant memories for context |
| Memory Storage | Persists memories to storage |
| Telegram Memory Creator | Telegram-specific memory summarization |
| Telegram Memory Selector | Telegram-specific memory retrieval |
| Scheduler | Cron-based scheduling for periodic execution |
obelisk-core/
├── src/inference/        # Python inference service (FastAPI + PyTorch)
│   ├── server.py         # REST API server
│   ├── model.py          # LLM loading and generation
│   ├── queue.py          # Async request queue
│   └── config.py         # Inference configuration
├── ts/                   # TypeScript execution engine
│   ├── src/
│   │   ├── core/         # Workflow runner, node execution
│   │   │   └── execution/
│   │   │       ├── runner.ts
│   │   │       └── nodes/    # All node implementations
│   │   └── utils/        # JSON parsing, logging, etc.
│   └── tests/            # Vitest test suite
├── ui/                   # Next.js visual workflow editor
│   ├── app/              # Pages (editor, deployments)
│   ├── components/       # React components (Canvas, Toolbar, nodes)
│   └── lib/              # Utilities (litegraph, wallet, API config)
├── docker/               # Dockerfile and compose for agent containers
├── pm2-manager.sh        # PM2 process manager script
├── requirements.txt      # Python deps (inference service only)
└── .env.example          # Environment variable template
Copy .env.example to .env:
cp .env.example .env

Key variables:
| Variable | Description | Default |
|---|---|---|
| INFERENCE_HOST | Inference service bind address | 127.0.0.1 |
| INFERENCE_PORT | Inference service port | 7780 |
| INFERENCE_API_KEY | API key for inference auth (optional for local dev) | – |
| INFERENCE_DEVICE | PyTorch device (cuda, cpu) | auto-detect |
| INFERENCE_SERVICE_URL | URL agents use to reach inference | http://localhost:7780 |
| TELEGRAM_DEV_AGENT_BOT_TOKEN | Default Telegram bot token for dev | – |
| TELEGRAM_CHAT_ID | Default Telegram chat ID for dev | – |
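As an example, a minimal .env for local development with API-key auth enabled might look like this (all values are placeholders):

INFERENCE_HOST=127.0.0.1
INFERENCE_PORT=7780
INFERENCE_API_KEY=replace-with-a-long-random-string
INFERENCE_DEVICE=cuda
INFERENCE_SERVICE_URL=http://localhost:7780
TELEGRAM_DEV_AGENT_BOT_TOKEN=123456789:replace-with-your-bot-token
TELEGRAM_CHAT_ID=-1001234567890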
For remote inference setup (GPU VPS), see INFERENCE_SERVER_SETUP.md.
- Quick Start Guide – Get running in 5 minutes
- Inference API – Inference service endpoints
- Inference Server Setup – Deploy inference on a GPU VPS
- Docker Agents – Build and run agent containers
- UI Guide – Visual workflow editor
- Contributing – How to contribute
- Security – Security best practices
- Changelog – Version history
This project is licensed under the MIT License – see the LICENSE file for details.
Contributions are welcome! See CONTRIBUTING.md for guidelines.
Built with ❤️ by The Obelisk
