Obelisk Core

Open-source AI agent framework with a visual workflow editor, self-hosted inference, and one-click deployment


🌐 Website · 𝕏 X (Twitter) · 💬 Telegram

Obelisk Core is an open-source framework for building, running, and deploying AI agents. Design workflows visually, connect to a self-hosted LLM, and deploy autonomous agents, all from your own hardware.

Status: 🟢 Beta (v0.2.0-beta)


How It Works

Obelisk Core has three components that work together:

┌──────────────────────────────────┐
│      Visual Workflow Editor      │     ← Browser UI (Next.js)
│   Design agent workflows with    │     Build, test, and deploy
│   drag-and-drop nodes            │     workflows visually
└──────────────┬───────────────────┘
               │ executes
┌──────────────▼───────────────────┐
│   TypeScript Execution Engine    │     ← Agent Runtime (Node.js)
│   Runs workflows as autonomous   │     Nodes: inference, Telegram,
│   agents in Docker containers    │     memory, scheduling, Clanker, etc.
└──────────────┬───────────────────┘
               │ calls
┌──────────────▼───────────────────┐
│     Python Inference Service     │     ← LLM Server (FastAPI + PyTorch)
│   Self-hosted Qwen3 model with   │     Runs on GPU, serves via
│   thinking mode and API auth     │     REST API with auth
└──────────────────────────────────┘

  1. UI – A visual node editor (like ComfyUI) where you wire up agent workflows
  2. Execution Engine – A TypeScript runtime that processes workflows node by node and runs agents in Docker containers
  3. Inference Service – A Python FastAPI server that loads and serves a local LLM (Qwen3-0.6B) on your GPU

Features

  • Visual Workflow Editor – Drag-and-drop node-based editor for designing agent logic
  • Self-Hosted LLM – Qwen3-0.6B with thinking mode; no external API calls required
  • Autonomous Agents – Deploy workflows as long-running Docker containers
  • Telegram Integration – Listener and sender nodes for building Telegram bots
  • Conversation Memory – Persistent memory with automatic summarization
  • Binary Intent – Yes/no decision nodes for conditional workflow logic
  • Wallet Authentication – Privy-based wallet connect for managing deployed agents
  • Clanker / Blockchain – Blockchain Config, Clanker Launch Summary (recent launches and stats for the LLM), Wallet node, Clanker Buy/Sell (V4 swaps via CabalSwapper), and Action Router nodes. An onSwap trigger (last_swap.json) drives a second loop, On Swap Trigger → Bag Checker (profit/stop-loss) → Clanker Sell, with bag state (clanker_bags.json) tracking holdings and targets (see the sketch after this list)
  • Scheduling – Cron-like scheduling nodes for periodic tasks
  • One-Click Deploy – Deploy agents from the UI with environment variable injection
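
The swap loop described above is easiest to picture as a small decision function. The sketch below shows the general shape of a profit/stop-loss check; the JSON structures for clanker_bags.json and last_swap.json are assumptions made purely for illustration, and the real node implementations live in ts/src/core/execution/nodes/ and may differ.

// bag-checker-sketch.ts – illustrative profit/stop-loss decision, not the repo's actual node
import { readFileSync } from "node:fs";

// Assumed shapes – the JSON actually written by the nodes may look different.
interface Bag { entryPrice: number; takeProfitPct: number; stopLossPct: number }
interface LastSwap { token: string; currentPrice: number }

function decideAction(bagsPath: string, swapPath: string): "sell" | "hold" {
  const bags: Record<string, Bag> = JSON.parse(readFileSync(bagsPath, "utf8"));
  const swap: LastSwap = JSON.parse(readFileSync(swapPath, "utf8"));

  const bag = bags[swap.token];
  if (!bag) return "hold"; // nothing held for this token

  const changePct = ((swap.currentPrice - bag.entryPrice) / bag.entryPrice) * 100;
  if (changePct >= bag.takeProfitPct) return "sell"; // profit target hit
  if (changePct <= -bag.stopLossPct) return "sell";  // stop-loss hit
  return "hold";
}

console.log(decideAction("clanker_bags.json", "last_swap.json"));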

Quick Start

Prerequisites

  • Node.js 20+ and npm
  • Python 3.10–3.12 with a CUDA-capable GPU (for the inference service)
  • Docker (for running deployed agents)

1. Clone the repo

git clone https://github.com/ohnodev/obelisk-core.git
cd obelisk-core

2. Start the Inference Service (Python)

The inference service hosts the LLM model and serves it via REST API.

# Create Python venv and install dependencies
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Configure (optional β€” defaults work for local dev)
cp .env.example .env
# Edit .env if you want to set an API key or change the port

# Start the inference service
python3 -m uvicorn src.inference.server:app --host 127.0.0.1 --port 7780

The first run downloads the Qwen3-0.6B model (~600MB). Once running, test it:

curl http://localhost:7780/health
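
If you script the startup, you can poll the same endpoint and wait until the service answers before queueing work. A minimal Node/TypeScript sketch, assuming the /health route shown above returns HTTP 200 once the model is loaded and that an optional API key is sent as a Bearer token (both the response contract and the auth header are assumptions; check src/inference/server.py for the actual behaviour):

// wait-for-inference.ts – poll the inference service until it responds
const BASE_URL = process.env.INFERENCE_SERVICE_URL ?? "http://localhost:7780";
const API_KEY = process.env.INFERENCE_API_KEY; // optional for local dev

async function waitForInference(timeoutMs = 120_000): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(`${BASE_URL}/health`, {
        headers: API_KEY ? { Authorization: `Bearer ${API_KEY}` } : {},
      });
      if (res.ok) return; // service is up
    } catch {
      // service not listening yet – keep polling
    }
    await new Promise((resolve) => setTimeout(resolve, 2_000));
  }
  throw new Error(`Inference service at ${BASE_URL} did not become healthy in time`);
}

waitForInference().then(() => console.log("inference service is ready"));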

3. Start the Execution Engine (TypeScript)

cd ts
npm install
npm run build
cd ..

4. Start the UI

cd ui
npm install
npm run dev

Open http://localhost:3000 in your browser. You should see the visual workflow editor.

5. Run your first workflow

  1. The default workflow is pre-loaded; it includes a Telegram bot setup
  2. Click Queue Prompt (β–Ά) to execute the workflow
  3. The output appears in the output nodes on the canvas

Using PM2 (Recommended for Production)

We provide a pm2-manager.sh script that manages both services:

# Start everything
./pm2-manager.sh start

# Restart services (clears logs)
./pm2-manager.sh restart

# Stop everything
./pm2-manager.sh stop

# View status
./pm2-manager.sh status

# View logs
./pm2-manager.sh logs

PM2 keeps the inference service and execution engine running, auto-restarts on crashes, and manages log files.
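
If you prefer to drive PM2 directly rather than through the wrapper, an ecosystem file along these lines would keep both services alive. The process names, paths, and the engine's npm script are illustrative assumptions; pm2-manager.sh in the repo is the supported entry point.

// ecosystem.config.js – illustrative only; see pm2-manager.sh for the real process definitions
module.exports = {
  apps: [
    {
      name: "obelisk-inference",          // assumed process name
      script: "./venv/bin/python",
      args: "-m uvicorn src.inference.server:app --host 127.0.0.1 --port 7780",
      interpreter: "none",                // run the venv python directly
      autorestart: true,
    },
    {
      name: "obelisk-engine",             // assumed process name
      cwd: "./ts",
      script: "npm",
      args: "start",                      // assumed npm script for the built engine
      autorestart: true,
    },
  ],
};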

Agent Deployment

Agents are workflows packaged into Docker containers that run autonomously.

Building the Agent Image

docker build -t obelisk-agent:latest -f docker/Dockerfile .

Deploying from the UI

  1. Connect your wallet in the UI toolbar
  2. Design your workflow (or use the default)
  3. Click Deploy – the UI sends the workflow to your deployment service
  4. The agent runs in a Docker container on your machine
  5. Manage running agents at /deployments

Running an Agent Manually

docker run -d \
  --name my-agent \
  -e WORKFLOW_JSON='<your workflow JSON>' \
  -e AGENT_ID=agent-001 \
  -e AGENT_NAME="My Bot" \
  -e INFERENCE_SERVICE_URL=http://host.docker.internal:7780 \
  -e TELEGRAM_BOT_TOKEN=your_token \
  obelisk-agent:latest

See docker/README.md for full details on environment variables, resource limits, and Docker Compose.
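
The deploy flow from the UI boils down to the same docker run shown above, with the workflow injected through environment variables. A minimal sketch of how a deployment service might do that from Node follows; the image name and variable names come from this README, but the helper itself is hypothetical and is not the repo's deployment code, and the placeholder workflow object stands in for the JSON exported from the editor.

// deploy-agent-sketch.ts – illustrative wrapper around `docker run`
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function deployAgent(agentId: string, agentName: string, workflow: object): Promise<string> {
  const { stdout } = await run("docker", [
    "run", "-d",
    "--name", agentId,
    "-e", `WORKFLOW_JSON=${JSON.stringify(workflow)}`,
    "-e", `AGENT_ID=${agentId}`,
    "-e", `AGENT_NAME=${agentName}`,
    "-e", "INFERENCE_SERVICE_URL=http://host.docker.internal:7780",
    "-e", `TELEGRAM_BOT_TOKEN=${process.env.TELEGRAM_BOT_TOKEN ?? ""}`,
    "obelisk-agent:latest",
  ]);
  return stdout.trim(); // container ID printed by docker run -d
}

deployAgent("agent-001", "My Bot", {}).then((id) => console.log(`agent container started: ${id}`));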

Available Nodes

Node                       Description
Text                       Static text input/output
Inference                  Calls the LLM via the inference service
Inference Config           Configures model parameters (temperature, max tokens, thinking mode)
Binary Intent              Yes/no classification for conditional logic
Telegram Listener          Polls for incoming Telegram messages
TG Send Message            Sends messages via Telegram Bot API (supports quote-reply)
Memory Creator             Creates conversation summaries
Memory Selector            Retrieves relevant memories for context
Memory Storage             Persists memories to storage
Telegram Memory Creator    Telegram-specific memory summarization
Telegram Memory Selector   Telegram-specific memory retrieval
Scheduler                  Cron-based scheduling for periodic execution

Project Structure

obelisk-core/
├── src/inference/          # Python inference service (FastAPI + PyTorch)
│   ├── server.py           # REST API server
│   ├── model.py            # LLM loading and generation
│   ├── queue.py            # Async request queue
│   └── config.py           # Inference configuration
├── ts/                     # TypeScript execution engine
│   ├── src/
│   │   ├── core/           # Workflow runner, node execution
│   │   │   └── execution/
│   │   │       ├── runner.ts
│   │   │       └── nodes/  # All node implementations
│   │   └── utils/          # JSON parsing, logging, etc.
│   └── tests/              # Vitest test suite
├── ui/                     # Next.js visual workflow editor
│   ├── app/                # Pages (editor, deployments)
│   ├── components/         # React components (Canvas, Toolbar, nodes)
│   └── lib/                # Utilities (litegraph, wallet, API config)
├── docker/                 # Dockerfile and compose for agent containers
├── pm2-manager.sh          # PM2 process manager script
├── requirements.txt        # Python deps (inference service only)
└── .env.example            # Environment variable template

Configuration

Copy .env.example to .env:

cp .env.example .env

Key variables:

Variable                       Description                                            Default
INFERENCE_HOST                 Inference service bind address                         127.0.0.1
INFERENCE_PORT                 Inference service port                                 7780
INFERENCE_API_KEY              API key for inference auth (optional for local dev)   (none)
INFERENCE_DEVICE               PyTorch device (cuda, cpu)                             auto-detect
INFERENCE_SERVICE_URL          URL agents use to reach inference                      http://localhost:7780
TELEGRAM_DEV_AGENT_BOT_TOKEN   Default Telegram bot token for dev                     (none)
TELEGRAM_CHAT_ID               Default Telegram chat ID for dev                       (none)
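
A working local .env built from the table above looks something like this (leave INFERENCE_API_KEY empty for local dev, and fill in the Telegram values only if you use the Telegram nodes):

# .env – local development example based on the variables above
INFERENCE_HOST=127.0.0.1
INFERENCE_PORT=7780
# leave empty for local dev; set a value to require auth
INFERENCE_API_KEY=
# cuda or cpu; auto-detected when unset
INFERENCE_DEVICE=cuda
INFERENCE_SERVICE_URL=http://localhost:7780
TELEGRAM_DEV_AGENT_BOT_TOKEN=your_bot_token_here
TELEGRAM_CHAT_ID=your_chat_id_here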

For remote inference setup (GPU VPS), see INFERENCE_SERVER_SETUP.md.

Documentation

  • docker/README.md – agent container environment variables, resource limits, and Docker Compose
  • INFERENCE_SERVER_SETUP.md – remote inference setup (GPU VPS)
  • CONTRIBUTING.md – contribution guidelines

License

This project is licensed under the MIT License; see the LICENSE file for details.

Contributing

Contributions are welcome! See CONTRIBUTING.md for guidelines.


Built with ❤️ by The Obelisk
