This repository uses a multi-service Docker setup: a single Dockerfile builds one image that can run any of the three Truffle services, selected by an environment variable.
```bash
# Build the multi-architecture image locally
docker buildx build --platform linux/amd64,linux/arm64 -t truffle --load .

# Run individual services
docker run -e SERVICE_NAME=slack_bot -p 3001:8000 truffle
docker run -e SERVICE_NAME=ingestor -p 3002:8000 truffle
docker run -e SERVICE_NAME=expert_api -p 3003:8000 truffle

# Or use docker-compose to run all services
docker-compose up
```

- Single Dockerfile: Installs dependencies for all services
- Environment-based selection: `SERVICE_NAME` determines which service runs
- Independent scaling: Deploy multiple containers of the same service
- Shared base image: Reduces total image size and maintenance
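The `SERVICE_NAME` dispatch can be pictured as a small entrypoint helper. This is an illustrative sketch, not the repository's actual entrypoint — the `python -m` module paths are assumptions:

```shell
#!/bin/sh
# pick_service maps SERVICE_NAME to the command the container would exec.
# Module paths below are assumptions for illustration.
pick_service() {
  case "$1" in
    slack_bot)  echo "python -m slack_bot" ;;
    ingestor)   echo "python -m ingestor" ;;
    expert_api) echo "python -m expert_api" ;;
    *)          echo "unknown service: $1" >&2; return 1 ;;
  esac
}

# A real entrypoint would then run: exec $(pick_service "$SERVICE_NAME")
pick_service "${SERVICE_NAME:-slack_bot}"
```

Unknown values fail fast with a non-zero exit, so a misconfigured container stops immediately instead of running the wrong service.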
- `SERVICE_NAME`: `slack_bot`, `ingestor`, or `expert_api`
Slack Bot:
```bash
SLACK_BOT_HOST=0.0.0.0
SLACK_BOT_PORT=8000
SLACK_BOT_AUTH_TOKEN=xoxb-your-token
EXPERT_API_URL=http://expert-api:8000
```
Ingestor:
```bash
INGESTOR_HOST=0.0.0.0
INGESTOR_PORT=8000
DATABASE_URL=postgresql+asyncpg://user:pass@host:5432/db
SLACK_API_TOKEN=xoxp-your-token
OPENAI_API_KEY=sk-your-key
INGESTION_CRON=0 */6 * * *
```
Expert API:
```bash
EXPERT_API_HOST=0.0.0.0
EXPERT_API_PORT=8000
DATABASE_URL=postgresql+asyncpg://user:pass@host:5432/db

SENTRY_DSN=https://your-sentry-dsn
LOG_LEVEL=INFO
DEBUG=false
```
- Copy `docker-compose.yml` and customize environment variables
- Create a `.env` file with sensitive values:

  ```bash
  SLACK_BOT_AUTH_TOKEN=xoxb-your-token
  SLACK_API_TOKEN=xoxp-your-token
  OPENAI_API_KEY=sk-your-key
  SENTRY_DSN=https://your-sentry-dsn
  POSTGRES_PASSWORD=secure-password
  ```

- Deploy:

  ```bash
  docker-compose up -d
  ```
All services expose `/health` endpoints on their respective ports for monitoring and load balancer health checks.
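The health endpoints can be probed with `curl`; the ports here follow the `docker run` examples above:

```shell
# check_health probes a service's /health endpoint and reports the result.
check_health() {
  if curl -fsS "http://localhost:$1/health" >/dev/null 2>&1; then
    echo "port $1: healthy"
  else
    echo "port $1: unhealthy"
  fi
}

# Probe all three services:
for port in 3001 3002 3003; do
  check_health "$port"
done
```

The same check works as a load-balancer target health probe or a `HEALTHCHECK` command inside the container.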
The Dockerfile uses:
- Multi-stage builds: Build dependencies separate from runtime (45-50% size reduction)
- Alpine Linux base: Python 3.13-alpine (~75MB smaller than slim)
- Multi-architecture support: Builds for both AMD64 (servers) and ARM64 (Apple Silicon)
- Security hardening: Non-root user, minimal runtime dependencies
- Fast package installation: `uv` for optimized Python package management
- Layer optimization: Optimal caching and `.dockerignore` exclusions
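As a sketch, the multi-stage pattern looks roughly like this — stage names, paths, and the `CMD` module are illustrative, not copied from the repository's Dockerfile:

```dockerfile
# --- Build stage: tooling that never ships in the final image (illustrative) ---
FROM python:3.13-alpine AS builder
WORKDIR /app
RUN pip install uv
COPY pyproject.toml ./
# Install dependencies into an isolated prefix
RUN uv pip install --target=/deps .

# --- Runtime stage: only the installed packages and source ---
FROM python:3.13-alpine
WORKDIR /app
ENV PYTHONPATH=/deps
COPY --from=builder /deps /deps
COPY src/ ./src/
# Security hardening: run as a non-root user
RUN adduser -D -H app
USER app
CMD ["python", "-m", "truffle"]
```

Because the builder stage is discarded, compilers and build caches never reach the runtime image, which is where most of the size reduction comes from.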
- Slack Bot: Can run multiple replicas behind a load balancer
- Expert API: Stateless, scales horizontally
- Ingestor: Single instance recommended (scheduled jobs), or use external job queue for scaling
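These scaling rules can be applied on the command line (`docker compose up -d --scale slack-bot=3`) or declared with `deploy.replicas`, as sketched below — service names are assumed to match the compose file, and other keys (environment, networks) are elided:

```yaml
services:
  slack-bot:
    image: ghcr.io/ORGANIZATION/truffle:latest
    deploy:
      replicas: 3   # stateless, safe to replicate behind a load balancer
  expert-api:
    image: ghcr.io/ORGANIZATION/truffle:latest
    deploy:
      replicas: 2   # stateless, scales horizontally
  ingestor:
    image: ghcr.io/ORGANIZATION/truffle:latest
    deploy:
      replicas: 1   # single instance avoids duplicate scheduled jobs
```

Note that replicated services must not publish fixed host ports, or the replicas will collide; let Compose assign ephemeral ports or front them with a proxy.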
```bash
# Create a buildx builder for multi-architecture builds
docker buildx create --use --name truffle-builder

# Verify builder supports multiple platforms
docker buildx inspect truffle-builder --bootstrap
```

Create a GitHub Personal Access Token with `write:packages` and `read:packages` scopes:
```bash
# Set your GitHub token (can be added to .envrc)
export GITHUB_TOKEN=ghp_your_token_here

# Login to ghcr.io
echo $GITHUB_TOKEN | docker login ghcr.io -u YOUR_USERNAME --password-stdin
```

```bash
# Build for both AMD64 (servers) and ARM64 (Apple Silicon) and push
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t ghcr.io/ORGANIZATION/truffle:latest \
  --push .
```
```bash
# Example:
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t ghcr.io/getsentry/truffle:latest \
  --push .
```

```bash
# Check that both architectures are available
docker buildx imagetools inspect ghcr.io/ORGANIZATION/truffle:latest
```

Update your `docker-compose.yml` to use the published image instead of building locally:
```yaml
services:
  slack-bot:
    image: ghcr.io/ORGANIZATION/truffle:latest
    # Remove: build: .
    environment:
      - SERVICE_NAME=slack_bot
      # ... other env vars
```

- Go to your GitHub repository → Packages → truffle
- Change package visibility to public if needed
- This allows others to pull without authentication
For CI/CD, add this to your GitHub Actions workflow:
```yaml
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v3

- name: Login to GitHub Container Registry
  uses: docker/login-action@v3
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}

- name: Build and push multi-architecture image
  uses: docker/build-push-action@v5
  with:
    context: .
    platforms: linux/amd64,linux/arm64
    push: true
    tags: ghcr.io/${{ github.repository }}:latest
```

- AMD64 (x86_64): For production servers, Railway, AWS, GCP, Azure
- ARM64 (aarch64): For Apple Silicon Macs, AWS Graviton, newer ARM servers
- Automatic selection: Docker automatically pulls the correct architecture
The optimized Dockerfile provides significant size reductions:
- Before optimization: ~720MB (single-stage, Python slim, all dependencies)
- After optimization: ~350-400MB (45-50% reduction)
- Multi-stage builds: Build dependencies removed from final image
- Alpine Linux base: ~75MB smaller than Python slim
- Non-root user: Security hardening with minimal overhead
- Optimized layer caching: Dependencies installed before source code
- Production-only packages: Dev dependencies excluded
- Efficient package manager: `uv` for faster, smaller installs
```dockerfile
# Dependencies change rarely - cached layer
COPY */pyproject.toml ./*/
RUN uv pip install -e .

# Source code changes frequently - separate layer
COPY src/ ./src/
```

This ensures dependency layers are cached and only rebuilt when `pyproject.toml` files change.