Releases: freddy-schuetz/ai-launchkit
v1.18.0
v1.18.0 - code-server (VS Code in the Browser)
Release Date: 2026-01-26
✨ New Features
🖥️ code-server - VS Code in the Browser
Full Visual Studio Code experience running in your browser with AI coding assistant support.
- AI Coding Extensions: Support for Claude Code, OpenCode, Continue, and other AI assistants
- Persistent Workspace: Files, settings, and extensions persist across sessions
- Shared Folder Integration: Direct access to AI LaunchKit's `./shared` folder
- Full Terminal Access: Complete Linux terminal with sudo capabilities
- Extension Marketplace: Access to Open VSX Registry
Access: https://code.yourdomain.com
New Environment Variables:
- `CODESERVER_HOSTNAME` - Subdomain for code-server
- `CODESERVER_PASSWORD` - Login password (auto-generated)
- `CODESERVER_SUDO_PASSWORD` - Sudo password for terminal (auto-generated)
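The generated values are written to the stack's `.env`; a minimal sketch for looking them up after installation (assuming the default `~/ai-launchkit` checkout path):

```bash
# Print the code-server related entries from the .env file (the path is an assumption).
grep -E '^CODESERVER_(HOSTNAME|PASSWORD|SUDO_PASSWORD)=' ~/ai-launchkit/.env
```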
📋 Installation
Select "code-server" in the installation wizard under AI-Powered Development tools.
🔧 Post-Installation: Enable AI Coding Tools
Node.js for Claude Code Extension
code-server requires Node.js 18+ for the Claude Code extension. This is automatically installed via init script on container start.
Manual installation (if needed):
docker exec -it code-server bash -c "curl -fsSL https://deb.nodesource.com/setup_20.x | bash - && apt-get install -y nodejs"Install OpenCode CLI
```bash
docker exec -it code-server npm i -g opencode-ai
```
Install Ollama CLI (for ollama launch)
docker exec -it code-server bash -c "apt-get update && apt-get install -y zstd && curl -fsSL https://ollama.com/install.sh | sh"Configure OpenCode with Ollama Cloud
For cloud models (recommended for larger context windows):
- Get your API key from https://ollama.com/settings/keys
- Create the config file in the code-server terminal:
```bash
mkdir -p /config/.config/opencode
cat > /config/.config/opencode/opencode.json << 'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "model": "ollama/glm-4.7:cloud",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama Cloud",
      "options": {
        "baseURL": "https://ollama.com/v1",
        "apiKey": "YOUR_OLLAMA_API_KEY"
      },
      "models": {
        "glm-4.7:cloud": {
          "name": "GLM 4.7 Cloud"
        }
      }
    }
  }
}
EOF
```
- Start OpenCode:
```bash
opencode
```
Configure OpenCode with Local Ollama
For local models via AI LaunchKit's Ollama container:
```bash
mkdir -p /config/.config/opencode
cat > /config/.config/opencode/opencode.json << 'EOF'
{
  "$schema": "https://opencode.ai/config.json",
  "model": "ollama/qwen2.5:7b-instruct-q4_K_M",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama Local",
      "options": {
        "baseURL": "http://ollama:11434/v1"
      },
      "models": {
        "qwen2.5:7b-instruct-q4_K_M": {
          "name": "Qwen 2.5 7B"
        }
      }
    }
  }
}
EOF
```
n8n-MCP for Workflow Generation
If you have n8n-MCP enabled in AI LaunchKit, you can install the MCP client in code-server:
```bash
docker exec -it code-server npm i -g n8n-mcp
```
Configuration for OpenCode/Claude Code:
```bash
# In code-server terminal:
cat >> ~/.bashrc << 'EOF'
export N8N_MCP_URL="http://n8nmcp:3000"
export N8N_MCP_TOKEN="your-token-from-env"
export N8N_API_URL="http://n8n:5678"
export N8N_API_KEY="your-n8n-api-key"
EOF
source ~/.bashrc
```
This enables AI assistants to generate n8n workflows directly!
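To sanity-check these variables from the code-server terminal, a minimal curl against the n8n public API should return JSON (a sketch; it assumes `N8N_API_KEY` holds a valid n8n API key):

```bash
# Uses the N8N_API_URL and N8N_API_KEY exported above; expects a JSON list of workflows.
curl -s -H "X-N8N-API-KEY: $N8N_API_KEY" "$N8N_API_URL/api/v1/workflows?limit=5"
```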
🔗 Integration with AI LaunchKit Services
| Service | Access from code-server |
|---|---|
| n8n | http://n8n:5678 |
| n8n-MCP | http://n8nmcp:3000 |
| Ollama (local) | http://ollama:11434 |
| Ollama Cloud | https://ollama.com/v1 |
| Gitea | http://gitea:3000 |
| Shared Folder | /config/workspace/shared |
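A quick way to confirm these hostnames are reachable from the code-server terminal (a sketch using plain curl; any HTTP status code, even 401 or 404, means the service answered):

```bash
# Probe each internal service endpoint listed above.
for url in http://n8n:5678 http://n8nmcp:3000 http://ollama:11434 http://gitea:3000; do
  printf '%s -> %s\n' "$url" "$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url")"
done
```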
📚 Documentation
Full documentation added to README.md including:
- Initial setup guide
- AI extension configuration (Claude Code, OpenCode)
- Ollama Cloud and local model configuration
- n8n-MCP integration for workflow generation
- Troubleshooting guide
Full Changelog: v1.17.2...v1.18.0
v1.17.2
🎉 AI LaunchKit is n8n 2.0 Ready!
n8n 2.0 was released on December 15, 2025 - and AI LaunchKit supports it out of the box!
New install:
```bash
git clone https://github.com/freddy-schuetz/ai-launchkit && cd ai-launchkit && sudo bash ./scripts/install.sh
```
Existing install:
```bash
cd ai-launchkit && sudo bash ./scripts/update.sh
```
What's Changed
Improvements
- 📖 Update LaunchKit with n8n 2.0 readiness
Bug Fixes
- 🔧 Fix InvoiceNinja PHP memory limit (512M)
- 🐛 fix(vexa): add SQL type mismatch patch for transcription-collector
Full Changelog: v1.17.1...v1.17.2
v1.17.1
🐛 Bugfix
n8n Task Runner Version Detection
- Issue: Automatic version detection for n8n Task Runners failed with "Could not find version matching stable digest"
- Cause: n8n changed their Docker Hub build process between October and November 2025 - `stable` and version tags are now built separately with different digests
- Fix: Version detection now uses Docker image labels (`org.opencontainers.image.version`) instead of digest matching - 100% reliable regardless of build process
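For reference, the label-based check can be reproduced by hand roughly like this (a sketch; the exact image name and tag the script inspects may differ):

```bash
# Pull the stable tag and read its OCI version label.
docker pull -q n8nio/n8n:stable
docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version" }}' n8nio/n8n:stable
```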
v1.17.0
AI LaunchKit v1.17.0 Release Notes
🎉 New Services (4 Added)
🏠 Homepage - Service Dashboard
- Beautiful, modern dashboard for all your services
- Auto-discovers running Docker containers
- Real-time resource monitoring (CPU, RAM, disk)
- No authentication required (designed for quick access)
- Customizable layout with categories and widgets
- Automatic configuration via `generate_homepage_config.sh`
🔧 Gitea - Self-Hosted Git Server
- Complete GitHub/GitLab alternative
- Built-in CI/CD with Gitea Actions
- Package registry (Docker, npm, PyPI, Maven)
- SSH support on port 2222
- Issue tracking, wikis, and project management
- Minimal resource usage
✍️ DocuSeal - E-Signature Platform
- Open-source DocuSign alternative
- Legally binding electronic signatures
- Document templates with form builder
- Multi-signer workflows
- Complete audit trail
- REST API for automation
📚 Outline - Team Wiki with Self-Hosted Auth
- Modern knowledge base with real-time collaboration
- Fully self-hosted authentication using Dex identity provider
- No external dependencies (Google/Slack/etc. not required!)
- Markdown editor with slash commands
- Collections, templates, and granular permissions
- MinIO S3 storage included
🔧 Technical Improvements
Authentication
- Dex Identity Provider integrated for Outline
- Completely self-hosted OIDC authentication
- Admin user auto-generated during installation
- Support for multiple users via Dex configuration
Security Fixes
- Fixed Outline `SECRET_KEY` generation (must be 64 hex characters)
- Improved secret generation with new `hex:32` type
- Added proper permission handling for Homepage (uid 1000)
Configuration
- New environment variables:
  - `DEX_HOSTNAME` - Subdomain for Dex identity provider
  - `DEX_ADMIN_EMAIL` - Auto-configured admin email
  - `DEX_ADMIN_PASSWORD` - Generated admin password
- Homepage configuration auto-generated
Scripts
- Added `generate_homepage_config.sh` for automatic dashboard setup
- Added `setup_dex_config.sh` for Dex configuration
- Enhanced `generate_secrets.sh` with hex key generation
- Updated installation and update scripts for new services
📦 Installation
Fresh Installation:
```bash
git clone https://github.com/freddy-schuetz/ai-launchkit
cd ai-launchkit
sudo bash ./scripts/install.sh
```
Update Existing Installation:
```bash
cd ai-launchkit
sudo bash ./scripts/update.sh
```
🚀 Service URLs
After installation, your services will be available at:
- Homepage: `https://dashboard.yourdomain.com`
- Gitea: `https://git.yourdomain.com`
- DocuSeal: `https://sign.yourdomain.com`
- Outline: `https://outline.yourdomain.com`
- Dex (Auth): `https://auth.yourdomain.com`
📝 Important Notes
Outline Authentication
- First self-hosted wiki solution with completely local authentication
- No Google, Slack, or other external OAuth required
- Admin credentials displayed in installation report
- Additional users can be added via Dex configuration
Gitea SSH
- SSH runs on port 2222 to avoid conflicts
- Clone via: `ssh://git@git.yourdomain.com:2222/user/repo.git` (or use an SSH config entry as sketched below)
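If you prefer standard `git@` URLs, an SSH config entry can pin the port instead (a sketch; replace the hostname with your domain):

```bash
# Append a host entry so git uses port 2222 for the Gitea host automatically.
cat >> ~/.ssh/config << 'EOF'
Host git.yourdomain.com
    User git
    Port 2222
EOF
```

With that in place, `git clone git@git.yourdomain.com:user/repo.git` works without spelling out the port.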
Homepage Dashboard
- Public by default (no authentication)
- Recommended to use behind VPN for security
- Auto-discovers and displays all running services
🐛 Bug Fixes
- Fixed Homepage host validation issues
- Fixed Outline key length validation errors
- Fixed Dex configuration generation
- Fixed Homepage permission errors (500 Internal Server Error)
- Resolved Docker mount type conflicts (file vs directory)
🔄 Breaking Changes
None - All existing services continue to work as before
📚 Documentation
- Added comprehensive service documentation for all 4 new tools
- Includes n8n integration examples for each service
- Troubleshooting guides and best practices
- API usage examples and automation workflows
👥 Contributors
- E&P (Friedemann Schütz) - Lead Developer
🙏 Acknowledgments
Special thanks to the open-source projects:
- [Homepage](https://gethomepage.dev)
- [Gitea](https://gitea.com)
- [DocuSeal](https://www.docuseal.co)
- [Outline](https://www.getoutline.com)
- [Dex](https://dexidp.io)
📊 Stats
- Total Services: Now 64+ tools available
- New Lines of Code: ~2,500+
- Configuration Templates: 8 new files
- Docker Images: 5 new containers
Full Changelog: [v1.16.0...v1.17.0](https://github.com/freddy-schuetz/ai-launchkit/compare/v1.16.0...v1.17.0)
v1.16.0
📋 Release Notes for v1.16.0 - UPDATED
🎉 AI LaunchKit v1.16.0 - Paperless AI Suite & Enhanced Document Intelligence
🚀 New Features
📄 Paperless-AI Suite (New Service Profile: paperless-ai)
- paperless-gpt: Superior OCR using OpenAI Vision API or Ollama
- Dramatically improved text extraction from poor quality scans
- Excellent handwriting recognition
- Multi-language support with better accuracy than traditional OCR
- Web UI with manual review and OCR status tabs
- Basic Auth protection via Caddy
- paperless-ai: RAG-powered document chat and semantic search
- Natural language queries across all documents
- Automatic document analysis and tagging
- Ask questions like "What was my last invoice amount?"
- Semantic search finds documents by meaning, not just keywords
- Own authentication system (separate from Basic Auth)
🔧 Improvements
- Enhanced Documentation:
- Complete setup guide with token permission requirements
- Step-by-step API token generation with Django admin instructions
- Known issues and workarounds clearly documented
- Critical restart requirements after configuration changes
- Service Integration:
- Seamless integration with existing Paperless-ngx installation
- Shared PostgreSQL and Redis instances for efficiency
- Automatic Caddy configuration for both new services
🐛 Bug Fixes
- Fixed port binding issues in paperless-gpt (removed LISTEN_INTERFACE)
- Corrected service dependencies (paperless vs paperless-ngx)
- Added proper fallback handling for missing API tokens
⚠️ Known Issues & Required Workarounds
Critical Setup Requirements:
- API Token must have FULL permissions (not just read access)
- Full restart required after token configuration: `docker compose -p localai down` then `up -d`
- RAG fix required for paperless-ai: Must manually add PAPERLESS_URL variable
Upstream Bugs:
- paperless-gpt: Documents require at least one tag for update operations (GitHub #659)
- paperless-ai: Uses inconsistent ENV variable names (PAPERLESS_API_URL vs PAPERLESS_URL)
📝 Breaking Changes
None - fully backward compatible with v1.15.x installations
🔄 Installation & Upgrade Instructions
For new installations:
```bash
sudo bash scripts/install.sh
# Select "paperless-ai" from the service list
# Follow the detailed setup in the final report
```
For existing installations:
```bash
sudo bash scripts/update.sh
# Then follow these CRITICAL steps:
# 1. Generate API token with FULL permissions in Paperless Django Admin
# 2. Add to .env: PAPERLESS_API_TOKEN=your_token
# 3. MUST do full restart:
docker compose -p localai down
docker compose -p localai up -d
# 4. Configure paperless-ai web UI
# 5. Apply RAG fix:
docker exec paperless-ai sh -c "echo 'PAPERLESS_URL=http://paperless-ngx:8000' >> /app/data/.env"
docker compose -p localai restart paperless-ai
```
📚 Documentation Updates
- Updated README_Services.md with complete Paperless-AI suite details
- Enhanced final report with detailed setup sequence
- Added troubleshooting guide for common token/authentication issues
- Documented the critical `down` / `up -d` restart requirement
⚙️ Technical Details
- Default configuration uses OpenAI API (CPU-friendly)
- Alternative Ollama configuration available for local processing
- paperless-gpt protected with Basic Auth via Caddy
- paperless-ai uses its own authentication system
- Both services connect to paperless-ngx via internal Docker network
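A quick way to verify the API token and this internal connection is to query the Paperless API from inside the network (a sketch; it assumes curl is available in the paperless-ai image and that you substitute your real token):

```bash
# Expect a small JSON document listing; 401/403 usually means the token lacks full permissions.
docker exec paperless-ai sh -c \
  'curl -s -H "Authorization: Token YOUR_PAPERLESS_API_TOKEN" "http://paperless-ngx:8000/api/documents/?page_size=1"'
```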
🙏 Acknowledgments
- Thanks to [@icereed](https://github.com/icereed) for paperless-gpt
- Thanks to [@clusterzx](https://github.com/clusterzx) for paperless-ai
- Community testing and feedback for identifying setup issues
📌 Important Notes
This release requires more manual setup than usual due to upstream limitations:
- Paperless-ngx API tokens must be manually configured with full permissions
- The Django admin interface is required for proper token setup
- A full container restart (`down` then `up -d`, not just `restart`) is required after token changes
- paperless-ai requires a manual ENV variable addition for RAG functionality
Despite these setup requirements, the resulting functionality is worth it - transforming Paperless-ngx into an AI-powered document intelligence system.
Full Changelog: [v1.15.1...v1.16.0](https://github.com/freddy-schuetz/ai-launchkit/compare/v1.15.1...v1.16.0)
v1.15.1
🔐 Credentials Export & Download (v1.15.1)
Easy credential management after installation and updates with unified export functionality.
✨ What's New
Unified Export Scripts
- New `export_credentials.sh` - Export all credentials to hostname-based TXT file
- Enhanced `08_generate_vaultwarden_json.sh` - Now with integrated download
- Both scripts support `-d` flag for auto-download mode
- Hostname-based filenames: `credentials.YOUR-DOMAIN.txt` and `ai-launchkit-credentials.json`
- Optional download via temporary HTTP server (60 seconds)
- Auto-delete after download for security
- Clean plain-text output (no Unicode artifacts)
Installation & Update Integration
- Smart export menu automatically appears at end of installation/update
- Simple prompt if only credentials needed
- Full menu when Vaultwarden is active (both exports available)
- Automatic 5-second delay between downloads when choosing "Both"
- All downloads use `-d` flag for seamless experience
Export Options
- Vaultwarden JSON (for password manager import) - with download
- All Credentials TXT (readable text file) - with download
- Both (recommended if Vaultwarden installed) - sequential downloads
- Skip (can be run manually later if needed)
🔧 Technical Improvements
- Profile-based detection using `is_profile_active` function
- Consistent behavior across both export scripts
- Unicode box character cleanup for better text file readability
- Integrated HTTP server eliminates need for separate download script
- Updated `.gitignore` for `credentials.*.txt` files
📝 Notes
- Credentials files are automatically deleted after successful download
- Files persist if download is skipped (manual deletion reminder shown)
- Can be run manually anytime:
  ```bash
  sudo bash ./scripts/export_credentials.sh -d
  sudo bash ./scripts/08_generate_vaultwarden_json.sh -d
  ```
- Obsolete `download_credentials.sh` script removed
Full Changelog: v1.15.0...v1.15.1
v1.15.0
🎉 Release Notes v1.15.0
🚀 What's New
✨ Airbyte Data Integration Platform
AI LaunchKit now includes Airbyte, the leading open-source data integration platform with 600+ connectors for syncing data from APIs, databases, and SaaS tools.
Key Features:
- 🔄 600+ Pre-built Connectors - Google Ads, Meta Ads, TikTok, Stripe, Shopify, Salesforce, HubSpot, and more
- 📊 Marketing Analytics - Sync marketing data to PostgreSQL for analysis in Metabase
- 🤖 n8n Integration - Trigger syncs via API, automate reports, handle failures
- 🔐 Full Control - Self-hosted, GDPR-compliant, no vendor lock-in
- ⚡ Incremental Sync - Only sync changed data for efficiency
- 🎯 No-Code UI - Visual connector builder for non-developers
Architecture:
- Runs via `abctl` (Airbyte Command Line Tool) with Kind (Kubernetes in Docker)
- Separate destination PostgreSQL database (`marketing_data`) for synced data
- Seamless integration with Metabase for dashboards and n8n for automation
Access: https://airbyte.yourdomain.com
📋 System Requirements
⚠️ CRITICAL - New Requirement
inotify limits must be increased on some servers (especially high-density VPS) before installation:
```bash
# Check current limits
cat /proc/sys/fs/inotify/max_user_instances

# If below 8192, increase it:
sudo sysctl fs.inotify.max_user_instances=8192
echo "fs.inotify.max_user_instances=8192" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

# Restart Docker
sudo systemctl restart docker
```
Minimum Resources for Airbyte:
- 8GB RAM (16GB recommended for production)
- 4 CPU cores
- 20GB free disk space
🔧 Technical Changes
Port Configuration
Airbyte uses non-standard ports to avoid conflicts with Supabase:
- Web UI: Port `8001` (instead of 8000 to avoid Kong API Gateway)
- Destination DB: Port `5433` (instead of 5432 to avoid Supabase PostgreSQL)
Network Configuration
UFW Firewall Rules (auto-configured during installation):
```bash
sudo ufw allow from 172.19.0.0/16 to any port 5433 comment 'Kind to Airbyte Destination DB'
sudo ufw allow from 172.18.0.0/16 to any port 8001 comment 'Caddy to Airbyte'
```
Caddy Routing:
- Routes to Airbyte via Docker gateway IP (varies per server)
- Common IPs: `172.18.0.1`, `172.19.0.1`, `172.20.0.1` (see the sketch below to find yours)
- If you get 502 errors, see troubleshooting in README_Services.md
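One way to find the gateway IP on your server (a sketch; pick the network Caddy is attached to):

```bash
# List every Docker network together with its gateway IP.
for net in $(docker network ls -q); do
  docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Gateway}} {{end}}'
done
```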
Environment Variables
New variables in .env:
```
AIRBYTE_HOSTNAME=airbyte.yourdomain.com
AIRBYTE_PASSWORD=<auto-generated>
AIRBYTE_DESTINATION_DB_PASSWORD=<auto-generated>
```
📚 Documentation
New Documentation Files
- Comprehensive Airbyte Guide in `README_Services.md`:
  - Installation and setup
  - System requirements
  - Network configuration
  - n8n integration (3 methods with examples)
  - Metabase integration with SQL queries
  - Common source connector setup (Google Ads, Meta, Stripe, Shopify)
  - Advanced features (CDC, transformations, incremental sync)
  - Troubleshooting guide
  - Monitoring and best practices
🐛 Bug Fixes
- Fixed port conflicts with Supabase Kong (8000 → 8001)
- Fixed port conflicts with Supabase PostgreSQL (5432 → 5433)
- Fixed Docker network routing for Kind cluster
- Fixed systemd boot failures due to low inotify limits
🔄 Upgrade Instructions
For Existing Installations
Update your installation:
```bash
cd ~/ai-launchkit
sudo bash scripts/update.sh
```
The update script automatically:
- Pulls latest changes from GitHub
- Updates Docker images
- Restarts modified services
- Preserves all your data and configurations
If you want to add Airbyte:
- Increase inotify limits (see System Requirements above)
- Add Airbyte to your profiles:
  ```bash
  nano .env
  # Add "airbyte" to COMPOSE_PROFILES
  # Example: COMPOSE_PROFILES="n8n,metabase,airbyte"
  ```
- Run update: `sudo bash scripts/update.sh`
- Wait 15-30 minutes for Airbyte installation
- Access at `https://airbyte.yourdomain.com`
For Fresh Installations
```bash
git clone https://github.com/freddy-schuetz/ai-launchkit.git
cd ai-launchkit
sudo bash scripts/install.sh
```
During the installation wizard, select Airbyte when asked about services.
⚠️ Known Issues
- Server Compatibility: Airbyte (Kind/Kubernetes) may not work on all servers
  - ✅ Tested and working: Hetzner Cloud, AWS EC2, DigitalOcean, most KVM/dedicated servers
  - ⚠️ May fail on: OpenVZ/LXC containers, heavily customized kernels
  - If installation fails with "unable to create kind cluster", see troubleshooting guide
- 502 Gateway Errors: If Caddy shows 502 after installation, update Caddyfile with correct Docker gateway IP (see README_Services.md)
🎯 Use Cases
Perfect for:
- 📈 Marketing agencies consolidating client data (Google Ads, Meta, TikTok)
- 🏢 SaaS companies syncing customer data from Stripe, Intercom, HubSpot
- 🛒 E-commerce businesses analyzing Shopify, WooCommerce data
- 📊 Data teams building automated ETL pipelines
- 💼 Consultants creating unified analytics dashboards
Integration Examples:
- Sync Google Ads → PostgreSQL → Metabase dashboards
- Stripe payments → n8n webhook → Slack notifications
- Meta Ads + Google Ads → Combined performance reports
- Automated daily marketing reports via n8n
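As a concrete starting point for the n8n examples above, a sync can be triggered over HTTP roughly like this (a sketch; the endpoint path and authentication vary between Airbyte versions, and the connection ID is a placeholder from the Airbyte UI):

```bash
# Kick off a connection sync; run from an n8n HTTP Request node, a cron job, or the shell.
curl -s -X POST "https://airbyte.yourdomain.com/api/v1/connections/sync" \
  -H "Content-Type: application/json" \
  -u "your-email@yourdomain.com:${AIRBYTE_PASSWORD}" \
  -d '{"connectionId": "YOUR_CONNECTION_ID"}'
```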
🙏 Testing & Feedback
Tested on:
- ✅ Hetzner Cloud (Ubuntu 24.04, 32GB RAM)
- ✅ 24fire VPS (Ubuntu 24.04, 128GB RAM)
Please report any issues at: https://github.com/freddy-schuetz/ai-launchkit/issues
📖 Resources
- Installation Guide: [README.md](https://github.com/freddy-schuetz/ai-launchkit/blob/main/README.md)
- Services Documentation: [README_Services.md](https://github.com/freddy-schuetz/ai-launchkit/blob/main/README_Services.md)
- Airbyte Official Docs: https://docs.airbyte.com
- n8n Integration Examples: See README_Services.md → Airbyte section
🎊 Contributors
Special thanks to everyone testing and providing feedback on this major release!
Full Changelog: v1.14.0...v1.15.0
v1.14.0
🚀 AI LaunchKit v1.14.0 - Uptime Monitoring
✨ New Features
📊 Uptime Kuma - Beautiful Uptime Monitoring & Status Pages
- Professional uptime monitoring for all your AI LaunchKit services
- Multiple monitor types: HTTP/HTTPS, TCP Port, Ping, DNS, Docker Container, Keyword checks
- Public status pages for transparent service availability communication
- Push monitors for n8n workflow health tracking
- WebSocket support for real-time monitoring updates
- Own authentication system - first user becomes admin
- Docker container health monitoring via mounted Docker socket
- Profile: `uptime-kuma`
- Hostname: `status.yourdomain.com`
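For the push monitors mentioned above, an n8n workflow only needs to call the monitor's push URL on success (a sketch; the token is generated by Uptime Kuma when the Push monitor is created):

```bash
# Heartbeat call at the end of a workflow; missed heartbeats flag the monitor as down.
curl -fsS "https://status.yourdomain.com/api/push/YOUR_PUSH_TOKEN?status=up&msg=OK&ping="
```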
🎨 Improvements
Landing Page v1.14.0
- Added Uptime Kuma to System Management section
- New "Testing & Development Tools" category
- Improved service organization and descriptions
- Updated version number
Infrastructure
- Docker socket mounting for container health monitoring
- WebSocket proxy configuration in Caddy for live updates
- Security headers (HSTS, X-Frame-Options, X-Content-Type-Options)
📦 Installation
New Installation
```bash
# Select uptime-kuma during installation wizard
bash scripts/install.sh
```
Existing Installation
```bash
# Update to v1.14.0
git pull origin main
sudo bash scripts/update.sh
```
That's it! The update script handles everything automatically:
- ✅ Updates all services
- ✅ Adds new services to wizard
- ✅ Updates landing page
- ✅ Restarts changed services
- ✅ No manual .env editing required
🎯 Use Cases
Uptime Kuma Perfect For:
- Multi-service monitoring: Monitor all 60+ AI LaunchKit services from one dashboard
- Customer transparency: Create public status pages showing service availability
- n8n integration: Track workflow execution with push monitors
- Infrastructure monitoring: TCP ports, Docker containers, DNS resolution
- Content verification: Keyword checks ensure services return correct pages
- Certificate management: Get notified before SSL certificates expire
Monitoring Types Available:
- ✅ HTTP/HTTPS endpoints
- ✅ TCP port connectivity
- ✅ Docker container health
- ✅ DNS resolution
- ✅ ICMP ping
- ✅ Keyword/content presence
- ✅ Push heartbeats (for workflows)
📚 Documentation
- Setup Guide: First login creates admin account
- n8n Integration: Use Push monitors for workflow health tracking
- Status Pages: Create public transparency pages for customers
- API Access: Generate API tokens for automation
- Notification Channels: Discord, Telegram, Email, Slack, and 90+ more
🔗 Related Services
Monitoring Comparison:
- Grafana/Prometheus (Technical): CPU, RAM, container metrics
- Uptime Kuma (Business): Service availability, uptime %, response times
Both complement each other - use together for complete visibility!
🐛 Bug Fixes
- Improved Caddy configuration for WebSocket connections
- Enhanced security headers for monitoring services
🙏 Credits
- Uptime Kuma: louislam/uptime-kuma
📈 Statistics
- Total Services: 65+ integrated tools
- New in v1.14.0: 1 major service (Uptime Kuma)
- Categories: 16 service categories
Full Changelog: v1.13.0...v1.14.0
v1.13.0
📦 AI LaunchKit v1.13.0 Release Notes
🚀 Release: Webhook Testing Suite
Release Date: October 31, 2025
✨ New Features
Webhook Testing Suite
Complete testing and debugging toolkit for API development and n8n workflows:
- [Webhook Tester](https://github.com/tarampampam/webhook-tester) - Debug incoming webhooks
- Receive and inspect webhooks in real-time
- Persistent storage with dedicated Redis instance
- Auto-create sessions for UUID-based URLs
- Protected with Basic Authentication
- Perfect for debugging n8n webhook triggers
- [Hoppscotch](https://github.com/hoppscotch/hoppscotch) - Professional API testing platform
- REST, GraphQL, WebSocket, and SSE support
- Team collaboration features
- Collections and environments management
- All-in-One Docker image with integrated PostgreSQL
- Automatic database migration on startup
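To try the Webhook Tester from n8n or the shell, send any request to a session URL copied from its web UI (a sketch; the session UUID and Basic Auth credentials are placeholders):

```bash
# Post a test payload to a Webhook Tester session; it appears in the web UI in real time.
curl -s -X POST "https://webhook-test.yourdomain.com/YOUR-SESSION-UUID" \
  -u "you@yourdomain.com:YOUR_WEBHOOK_TESTER_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"event": "test", "source": "n8n"}'
```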
🔧 Improvements
- Enhanced Secret Generation: Automatic password hash generation for all Basic Auth protected services
- Improved Installation Wizard: Webhook Testing Suite added to service selection
- Better Documentation: Added comprehensive setup instructions for both tools
🛠️ Technical Details
- New Docker services: `webhook-tester`, `webhook-tester-redis`, `hoppscotch`, `hoppscotch-db`, `hoppscotch-migrate`
- Profile: `webhook-testing`
- Default URLs:
  - Webhook Tester: `webhook-test.yourdomain.com`
  - Hoppscotch: `api-test.yourdomain.com`
📝 Configuration
New environment variables:
```
WEBHOOK_TESTER_HOSTNAME=webhook-test.yourdomain.com
WEBHOOK_TESTER_USERNAME= # Auto-set to user email
WEBHOOK_TESTER_PASSWORD= # Auto-generated
HOPPSCOTCH_HOSTNAME=api-test.yourdomain.com
HOPPSCOTCH_DB_PASSWORD= # Auto-generated
HOPPSCOTCH_JWT_SECRET= # Auto-generated
HOPPSCOTCH_SESSION_SECRET= # Auto-generated
```
🐛 Known Issues
- Hoppscotch Community Edition authentication is limited (no email auth without complex setup)
- Tool is fully functional without login for basic API testing
📚 Documentation
- Updated README with tool descriptions
- Added troubleshooting guide for webhook testing
- Integration examples for n8n workflows
🙏 Acknowledgments
Special thanks to the open-source projects that make this possible:
- Webhook Tester by @tarampampam
- Hoppscotch team for their amazing API platform
Full Changelog: [v1.12.1...v1.13.0](https://github.com/freddy-schuetz/ai-launchkit/compare/v1.12.1...v1.13.0)
v1.12.1
v1.12.1 - Perplexica Migration to Official Image
🎯 What's New
Perplexica: Official Docker Image Migration
Perplexica now uses the official pre-built Docker image instead of building from source. This brings several improvements:
✅ Faster Installation
- No build time required (~5-10 minutes saved)
- Image pulls in seconds instead of building locally
✅ Simplified Configuration
- Web-UI based setup (no config.toml files)
- Configure AI providers through intuitive interface
- No container restarts needed for config changes
✅ Better User Experience
- Setup wizard on first login
- Easy provider switching (Ollama, OpenAI, Claude, Groq)
- Persistent configuration in Docker volumes
📝 Changes
Updated
- docker-compose.yml: Use `itzcrazykns1337/perplexica:latest` image
- scripts/06_final_report.sh: Update internal URLs and setup instructions
- README_Services.md: Document Web-UI configuration, remove config.toml references
Removed
- scripts/04a_setup_perplexica.sh: No longer needed (no build required)
- perplexica.config.toml: Configuration now via Web-UI
🔄 Migration for Existing Installations
If updating from a previous version:
- Run update: `sudo bash scripts/update.sh`
- Open Perplexica Web-UI: `https://perplexica.yourdomain.com`
- Complete setup wizard (re-enter API keys if needed)
Your chat history remains intact (stored in browser).
🐛 Bug Fixes
- Fixed internal API endpoint URLs (3001 → 3000)
- Removed obsolete build dependencies
📚 Documentation
- Updated Perplexica setup guide with Web-UI instructions
- Added troubleshooting for new architecture
- Improved n8n integration examples
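For example, an n8n HTTP Request node can call Perplexica's search endpoint over the internal Docker network (a sketch; field names may differ slightly between Perplexica versions):

```bash
# Ask Perplexica a web-search question from inside the Docker network.
curl -s -X POST "http://perplexica:3000/api/search" \
  -H "Content-Type: application/json" \
  -d '{"focusMode": "webSearch", "query": "What is AI LaunchKit?"}'
```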
Full Changelog: v1.12.0...v1.12.1