The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.
Updated Feb 22, 2026 - TypeScript
Voice-native document intelligence using Gemini, ElevenLabs STT/TTS, and Datadog observability: turning text documents into spoken conversations.
Zero-code LLM security & observability proxy. Real-time prompt injection detection, PII scanning, and cost control for OpenAI-compatible APIs. Built in Rust.
AI Chat Watch (AICW) - a free, open-source tool for GEO marketers that tracks what and how AI models mention brands, products, and companies.
OpenTelemetry wrapper for Claude Code CLI that logs tool calls, token usage, costs, and execution traces to Logfire, Sentry, Honeycomb, or Datadog. Drop-in replacement that swaps the 'claude' command for 'claudia'.
Create an evaluation framework for your LLM based app. Incorporate it into your test suite. Lay the monitoring foundation.
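As a minimal sketch of what such an in-test-suite evaluation check might look like (the `call_llm` function, metric name, and expected answer are illustrative assumptions, not part of any listed project):

```python
# Sketch of an LLM evaluation check embedded in a test suite.
# `call_llm` is a hypothetical stand-in for your LLM-backed app.

def call_llm(prompt: str) -> str:
    # Placeholder: a real suite would call your application here.
    return "Paris is the capital of France."

def exact_substring_metric(output: str, expected: str) -> bool:
    """Pass if the expected fact appears in the model output."""
    return expected.lower() in output.lower()

def test_capital_question():
    output = call_llm("What is the capital of France?")
    assert exact_substring_metric(output, "Paris")
```

Running this under pytest gives a regression signal on model behavior, and the same metric function can later feed a monitoring pipeline.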
Where did your tokens go? Spans, latency percentiles, alerts.
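A hedged sketch of the latency-percentile side of such a dashboard, assuming span durations have already been collected (the sample values and nearest-rank method are illustrative):

```python
# Sketch: computing latency percentiles from recorded span durations.
# Durations are illustrative; a real system would read them from traces.
durations_ms = [120, 95, 340, 210, 180, 1500, 260, 110, 405, 90]

def percentile(values, p):
    """Nearest-rank percentile (p in 0..100)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

p50 = percentile(durations_ms, 50)   # median latency
p95 = percentile(durations_ms, 95)   # tail latency, where alerts usually fire
```

Alerting on p95 or p99 rather than the mean is what surfaces the slow tail that users actually feel.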
AI model health monitor for LLM apps: runtime checks for drift, hallucination risk, latency, and JSON/format quality on any OpenAI, Anthropic, or local client.
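One of the simpler checks in that category, a JSON/format-quality gate, can be sketched as follows (the function name and object-only policy are assumptions for illustration):

```python
import json

def json_format_ok(output: str) -> bool:
    """Runtime check: does the model output parse as a JSON object?"""
    try:
        return isinstance(json.loads(output), dict)
    except json.JSONDecodeError:
        return False
```

A monitor would run this on every response and track the pass rate over time; a sudden drop is an early signal of format drift.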
Real-time LLM token & cost monitoring for OpenClaw agents with budget tracking, optimization hints, and a REST API for agent self-improvement.
OpenLLM Monitor is a plug-and-play, real-time observability dashboard for monitoring and debugging LLM API calls across OpenAI, Ollama, OpenRouter, and more. It tracks tokens, latency, cost, and retries, and lets you replay prompts. Fully open-source and self-hostable.
The hundred-eyed watcher for your LLM providers. Monitor uptime, TTFT, TPS, and latency across OpenAI, Anthropic, Azure, Bedrock, Ollama, LM Studio, and 100+ providers through a single dashboard. Benchmark, compare, and get alerts, all self-hosted.
ModelPulse helps maintain model reliability and performance by providing early warning signals for emerging issues, allowing teams to address them before they significantly impact users.
tmam is an open-source observability platform that gives you deep, real-time visibility into your entire AI stack, from every LLM call and agent trace to GPU utilization and vector database performance.
Real-time observability for AI agents. Track costs, monitor errors, and replay prompts.
Advanced Real-time HUD for Gemini CLI. Resource & Context Monitoring.
It shows how to monitor LLMs.
Verify AI outputs with llmverify for Node.js, ensuring safety and accuracy without sacrificing privacy.
Transform documents into interactive voice conversations with VoiceDoc Agent, powered by Google Cloud, ElevenLabs, and Datadog.
Detect subtle shifts in AI model performance with ModelPulse, ensuring consistent outputs and enhancing user experience over time.
Observe and route LLM apps with Brokle, your open-source solution for production-grade AI observability and transparent infrastructure management.