# llm-monitoring

Here are 20 public repositories matching this topic...

OpenLLM Monitor is a plug-and-play, real-time observability dashboard for monitoring and debugging LLM API calls across OpenAI, Ollama, OpenRouter, and more. It tracks tokens, latency, cost, and retries, and lets you replay prompts. Fully open-source and self-hostable.

  • Updated Jun 26, 2025
  • JavaScript
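The per-call metrics this kind of dashboard collects (tokens, latency, cost) can be sketched in a few lines. This is a minimal illustration, not OpenLLM Monitor's actual code: the model name, token counts, and per-1K-token prices below are hypothetical placeholders, and real pricing depends on the provider and model.

```python
import time
from dataclasses import dataclass

# Hypothetical per-1K-token prices for illustration only;
# real values vary by provider and model.
PRICE_PER_1K = {"prompt": 0.0005, "completion": 0.0015}

@dataclass
class CallRecord:
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_s: float
    cost_usd: float

def record_call(model: str, prompt_tokens: int,
                completion_tokens: int, latency_s: float) -> CallRecord:
    """Build one monitoring record for a completed LLM API call."""
    cost = (prompt_tokens / 1000) * PRICE_PER_1K["prompt"] \
         + (completion_tokens / 1000) * PRICE_PER_1K["completion"]
    return CallRecord(model, prompt_tokens, completion_tokens,
                      latency_s, round(cost, 6))

if __name__ == "__main__":
    start = time.monotonic()
    # ... the real LLM request would happen here ...
    rec = record_call("example-model", prompt_tokens=200,
                      completion_tokens=800,
                      latency_s=time.monotonic() - start)
    print(rec)
```

A real monitor would append each `CallRecord` to a store and aggregate over time; retry counts and prompt replay would hang off the same record.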

The hundred-eyed watcher for your LLM providers. Monitor uptime, TTFT (time to first token), TPS (tokens per second), and latency across OpenAI, Anthropic, Azure, Bedrock, Ollama, LM Studio, and 100+ providers through a single dashboard. Benchmark, compare, and get alerts, all self-hosted.

  • Updated Feb 16, 2026
  • Python
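TTFT and TPS both fall out of timing a streamed response: TTFT is the delay until the first token arrives, and TPS is the token count divided by the total stream duration. A minimal sketch, using a simulated token stream in place of a real provider client (the `fake_stream` generator is an assumption for demonstration):

```python
import time

def stream_metrics(token_stream):
    """Consume a token iterator and compute TTFT (time to first token,
    seconds) and TPS (tokens per second over the whole stream)."""
    start = time.monotonic()
    ttft = None
    count = 0
    for _ in token_stream:
        if ttft is None:
            ttft = time.monotonic() - start  # first token observed
        count += 1
    total = time.monotonic() - start
    tps = count / total if total > 0 else float("inf")
    return {"ttft_s": ttft, "tps": tps, "tokens": count}

def fake_stream(n_tokens=20, delay=0.01):
    """Simulated provider stream; a real client would yield
    chunks from an SSE or websocket response instead."""
    for _ in range(n_tokens):
        time.sleep(delay)
        yield "tok"

if __name__ == "__main__":
    print(stream_metrics(fake_stream()))
```

Measured per provider on a schedule, these numbers are what a dashboard like this benchmarks and alerts on.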

ModelPulse helps maintain model reliability and performance by providing early warning signals for reliability and performance issues, allowing teams to address them before they significantly impact users.

  • Updated Jan 20, 2026
  • Python
