Slimmed, cleaned, and fine-tuned oh-my-opencode fork that consumes far fewer tokens
Updated Feb 11, 2026 - TypeScript
🦖 X—LLM: Cutting Edge & Easy LLM Finetuning
A Python deep learning framework with lazy evaluation, automatic differentiation, and a PyTorch-like API. Features include neural network modules, data loading, training utilities, model serving, and integrations with MLflow, W&B, ONNX, and Jupyter.
PHP library for interacting with an AI platform provider.
A robust Node.js proxy server that automatically rotates API keys for Gemini and OpenAI APIs when rate limits (429 errors) are encountered. Built with zero dependencies and comprehensive logging.
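The rotate-on-429 pattern the proxy above describes can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code; the `KeyPool` type and `nextKey` function are assumptions made for the example.

```typescript
// Minimal sketch of key rotation on rate limits (HTTP 429), as an
// illustration of the technique; not the repository's implementation.
type KeyPool = { keys: string[]; index: number };

// Return the key to use for the next request. When the previous request
// failed with 429 (Too Many Requests), advance past the throttled key.
function nextKey(pool: KeyPool, lastStatus?: number): string {
  if (lastStatus === 429) {
    pool.index = (pool.index + 1) % pool.keys.length; // cycle to the next key
  }
  return pool.keys[pool.index];
}
```

A real proxy would wrap this in request handling, retry the failed call with the new key, and log each rotation.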
A tool to keep tabs on your Cerebras Code usage limits in real time
Ultra-fast, customizable AI voice dictation in any active app on Windows (macOS and Linux coming soon)
🦖 X—LLM: Simple & Cutting Edge LLM Finetuning
This repository features an example of how to use the xllm library, including a solution to a common type of assessment given to LLM engineers.
Matrix decomposition and multiplication on the Cerebras Wafer-Scale Engine (WSE) architecture
A solution that prioritizes patients by urgency, reducing wait times and ensuring that those who need immediate care receive it.
🚀 MCP Gateway with Semantic Routing — One API for all your MCP tools. Natural language in → right tool executed. Blazing fast (Cerebras) + always reliable (multi-LLM fallback).
AI-powered geopolitical news intelligence platform. Ingests 100K+ daily events from GDELT, stores in MotherDuck (DuckDB), orchestrates with Dagster, and features an AI chat interface with Text-to-SQL. Full data engineering stack at $0/month.
Conversational AI model for open-domain dialogs