feat(providers): add Groq as new LLM provider #2586
Conversation
- Add Groq provider types (api.ts, messages.ts, tools.ts, index.ts)
- Add Groq to SupportedProviders schema
- Add Groq to model-constants with fastest/best model patterns
- Add Groq client integration with @ai-sdk/groq
- Add Groq config (API key and base URL)

Groq is OpenAI-compatible and uses LPU inference for fast responses. Base URL: https://api.groq.com/openai/v1

Relates to archestra-ai#1856
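The "fastest/best model patterns" mentioned above can be sketched roughly as follows. This is an illustrative assumption, not the PR's actual model-constants code, and the regex patterns and model IDs are examples only:

```typescript
// Hypothetical sketch of "fastest"/"best" model selection for Groq.
// The patterns and model IDs below are illustrative assumptions,
// not the actual values added in model-constants.
const groqModelPatterns = {
  // Prefer small "instant" models for latency-sensitive calls.
  fastest: /8b.*instant/i,
  // Prefer large "versatile" models for quality-sensitive calls.
  best: /70b.*versatile/i,
};

function pickModel(
  available: string[],
  tier: keyof typeof groqModelPatterns
): string | undefined {
  return available.find((id) => groqModelPatterns[tier].test(id));
}

// Example with hypothetical model IDs:
const models = ['llama-3.1-8b-instant', 'llama-3.3-70b-versatile'];
pickModel(models, 'fastest'); // → 'llama-3.1-8b-instant'
pickModel(models, 'best');    // → 'llama-3.3-70b-versatile'
```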
- Add groq to proxied model creators
- Add groq to model fetchers with fetchGroqModels function
- Add groq to seed data (env vars, display names)
- Add groq to optimization rules (prices, rules)
- Add groq to error handlers (using OpenAI-compatible parsing)
- Add groq to API key fallbacks

All TypeScript type checks pass.

Relates to archestra-ai#1856
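The API key fallback step above can be sketched like this. `GROQ_API_KEY` as a fallback name is an assumption based on common provider conventions; the PR's actual fallback list may differ:

```typescript
// Sketch of an env-based API key fallback chain for Groq.
// GROQ_API_KEY as the fallback name is an assumption; only
// ARCHESTRA_CHAT_GROQ_API_KEY is named in the PR description.
function resolveGroqApiKey(
  env: Record<string, string | undefined>
): string | undefined {
  const candidates = ['ARCHESTRA_CHAT_GROQ_API_KEY', 'GROQ_API_KEY'];
  for (const name of candidates) {
    const value = env[name]?.trim();
    if (value) return value; // first non-empty candidate wins
  }
  return undefined;
}

resolveGroqApiKey({ GROQ_API_KEY: 'gsk_example' }); // → 'gsk_example'
resolveGroqApiKey({});                              // → undefined
```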
Work in progress - adding Groq provider configs to e2e test suite.
- tool-invocation.spec.ts
- tool-persistence.spec.ts
- token-cost-limits.spec.ts
- model-optimization.spec.ts
- tool-result-compression.spec.ts

Groq uses an OpenAI-compatible API with LPU inference.
Includes:
- OpenAI-compatible API details
- Base URL and authentication format
- Environment variables
- Popular models
- Link for getting an API key
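The authentication format documented above follows the standard OpenAI-compatible Bearer-token scheme, which can be sketched as:

```typescript
// Groq's OpenAI-compatible endpoints authenticate with a Bearer token,
// e.g. GET https://api.groq.com/openai/v1/models
// This helper is a sketch, not code from the PR itself.
function groqHeaders(apiKey: string): Record<string, string> {
  return {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  };
}

groqHeaders('gsk_example').Authorization; // → 'Bearer gsk_example'
```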
@CLAassistant check

recheck
joeyorlando left a comment:
hi there 👋
Thanks for your contribution. I'm going to close out this PR. It is missing a short demo video, and the e2e tests will not pass, as there are no wiremock .json response files added for groq.
Please see our Contribution Guidelines, in particular the "Contribute Responsibly" section.
Summary
Adds Groq as a new LLM provider. Groq provides ultra-fast inference using custom LPU hardware with an OpenAI-compatible API.
Changes
Type Definitions
src/types/llm-providers/groq/ - full type definitions (api, messages, tools)

Backend Integration

Added groq to: proxiedModelCreators, preferredSourcePrefixes, providerEnvVars, displayNames, pricesByProvider, rulesByProvider, providerParsers, providerMappers, envApiKeyFallbacks, and modelFetchers (new fetchGroqModels function)

Shared Constants

Added groq to SupportedProviders enum and discriminators

E2E Tests
Added Groq configs to all LLM proxy test files:
Documentation
platform-supported-llm-providers.md

API Key
Get free API key at console.groq.com/keys
Environment Variables
ARCHESTRA_GROQ_BASE_URL - https://api.groq.com/openai/v1
ARCHESTRA_CHAT_GROQ_API_KEY

Testing

All TypeScript type checks pass (pnpm type-check)

Notes
Groq uses an OpenAI-compatible API, so error handling reuses the existing OpenAI parsers/mappers. All integration points follow established patterns.
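OpenAI-compatible providers return errors in the shape `{ "error": { "message", "type", "code" } }`, which is why a single parser can be shared. A minimal sketch of such a parser (not the codebase's actual providerParsers implementation):

```typescript
// Sketch of parsing an OpenAI-compatible error payload, e.g.
//   { "error": { "message": "Invalid API Key", "code": "invalid_api_key" } }
// This is illustrative, not the PR's actual providerParsers code.
interface ProviderError {
  message: string;
  type?: string;
  code?: string;
}

function parseOpenAICompatibleError(body: unknown): ProviderError | undefined {
  if (typeof body !== 'object' || body === null) return undefined;
  const err = (body as { error?: unknown }).error;
  if (typeof err !== 'object' || err === null) return undefined;
  const { message, type, code } = err as Record<string, unknown>;
  if (typeof message !== 'string') return undefined; // not an error payload
  return {
    message,
    type: typeof type === 'string' ? type : undefined,
    code: typeof code === 'string' ? code : undefined,
  };
}

// A payload without the "error" envelope is not treated as an error:
parseOpenAICompatibleError({ ok: true }); // → undefined
```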
Closes #1856