feat: add MiniMax provider support #2527
Hey, I’ve got the MiniMax provider working, but I’m hitting a wall with logging: MiniMax requests sometimes don’t show up in the logs no matter what I try. Also, I noticed /profiles is gone; is that part of some larger shift? Would love some quick guidance on how to fix these issues!
Pull request overview
Adds end-to-end MiniMax provider support across the proxy, chat UI, typed schemas/OpenAPI, pricing, and e2e tests.
Changes:
- Introduces MiniMax proxy routes + v2 adapter (streaming, tool calls, token estimation, error mapping).
- Adds MiniMax provider types/schemas and propagates the provider through shared constants, API client types, and OpenAPI.
- Adds chat UI integration, pricing migration, CI config, and e2e tests + WireMock mappings.
Reviewed changes
Copilot reviewed 60 out of 62 changed files in this pull request and generated 9 comments.
Summary per file:
| File | Description |
|---|---|
| platform/shared/routes.ts | Adds MiniMax route IDs for proxy endpoints. |
| platform/shared/model-constants.ts | Registers minimax provider, display name, and model marker patterns. |
| platform/shared/hey-api/clients/api/types.gen.ts | Adds generated MiniMax request/response types and extends provider unions. |
| platform/shared/hey-api/clients/api/sdk.gen.ts | Adds generated SDK methods for MiniMax chat completions endpoints. |
| platform/shared/hey-api/clients/api/index.ts | Re-exports MiniMax SDK functions and types. |
| platform/shared/chat-error.ts | Adds MiniMax error type constants. |
| platform/helm/e2e-tests/mappings/minimax-tool-persistence.json | WireMock stub for tool persistence scenario. |
| platform/helm/e2e-tests/mappings/minimax-token-cost-limit-test.json | WireMock stub for token/cost limits scenario. |
| platform/helm/e2e-tests/mappings/minimax-models-list.json | WireMock stub for model listing (used by tests). |
| platform/helm/e2e-tests/mappings/minimax-model-optimization-with-tools.json | WireMock stub for optimization with tools. |
| platform/helm/e2e-tests/mappings/minimax-model-optimization-short.json | WireMock stub for optimization short prompt scenario. |
| platform/helm/e2e-tests/mappings/minimax-model-optimization-no-tools.json | WireMock stub for optimization without tools. |
| platform/helm/e2e-tests/mappings/minimax-model-optimization-long.json | WireMock stub for optimization long prompt scenario. |
| platform/helm/e2e-tests/mappings/minimax-model-optimization-disabled.json | WireMock stub for optimization disabled scenario. |
| platform/helm/e2e-tests/mappings/minimax-compression-enabled.json | WireMock stub for TOON compression enabled scenario. |
| platform/helm/e2e-tests/mappings/minimax-compression-disabled.json | WireMock stub for TOON compression disabled scenario. |
| platform/helm/e2e-tests/mappings/minimax-blocks-tool-untrusted-data.json | WireMock stub for tool invocation policy blocking scenario. |
| platform/helm/e2e-tests/mappings/minimax-allows-regular-after-archestra.json | WireMock stub for mixed tool call sequence scenario. |
| platform/helm/e2e-tests/mappings/minimax-allows-archestra-untrusted-context.json | WireMock stub for untrusted context ordering scenario. |
| platform/frontend/src/lib/llmProviders/minimax.ts | Adds UI interaction mapper for MiniMax interactions. |
| platform/frontend/src/lib/interaction.utils.ts | Wires MiniMax interaction type into dynamic interaction handling. |
| platform/frontend/src/components/proxy-connection-instructions.tsx | Adds MiniMax proxy connection instructions (base URL + label). |
| platform/frontend/src/components/chat/model-selector.tsx | Adds MiniMax logo provider mapping for model selector. |
| platform/frontend/src/components/chat-api-key-form.tsx | Adds MiniMax provider config (name/icon/link/placeholder). |
| platform/e2e-tests/tests/api/llm-proxy/tool-result-compression.spec.ts | Includes MiniMax in shared compression test suite. |
| platform/e2e-tests/tests/api/llm-proxy/tool-persistence.spec.ts | Includes MiniMax in shared tool persistence test suite. |
| platform/e2e-tests/tests/api/llm-proxy/tool-invocation.spec.ts | Includes MiniMax in shared tool invocation policy tests. |
| platform/e2e-tests/tests/api/llm-proxy/token-cost-limits.spec.ts | Includes MiniMax in shared token/cost limit tests. |
| platform/e2e-tests/tests/api/llm-proxy/model-optimization.spec.ts | Includes MiniMax in shared model optimization tests. |
| platform/backend/src/types/llm-providers/minimax/tools.ts | Adds Zod schemas for MiniMax tools + tool choice. |
| platform/backend/src/types/llm-providers/minimax/messages.ts | Adds Zod schemas for MiniMax messages/tool calls/reasoning details. |
| platform/backend/src/types/llm-providers/minimax/index.ts | Exposes MiniMax schemas/types including stream chunk typing. |
| platform/backend/src/types/llm-providers/minimax/api.ts | Adds Zod request/response/headers schemas for MiniMax. |
| platform/backend/src/types/llm-providers/index.ts | Exports MiniMax provider types from the provider index. |
| platform/backend/src/types/interaction.ts | Adds MiniMax to interaction request/response unions + discriminated union. |
| platform/backend/src/types/chat-api-key.ts | Allows minimax as a supported chat API key provider. |
| platform/backend/src/tokenizers/index.ts | Selects tokenizer for MiniMax (tiktoken). |
| platform/backend/src/tokenizers/base.ts | Extends ProviderMessage union to include MiniMax message shape. |
| platform/backend/src/server.ts | Registers MiniMax schemas in OpenAPI registry. |
| platform/backend/src/routes/proxy/utils/cost-optimization.ts | Adds MiniMax message typing for cost optimization utilities. |
| platform/backend/src/routes/proxy/utils/adapters/minimax.ts | Adds legacy metrics adapter helper for MiniMax usage extraction. |
| platform/backend/src/routes/proxy/routesv2/minimax.ts | Adds MiniMax unified proxy routes and optional passthrough proxying. |
| platform/backend/src/routes/proxy/adapterV2/minimax.ts | Implements MiniMax v2 adapter: request/response/stream, SSE parsing, token estimation, tool compression, error extraction. |
| platform/backend/src/routes/proxy/adapterV2/index.ts | Exports MiniMax adapter factory. |
| platform/backend/src/routes/index.ts | Registers MiniMax proxy routes in backend routing. |
| platform/backend/src/routes/features.ts | Adds minimaxEnabled flag to feature endpoint response. |
| platform/backend/src/routes/chat/routes.models.ts | Adds MiniMax model fetcher (hardcoded models) and provider API key fallback. |
| platform/backend/src/routes/chat/routes.chat.ts | Adds MiniMax to smart default model selection. |
| platform/backend/src/routes/chat/errors.ts | Adds MiniMax error parsing + error-code mapping. |
| platform/backend/src/models/optimization-rule.ts | Adds MiniMax key to optimization rule provider maps. |
| platform/backend/src/llm-metrics.ts | Adds token reporting for MiniMax responses when usage exists. |
| platform/backend/src/database/migrations/meta/_journal.json | Registers the new MiniMax token price migration. |
| platform/backend/src/database/migrations/0131_add_minimax_token_prices.sql | Adds MiniMax token pricing rows. |
| platform/backend/src/config.ts | Adds MiniMax proxy and chat config knobs (base URLs, API key). |
| platform/backend/src/clients/models-dev-client.ts | Maps models.dev minimax to internal provider and source matching. |
| platform/backend/src/clients/llm-client.ts | Adds MiniMax provider detection, fast model, and model creators (direct + proxied). |
| platform/backend/src/clients/dual-llm-client.ts | Adds a MiniMax dual LLM client using OpenAI SDK compatibility. |
| docs/pages/platform-supported-llm-providers.md | Documents MiniMax provider setup and behavior. |
| docs/openapi.json | Updates generated OpenAPI with MiniMax schemas and endpoints. |
| .vscode/settings.json | Adds VS Code setting for Postman dotenv detection notification. |
| .github/values-ci.yaml | Adds MiniMax env vars for CI/e2e configuration. |
Applied all suggestions by Copilot!
hi there @Rutetid 👋 I just merged #2610, which simplified a few parts of adding a new LLM provider. This introduced a few merge conflicts in your PR; do you mind rebasing off of latest main? Additionally, can you run pnpm check:ci?
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
force-pushed from 09fe5c4 to a5a0167
Hi @joeyorlando, I've rebased onto the latest main to include the #2610 refactors. I've also confirmed that pnpm check:ci passes with all tests green. Ready for final review!
we don't need a database migration
alright, I'll remove it
I removed the migration, and the values will now default to $30 for cheaper models and $50 for others, as specified in backend/src/default-model-prices.ts.
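For illustration, a minimal sketch of what such default price entries could look like; the shape of backend/src/default-model-prices.ts and the pricing unit are assumptions here, only the $30/$50 split comes from this thread.

```ts
// Hypothetical sketch; the real default-model-prices.ts shape is assumed.
interface DefaultModelPrice {
  provider: string;
  modelPattern: RegExp; // which model IDs the default applies to
  pricePerMillionTokens: number; // unit is an assumption
}

const DEFAULT_MODEL_PRICES: DefaultModelPrice[] = [
  // cheaper models default to $30
  { provider: "minimax", modelPattern: /mini|lite/i, pricePerMillionTokens: 30 },
  // all remaining models default to $50
  { provider: "minimax", modelPattern: /.*/, pricePerMillionTokens: 50 },
];
```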
[Provider] Add MiniMax Support
Closes #1855
/claim #1855
Summary
This PR adds complete support for MiniMax AI.
Changes
Backend Integration
Core Implementation:
✅ Adapter (`backend/src/routes/proxy/adapterV2/minimax.ts`)
- Token estimation for streaming (MiniMax returns `usage: null`)
- Handles the `reasoning_details` array

✅ Type Definitions (`backend/src/types/llm-providers/minimax/` - 4 files)
- Schemas for `reasoning_details` thinking content
- Allows `role: ""` (empty string) - a MiniMax quirk

✅ Routes (`backend/src/routes/proxy/routesv2/minimax.ts`)
✅ Database Migration (backend/src/database/migrations/0131_add_minimax_token_prices.sql)
✅ Models Dev Client Integration (backend/src/clients/models-dev-client.ts)
Frontend Integration
✅ Interaction Handler (frontend/src/lib/llmProviders/minimax.ts)
✅ UI Components
Key MiniMax Differences
1. No Models Endpoint
MiniMax doesn't provide a `/v1/models` endpoint, so we serve a hardcoded model list, as sketched below.
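A minimal sketch of the idea; the model IDs below are illustrative, and the exact list lives in backend/src/routes/chat/routes.models.ts.

```ts
// No /v1/models endpoint exists, so the model list is static.
// Model IDs here are illustrative, not the exact hardcoded set.
const MINIMAX_MODELS = ["MiniMax-M1", "MiniMax-Text-01"];

async function fetchMinimaxModels(): Promise<string[]> {
  return MINIMAX_MODELS; // no network call needed
}
```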
2. Streaming Usage Data

Problem: the MiniMax streaming API returns `"usage": null` in every chunk, so no token counts are available.

Solution: implemented local token estimation using tiktoken; a sketch follows below.
3. Reasoning Details
MiniMax supports extended thinking via a `reasoning_details` array, enabled by passing `extra_body: { reasoning_split: true }`; a request sketch follows below.
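A hedged sketch of a request enabling reasoning splitting; beyond the `extra_body.reasoning_split` flag named above, the field placement and the `reasoning_details` item shape are assumptions, not a confirmed MiniMax contract.

```ts
// Illustrative request body; exact MiniMax field names are assumptions
// beyond extra_body.reasoning_split mentioned above.
const body = {
  model: "MiniMax-M1",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
  stream: true,
  extra_body: { reasoning_split: true }, // ask for split-out thinking
};

// The assistant message may then carry a reasoning_details array, e.g.
// { role: "assistant", content: "...", reasoning_details: [{ text: "..." }] }
```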
4. Stream Chunk Role Field

MiniMax sends `"role": ""` (an empty string) in some stream chunks instead of `"role": "assistant"`. The type schema was updated to allow both; a minimal Zod sketch follows below.
Streaming Support
Feature Completeness
LLM Proxy ✅
Chat ✅
Testing Infrastructure
- Tool invocation policy tests (`tool-invocation.spec.ts`)
- Tool persistence tests (`tool-persistence.spec.ts`)
- Tool result compression tests (`tool-result-compression.spec.ts`)
- Model optimization tests (`model-optimization.spec.ts`)
- Token/cost limit tests (`token-cost-limits.spec.ts`)
- WireMock mappings (`helm/e2e-tests/mappings/`)
- CI environment configuration (`.github/values-ci.yaml`)

MiniMax is wired into each shared suite; a sketch of the pattern follows.
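For context, a hedged sketch of how provider-parameterized shared suites commonly look; the actual structure of these spec files is assumed, not quoted.

```ts
import { test } from "@playwright/test";

// Hypothetical shape: shared suites iterate over a provider list,
// so adding MiniMax is a one-entry change. Names are illustrative.
const PROVIDERS = ["openai", "anthropic", "minimax"] as const;

for (const provider of PROVIDERS) {
  test.describe(`tool persistence [${provider}]`, () => {
    // shared assertions run against each provider's proxy route
  });
}
```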
API Key Instructions

Obtaining an API Key
Visit the MiniMax Platform and generate an API key (a minimum $25 recharge is required to use the API).
Demo Video
minimax.mp4
Testing Done
- Non-streaming requests verified with `curl` against the actual API
- Streaming requests verified with `curl` against the actual API
- `pnpm lint` passes
- `pnpm type-check` passes

Documentation
Updated `platform-supported-llm-providers.md`.