feat: add MiniMax as a first-class LLM provider (M2.7/M2.5, 204K context)#153
octo-patch wants to merge 1 commit into Mirrowel:main.
Adds MiniMaxProvider to the rotator library so the proxy can
auto-discover and rotate MiniMax API keys out of the box.
Changes
-------
* src/rotator_library/providers/minimax_provider.py (new)
- MiniMaxProvider.get_models() fetches live model list from
https://api.minimax.io/v1/models and falls back to static list
(MiniMax-M2.7, MiniMax-M2.7-highspeed, MiniMax-M2.5,
MiniMax-M2.5-highspeed, all 204K context) when unreachable.
- has_custom_logic() = True + acompletion() clamps temperature <= 0
to 0.01 (MiniMax rejects temperature=0); strips internal proxy keys.
* src/proxy_app/provider_urls.py
- Added minimax: https://api.minimax.io/v1 to PROVIDER_URL_MAP.
* .env.example
- Added MiniMax configuration block.
* tests/test_minimax_provider.py (new)
- 18 unit tests + 3 integration tests.
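The clamping and key-stripping step described above can be sketched as follows. This is a minimal illustration, not the actual implementation: `prepare_kwargs` is a hypothetical helper name, while `_MINIMAX_TEMPERATURE_MIN` and the stripped keys are taken from the review comments below.

```python
# Sketch of the pre-forwarding step: clamp non-positive temperatures to the
# MiniMax minimum and drop proxy-internal keys before calling LiteLLM.
# `prepare_kwargs` is illustrative; the real logic lives inside
# MiniMaxProvider.acompletion().
_MINIMAX_TEMPERATURE_MIN = 0.01
_INTERNAL_KEYS = ("credential_identifier", "transaction_context")

def prepare_kwargs(**kwargs):
    temperature = kwargs.get("temperature")
    if temperature is not None and temperature <= 0:
        # MiniMax rejects temperature=0, so raise it to the allowed minimum.
        kwargs["temperature"] = _MINIMAX_TEMPERATURE_MIN
    for key in _INTERNAL_KEYS:
        # Rotation bookkeeping keys must not reach the upstream API.
        kwargs.pop(key, None)
    return kwargs
```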
📝 Walkthrough
The pull request integrates MiniMax AI provider support by adding environment configuration, an API endpoint mapping, and a new MiniMaxProvider with model discovery, temperature clamping, and tests.
Sequence Diagram
sequenceDiagram
participant Caller
participant MiniMaxProvider
participant LiteLLM
participant MiniMaxAPI as MiniMax API
Caller->>MiniMaxProvider: acompletion(temperature=0, ...)
MiniMaxProvider->>MiniMaxProvider: Clamp temperature if ≤ 0
MiniMaxProvider->>MiniMaxProvider: Remove internal keys<br/>(credential_identifier, transaction_context)
MiniMaxProvider->>LiteLLM: acompletion(temperature=0.01, ...)
LiteLLM->>MiniMaxAPI: POST /chat/completions
MiniMaxAPI-->>LiteLLM: ModelResponse
LiteLLM-->>MiniMaxProvider: ModelResponse
MiniMaxProvider-->>Caller: ModelResponse
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
| Filename | Overview |
|---|---|
| src/rotator_library/providers/minimax_provider.py | New MiniMaxProvider with dynamic model discovery and temperature clamping; hardcodes the model discovery URL so MINIMAX_API_BASE overrides are silently ignored during get_models calls. |
| tests/test_minimax_provider.py | Comprehensive unit + integration test suite (18 unit, 3 integration); one "integration" test patches litellm and never makes a live call, making it a misplaced unit test. |
| .env.example | Adds MiniMax env var documentation; comment incorrectly states temperature=0 is converted to 1.0 when the provider actually clamps to 0.01. |
| src/proxy_app/provider_urls.py | Adds minimax → https://api.minimax.io/v1 to PROVIDER_URL_MAP; straightforward one-liner, no issues. |
| pytest.ini | New pytest config enabling strict asyncio mode and registering the integration marker; clean addition. |
| tests/conftest.py | Adds src/ to sys.path so tests can import without installing the package; standard pattern, no issues. |
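The conftest.py shim described in the table is typically just a few lines. A sketch, assuming tests/ sits next to src/ at the repo root (the actual file may differ in detail):

```python
# tests/conftest.py -- prepend the repo's src/ directory to sys.path so the
# test suite can import rotator_library without installing the package.
import sys
from pathlib import Path

SRC_DIR = Path(__file__).resolve().parent.parent / "src"
if str(SRC_DIR) not in sys.path:
    sys.path.insert(0, str(SRC_DIR))
```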
Sequence Diagram
sequenceDiagram
participant Proxy as Proxy App
participant Rotator as Rotator Library
participant MM as MiniMaxProvider
participant LiteLLM as LiteLLM
participant API as MiniMax API
Proxy->>Rotator: request (model, messages, temperature, ...)
Rotator->>MM: has_custom_logic()?
MM-->>Rotator: True
Rotator->>MM: acompletion(**kwargs)
note over MM: Clamp temperature ≤ 0 → 0.01<br/>Strip credential_identifier,<br/>transaction_context
MM->>LiteLLM: acompletion(**kwargs)
LiteLLM->>API: POST /v1/chat/completions
API-->>LiteLLM: ModelResponse
LiteLLM-->>MM: ModelResponse
MM-->>Rotator: ModelResponse
Rotator-->>Proxy: ModelResponse
note over MM,API: Model Discovery (startup / refresh)
Rotator->>MM: get_models(api_key, client)
MM->>API: GET /v1/models (hardcoded URL — ignores MINIMAX_API_BASE)
alt API reachable
API-->>MM: {data: [{id: ...}, ...]}
MM-->>Rotator: ["minimax/MiniMax-M2.7", ...]
else API unreachable / empty
MM-->>Rotator: static fallback list
end
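The model-discovery branch of the diagram can be sketched roughly as below. This is hedged pseudocode made runnable: the real `get_models` signature and prefixing logic live in minimax_provider.py, and the hardcoded URL deliberately reproduces the issue flagged in review (MINIMAX_API_BASE is ignored).

```python
# Rough sketch of get_models(): query the live /v1/models endpoint and fall
# back to the static list on any failure or an empty response.
FALLBACK_MODELS = [
    "minimax/MiniMax-M2.7",
    "minimax/MiniMax-M2.7-highspeed",
    "minimax/MiniMax-M2.5",
    "minimax/MiniMax-M2.5-highspeed",
]

async def get_models(api_key, client):
    try:
        response = await client.get(
            "https://api.minimax.io/v1/models",
            headers={"Authorization": f"Bearer {api_key}"},
        )
        response.raise_for_status()
        ids = [m["id"] for m in response.json().get("data", [])]
        if ids:
            return [f"minimax/{model_id}" for model_id in ids]
    except Exception:
        pass  # unreachable or malformed response -> static fallback
    return FALLBACK_MODELS
```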
Reviews (1): Last reviewed commit: "feat: add MiniMax as a first-class LLM p..."
    response = await client.get(
        "https://api.minimax.io/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
MINIMAX_API_BASE ignored in model discovery
get_models hardcodes https://api.minimax.io/v1/models as the endpoint URL. This means that when a user sets MINIMAX_API_BASE to a custom URL (e.g. a private deployment or staging endpoint), model discovery will still hit the public MiniMax API rather than the overridden base. The resulting 404 / connection error silently falls back to the static model list, making the env-var override appear broken for model discovery.
    base_url = os.getenv("MINIMAX_API_BASE", "https://api.minimax.io/v1").rstrip("/")
    response = await client.get(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
You'll also need to add import os at the top if it is not already present.
    # Note: MiniMax requires temperature > 0. Set OVERRIDE_TEMPERATURE_ZERO=set (below)
    # to automatically convert temperature=0 requests to temperature=1.0.
Incorrect temperature value in comment
The comment says OVERRIDE_TEMPERATURE_ZERO will convert temperature=0 requests to 1.0, but the MiniMax provider actually clamps to 0.01 (_MINIMAX_TEMPERATURE_MIN). Users relying on this comment will be surprised when they observe 0.01 in practice.
    # Note: MiniMax requires temperature > 0. Set OVERRIDE_TEMPERATURE_ZERO=set (below)
    # to automatically convert temperature=0 requests to 0.01.
    import httpx
    import logging
    from typing import List, Dict, Any, AsyncGenerator, Union

    import pytest
    import pytest_asyncio
pytest_asyncio is imported but never referenced in the test file. The async test decorator used throughout is @pytest.mark.asyncio (from pytest), not from pytest_asyncio directly.
    import pytest
    import os
    from unittest.mock import AsyncMock, MagicMock, patch
    @pytest.mark.asyncio
    async def test_live_temperature_clamping(self):
        """Sending temperature=0 through the provider should not raise an error."""
        import httpx
        from rotator_library.providers.minimax_provider import MiniMaxProvider

        provider = MiniMaxProvider()

        with patch("litellm.acompletion") as mock_acompletion:
            mock_acompletion.return_value = MagicMock()
            await provider.acompletion(
                MagicMock(),
                model="minimax/MiniMax-M2.7",
                messages=[{"role": "user", "content": "hi"}],
                temperature=0,
            )
            called_kwargs = mock_acompletion.call_args.kwargs
            assert called_kwargs["temperature"] > 0
Integration test patches litellm — no live call is ever made
test_live_temperature_clamping uses patch("litellm.acompletion"), so the real MiniMax API is never contacted even when MINIMAX_API_KEY is set. The test is functionally identical to the unit tests in TestTemperatureClamping and doesn't benefit from living in the integration class (it consumes the required API key fixture but never uses self.api_key).
Consider either removing it from the integration class and adding it to TestTemperatureClamping, or replacing it with a true live call that actually validates MiniMax accepts temperature=0.01.
Actionable comments posted: 4
📒 Files selected for processing (6)
- .env.example
- pytest.ini
- src/proxy_app/provider_urls.py
- src/rotator_library/providers/minimax_provider.py
- tests/conftest.py
- tests/test_minimax_provider.py
📜 Review details
⏰ Checks skipped due to the 90s timeout (2): Greptile Review, review-pr
🧰 Additional context used
🪛 Ruff (0.15.7)
src/rotator_library/providers/minimax_provider.py
[warning] 6-6: Import from collections.abc instead: AsyncGenerator
Import from collections.abc
(UP035)
[warning] 6-6: typing.List is deprecated, use list instead
(UP035)
[warning] 6-6: typing.Dict is deprecated, use dict instead
(UP035)
[warning] 41-41: Docstring contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF002)
[warning] 45-45: Docstring contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF002)
[warning] 86-86: Logging statement uses f-string
(G004)
[warning] 91-91: Logging statement uses f-string
(G004)
[warning] 93-93: Do not catch blind exception: Exception
(BLE001)
[warning] 95-95: Logging statement uses f-string
(G004)
[warning] 101-101: Logging statement uses f-string
(G004)
[warning] 118-118: Unused method argument: client
(ARG002)
[warning] 119-119: Dynamically typed expressions (typing.Any) are disallowed in **kwargs
(ANN401)
[warning] 120-120: Use X | Y for type annotations
Convert to X | Y
(UP007)
[warning] 132-133: Logging statement uses f-string
(G004)
tests/test_minimax_provider.py
[warning] 20-20: Missing return type annotation for private function _make_models_response
(ANN202)
[warning] 26-26: Comment contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF003)
[warning] 150-150: Comment contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF003)
[warning] 174-174: Missing return type annotation for private function fake_acompletion
(ANN202)
[warning] 174-174: Missing type annotation for **kw
(ANN003)
[warning] 199-199: Missing return type annotation for private function fake_acompletion
(ANN202)
[warning] 199-199: Missing type annotation for **kw
(ANN003)
[warning] 221-221: Missing return type annotation for private function fake_acompletion
(ANN202)
[warning] 221-221: Missing type annotation for **kw
(ANN003)
[warning] 243-243: Missing return type annotation for private function fake_acompletion
(ANN202)
[warning] 243-243: Missing type annotation for **kw
(ANN003)
[warning] 264-264: Missing return type annotation for private function fake_acompletion
(ANN202)
[warning] 264-264: Missing type annotation for **kw
(ANN003)
[warning] 282-282: Comment contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF003)
[warning] 330-330: Comment contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF003)
[warning] 346-346: Comment contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF003)
[warning] 365-365: String contains ambiguous – (EN DASH). Did you mean - (HYPHEN-MINUS)?
(RUF001)
    # Note: MiniMax requires temperature > 0. Set OVERRIDE_TEMPERATURE_ZERO=set (below)
    # to automatically convert temperature=0 requests to temperature=1.0.
    #MINIMAX_API_KEY_1="YOUR_MINIMAX_API_KEY"
Make the temperature override hint actionable.
This note tells users to set OVERRIDE_TEMPERATURE_ZERO "below", but the example file never actually shows that variable. As written, the workaround is not discoverable, and it does not explain how the global override relates to the provider's 0.01 clamp.
Suggested fix
 # Note: MiniMax requires temperature > 0. Set OVERRIDE_TEMPERATURE_ZERO=set (below)
 # to automatically convert temperature=0 requests to temperature=1.0.
+#OVERRIDE_TEMPERATURE_ZERO=set
 #MINIMAX_API_KEY_1="YOUR_MINIMAX_API_KEY"
    "cohere": "https://api.cohere.ai/v1",
    "bedrock": "https://bedrock-runtime.us-east-1.amazonaws.com",
    "openrouter": "https://openrouter.ai/api/v1",
    "minimax": "https://api.minimax.io/v1",
Honor MINIMAX_API_BASE before the hardcoded default.
get_provider_endpoint() resolves PROVIDER_URL_MAP before checking <PROVIDER>_API_BASE, so adding minimax here makes the documented MINIMAX_API_BASE setting unreachable. That conflicts with .env.example and src/rotator_library/litellm_providers.py:593-602, and it can send requests to the wrong host whenever a custom MiniMax base is configured.
Suggested fix
-    # First, check the hardcoded map
-    base_url = PROVIDER_URL_MAP.get(provider)
-
-    # If not found, check for custom provider via environment variable
-    if not base_url:
-        api_base_env = f"{provider.upper()}_API_BASE"
-        base_url = os.getenv(api_base_env)
-        if not base_url:
-            return None
+    api_base_env = f"{provider.upper()}_API_BASE"
+    base_url = os.getenv(api_base_env) or PROVIDER_URL_MAP.get(provider)
+    if not base_url:
+        return None
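A minimal runnable version of the env-override-first order suggested here (a sketch: the real get_provider_endpoint handles many more providers and edge cases, and PROVIDER_URL_MAP is trimmed to the entry under discussion):

```python
import os

# Trimmed-down map containing only the entry this review comment is about.
PROVIDER_URL_MAP = {"minimax": "https://api.minimax.io/v1"}

def get_provider_endpoint(provider: str):
    # Prefer an explicit <PROVIDER>_API_BASE override, then the hardcoded map.
    base_url = os.getenv(f"{provider.upper()}_API_BASE") or PROVIDER_URL_MAP.get(provider)
    return base_url
```

With this ordering, setting MINIMAX_API_BASE redirects both completions and any code that resolves the endpoint through this helper, while unset environments still get the public default.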
+ return None🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/proxy_app/provider_urls.py` at line 33, PROVIDER_URL_MAP currently
hardcodes "minimax", which prevents honoring the MINIMAX_API_BASE env var;
change the logic so get_provider_endpoint checks for an environment override
(key f"{provider.upper()}_API_BASE" or specifically MINIMAX_API_BASE for
provider "minimax") before falling back to PROVIDER_URL_MAP, or remove the
"minimax" hardcoded entry and instead have PROVIDER_URL_MAP supply only defaults
used when the env var is absent; update get_provider_endpoint (and any lookup
code that uses PROVIDER_URL_MAP) to prefer
os.environ.get(f"{provider.upper()}_API_BASE") and return that if present,
otherwise use PROVIDER_URL_MAP["minimax"].
    response = await client.get(
        "https://api.minimax.io/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
Model discovery still ignores MINIMAX_API_BASE.
The provider advertises an overridable base URL, but Lines 77-80 always call the public https://api.minimax.io/v1/models endpoint. With a custom gateway/base URL, model discovery uses a different host than completions and sends the bearer token to that default endpoint.
Suggested fix
+import os
 import httpx
 import logging
 from typing import List, Dict, Any, AsyncGenerator, Union
 import litellm
@@
     try:
+        api_base = os.getenv("MINIMAX_API_BASE", "https://api.minimax.io/v1").rstrip("/")
         response = await client.get(
-            "https://api.minimax.io/v1/models",
+            f"{api_base}/models",
             headers={"Authorization": f"Bearer {api_key}"},
         )
    with patch("litellm.acompletion") as mock_acompletion:
        mock_acompletion.return_value = MagicMock()
        await provider.acompletion(
            MagicMock(),
            model="minimax/MiniMax-M2.7",
            messages=[{"role": "user", "content": "hi"}],
            temperature=0,
        )
        called_kwargs = mock_acompletion.call_args.kwargs
        assert called_kwargs["temperature"] > 0
This integration test is still a unit test.
TestMiniMaxIntegration.test_live_temperature_clamping patches litellm.acompletion instead of making a real provider call, meaning it never uses self.api_key and never validates the end-to-end path through MiniMax. The test only confirms that the internal clamping logic passes temperature > 0 to the mocked call. Since TestTemperatureClamping already covers the unit-level clamping behavior, either move this test to that unit class or convert it to a real live test like test_live_chat_completion.
Summary

- Adds MiniMaxProvider to the rotator library so the proxy auto-discovers and rotates MiniMax API keys with zero extra configuration.
- MiniMax rejects temperature=0; the provider silently raises it to 0.01 before forwarding to LiteLLM.
- Adds minimax: https://api.minimax.io/v1 in PROVIDER_URL_MAP for the quota viewer and other tooling.
- Updates .env.example with the API key, optional base URL override, and a note about the temperature constraint.
- Unit tests run with no key; integration tests run only when MINIMAX_API_KEY is set.

Models supported
- MiniMax-M2.7
- MiniMax-M2.7-highspeed
- MiniMax-M2.5
- MiniMax-M2.5-highspeed

Models are fetched live from the MiniMax /v1/models endpoint; the list above is the static fallback used when the endpoint is unreachable.

Configuration
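For example, a .env configuration might look like the following (placeholder values; variable names follow .env.example):

```shell
# MiniMax (key rotation picks up numbered keys)
MINIMAX_API_KEY_1="YOUR_MINIMAX_API_KEY"
# Optional: route requests through a private gateway instead of the default
#MINIMAX_API_BASE="https://api.minimax.io/v1"
```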
Test plan

- pytest -m 'not integration' tests/test_minimax_provider.py – all 18 should pass with no API key.
- MINIMAX_API_KEY=$KEY pytest -m integration tests/test_minimax_provider.py – validates live model discovery and chat completion.