Fixes #1489

What changed

handle_gemini_json and handle_gemini_tools assumed the model is always bound at client creation time (the native Gemini SDK pattern via from_gemini()). When using from_litellm() with a LiteLLM Router, the model is passed at request time in kwargs, which immediately triggered a ConfigurationError.

The fix detects the LiteLLM path by checking if model is present in kwargs:

GEMINI_JSON (LiteLLM path):

  • Injects JSON schema instructions into the system message (same as native)
  • Sets response_format with type: json_object (OpenAI style, which LiteLLM converts)
  • Skips update_gemini_kwargs() since LiteLLM handles message format conversion internally

GEMINI_TOOLS (LiteLLM path):

  • Uses OpenAI-style tool definitions and tool_choice (instead of Gemini-specific gemini_schema and tool_config)
  • Skips update_gemini_kwargs() for the same reason

The native Gemini SDK path (no model in kwargs) is completely unchanged.
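To make the branching concrete, here is a minimal sketch of what the JSON-mode handler could look like on the new path. The handler signature, return value, and the exact instruction text are assumptions for illustration only; the real change lives in instructor/providers/gemini/utils.py and uses the shared _inject_json_schema_message helper mentioned under Changes below.

from typing import Any


def _inject_json_schema_message(response_model: Any, kwargs: dict[str, Any]) -> None:
    # Assumed helper: prepend (or extend) a system message with JSON schema
    # instructions, mirroring what the native Gemini path already does.
    instruction = (
        "Respond only with JSON that matches this schema:\n"
        f"{response_model.model_json_schema()}"
    )
    messages = kwargs.setdefault("messages", [])
    if messages and messages[0].get("role") == "system":
        messages[0]["content"] = f"{messages[0]['content']}\n\n{instruction}"
    else:
        messages.insert(0, {"role": "system", "content": instruction})


def handle_gemini_json(response_model: Any, new_kwargs: dict[str, Any]):
    if "model" in new_kwargs:
        # LiteLLM path: the model arrives at request time, so keep OpenAI-style
        # kwargs and let LiteLLM convert the message format internally.
        _inject_json_schema_message(response_model, new_kwargs)
        new_kwargs["response_format"] = {"type": "json_object"}
        return response_model, new_kwargs
    # Native Gemini SDK path (model bound at client creation): unchanged.
    ...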

Changes

  • instructor/providers/gemini/utils.py: Branch handle_gemini_json and handle_gemini_tools based on whether model is in kwargs. Extract the shared schema injection into a _inject_json_schema_message helper (the tools-side branch is sketched after this list).
  • tests/llm/test_gemini_litellm_compat.py: 13 unit tests covering both GEMINI_JSON and GEMINI_TOOLS through the LiteLLM path, including the exact reproduction from #1489 (Gemini specific Modes do not work with LiteLLM).
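A rough sketch of the tools-side branch, under the same assumptions as above; the OpenAI-style tool payload shape is inferred from the bullets in What changed rather than copied from the diff.

from typing import Any


def handle_gemini_tools(response_model: Any, new_kwargs: dict[str, Any]):
    if "model" in new_kwargs:
        # LiteLLM path: OpenAI-style tool definition and tool_choice instead of
        # the Gemini-specific gemini_schema / tool_config.
        tool_name = response_model.__name__
        new_kwargs["tools"] = [
            {
                "type": "function",
                "function": {
                    "name": tool_name,
                    "description": response_model.__doc__ or "",
                    "parameters": response_model.model_json_schema(),
                },
            }
        ]
        new_kwargs["tool_choice"] = {
            "type": "function",
            "function": {"name": tool_name},
        }
        return response_model, new_kwargs
    # Native Gemini SDK path: unchanged.
    ...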

Testing

All 45 tests pass (13 new + 32 existing Bedrock tests):

tests/llm/test_gemini_litellm_compat.py::test_gemini_json_litellm_path_injects_schema PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_json_litellm_path_sets_response_format PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_json_litellm_path_preserves_model PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_json_litellm_path_keeps_openai_message_format PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_json_litellm_path_no_response_model PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_json_litellm_path_appends_to_existing_system PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_json_native_path_no_model PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_tools_litellm_path_uses_openai_schema PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_tools_litellm_path_sets_tool_choice PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_tools_litellm_path_preserves_model PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_tools_litellm_path_keeps_openai_message_format PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_tools_litellm_path_no_response_model PASSED
tests/llm/test_gemini_litellm_compat.py::test_gemini_json_litellm_router_reproduction PASSED
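For illustration, here is roughly what the first of these tests checks. The handler's exact signature and return value, and the PersonalInfo fields, are assumptions; the real tests live in tests/llm/test_gemini_litellm_compat.py.

from pydantic import BaseModel

from instructor.providers.gemini.utils import handle_gemini_json


# Illustrative response model; fields are placeholders.
class PersonalInfo(BaseModel):
    name: str
    age: int


def test_gemini_json_litellm_path_injects_schema():
    # "model" in kwargs marks the LiteLLM path (model passed at request time).
    kwargs = {
        "model": "gemini-2.0-flash",
        "messages": [{"role": "user", "content": "Extract: Jane is 31."}],
    }

    _, new_kwargs = handle_gemini_json(PersonalInfo, kwargs)

    # Schema instructions are injected into a system message, and the
    # OpenAI-style response_format is set for LiteLLM to convert.
    assert new_kwargs["messages"][0]["role"] == "system"
    assert "PersonalInfo" in new_kwargs["messages"][0]["content"]
    assert new_kwargs["response_format"] == {"type": "json_object"}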

With this fix, the code from #1489 works as expected:

from litellm import Router
from pydantic import BaseModel
import instructor


# Illustrative response model; the exact fields used in #1489 may differ.
class PersonalInfo(BaseModel):
    name: str
    age: int


router = Router(model_list=[...])
client = instructor.from_litellm(router.completion, mode=instructor.Mode.GEMINI_JSON)

# This now works instead of raising ConfigurationError
response = client.chat.completions.create(
    model="gemini-2.0-flash",
    response_model=PersonalInfo,
    messages=[...],
)

Fixes 567-labs#1489. When using from_litellm() with Mode.GEMINI_JSON or
Mode.GEMINI_TOOLS, the handler immediately raised ConfigurationError
because it required the model to be set at patch time (the native
Gemini SDK pattern), not at request time (the LiteLLM pattern).

The fix detects the LiteLLM path by checking if "model" is in
kwargs. When it is:
- GEMINI_JSON: injects JSON schema into system message and sets
  response_format (OpenAI style), skips Gemini message conversion
- GEMINI_TOOLS: uses OpenAI-style tool definitions and tool_choice,
  skips Gemini message conversion

The native Gemini SDK path (no model in kwargs) is completely
unchanged. LiteLLM handles its own message format conversion
internally, so we just need to set up the right parameters.

Signed-off-by: debu-sinha <debusinha2009@gmail.com>