🔴 Required Information
Describe the Bug:
ADK's LiteLlm wrapper is incompatible with LiteLLM v1.81.3+ when using Gemini 2.0+ models. The `_to_litellm_response_format` function always sends `response_schema` (OpenAPI format), but LiteLLM v1.81.3+ automatically converts this to `responseJsonSchema` (JSON Schema format) for Gemini 2.0+ models. This format mismatch causes structured output inconsistencies, preventing agent exit conditions from being satisfied.
Steps to Reproduce:
- Install `google-adk==1.23.0` and `litellm>=1.81.3`
- Configure a LiteLLM proxy with a Gemini 2.0+ model (e.g., `gemini-3.0-flash-preview`)
- Create an agent that uses structured output with `response_schema`
- Run the agent; exit conditions are never satisfied due to structured output inconsistencies
Expected Behavior:
ADK should send the appropriate response format based on the model version:
- Gemini 1.5: `response_schema` (OpenAPI format)
- Gemini 2.0+: `json_schema` type with JSON Schema format (see the payload sketch below)
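For concreteness, these are the two `response_format` payloads that behavior would map to (a hand-written sketch; the exact wrapper keys LiteLLM expects for the `json_schema` type are an assumption, see the table under Additional Context):

```python
# schema_dict stands for the agent's output schema as a plain dict,
# e.g. OutputSchema.model_json_schema() for the Pydantic model shown below.
schema_dict = {"type": "object", "properties": {"result": {"type": "string"}}}

# Gemini 1.5 - legacy OpenAPI-style parameter (what ADK sends today):
response_format_gemini_1_5 = {
    "type": "json_object",
    "response_schema": schema_dict,
}

# Gemini 2.0+ - json_schema type carrying a JSON Schema body (assumed layout):
response_format_gemini_2_plus = {
    "type": "json_schema",
    "json_schema": {"name": "OutputSchema", "schema": schema_dict},
}
```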
Observed Behavior:
Structured output fields are not properly populated, causing agent exit conditions to never be satisfied.
Environment Details:
- ADK Library Version: 1.23.0
- Desktop OS: macOS
- Python Version: 3.11
Model Information:
- Are you using LiteLLM: Yes (via proxy)
- Which model is being used: `gemini-3.0-flash-preview`
🟡 Optional Information
Regression:
This likely worked with LiteLLM < v1.81.3 (before PR #19314 was merged).
Logs:
N/A
Additional Context:
LiteLLM PR #19314 introduced automatic `responseJsonSchema` usage for Gemini 2.0+ models:

| Model | Parameter | Format |
|---|---|---|
| Gemini 1.5 | `response_schema` | OpenAPI (uppercase types) |
| Gemini 2.0+ | `response_json_schema` | JSON Schema (lowercase types) |
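To make the casing difference concrete, here is the same minimal schema in both formats (hand-written illustration, not captured traffic):

```python
# OpenAPI format (Gemini 1.5, response_schema): uppercase type names.
openapi_format = {"type": "OBJECT", "properties": {"result": {"type": "STRING"}}}

# JSON Schema format (Gemini 2.0+, response_json_schema): lowercase type names.
json_schema_format = {"type": "object", "properties": {"result": {"type": "string"}}}
```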
ADK's current implementation in `lite_llm.py` (lines 1473-1477, function `_to_litellm_response_format`):

```python
if _is_litellm_gemini_model(model):
  return {
      "type": "json_object",
      "response_schema": schema_dict,  # Always uses the old format
  }
```
Minimal Reproduction Code:
```python
from google.adk.models.lite_llm import LiteLlm
from google.adk.agents import LlmAgent
from pydantic import BaseModel


class OutputSchema(BaseModel):
  result: str
  is_complete: bool


model = LiteLlm(
    model="openai/gemini-3.0-flash-preview",
    api_base="https://your-litellm-proxy.com",
)

agent = LlmAgent(
    name="TestAgent",
    model=model,
    output_schema=OutputSchema,
    instruction="Process the input and set is_complete=True when done.",
)

# Exit conditions are never satisfied due to structured output inconsistencies.
```

How often has this issue occurred?:
- Always (100%) - when using LiteLLM v1.81.3+ with Gemini 2.0+ models