Description
Problem
The integration does not work with Ollama cloud models (like `gpt-oss:120b-cloud` and `gpt-oss:20b-cloud`) because they return responses with `Content-Type: text/plain` instead of `application/json`.
Error Message
```
Ollama processing error: 200, message='Attempt to decode JSON with unexpected mimetype: text/plain; charset=utf-8'
```
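For reference, the error can be reproduced outside Home Assistant with a few lines of aiohttp (a minimal sketch, assuming a local Ollama server on `localhost:11434` with the cloud model available):

```python
import asyncio

import aiohttp

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://localhost:11434/api/chat",
            json={
                "model": "gpt-oss:120b-cloud",
                "messages": [{"role": "user", "content": "Say hello"}],
                "stream": False,
            },
        ) as resp:
            # resp.json() validates the Content-Type header before decoding,
            # so the text/plain header raises aiohttp.ContentTypeError even
            # though the body is valid JSON.
            await resp.json()

asyncio.run(main())
```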
Test Results
Local Model (Works ✅)
Command:
```sh
curl -i -X POST http://localhost:11434/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1:8b-instruct-q4_K_M",
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": false
  }'
```

Response Headers:

```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Sun, 21 Dec 2025 02:40:35 GMT
Content-Length: 349
```
Response Body (JSON):
```json
{
  "model": "llama3.1:8b-instruct-q4_K_M",
  "created_at": "2025-12-21T02:40:35.549464Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How can I assist you today?"
  },
  "done": true,
  "done_reason": "stop"
}
```

Result: ✅ Works correctly; the Content-Type is `application/json`.
Cloud Model (Fails ❌)
Command:
```sh
curl -i -X POST http://localhost:11434/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-oss:120b-cloud",
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": false
  }'
```

Response Headers:

```
HTTP/1.1 200 OK
Date: Sun, 21 Dec 2025 02:40:37 GMT
Content-Type: text/plain; charset=utf-8
Transfer-Encoding: chunked
```
Response Body (valid JSON, but the Content-Type is wrong):

```json
{
  "model": "gpt-oss:120b-cloud",
  "remote_model": "gpt-oss:120b",
  "remote_host": "https://ollama.com:443",
  "created_at": "2025-12-21T02:40:42.017212302Z",
  "message": {
    "role": "assistant",
    "content": "Hello! 👋 How can I assist you today?",
    "thinking": "User says \"Say hello\". Simple. Should respond with a greeting."
  },
  "done": true,
  "done_reason": "stop"
}
```

Result: ❌ Fails; the Content-Type is `text/plain`, but the response body IS valid JSON.
Affected Models
✅ Working (Local Models)
- `llama3.1:8b-instruct-q4_K_M`
- `llama3.1:70b-instruct`
- Other local Ollama models

❌ Not Working (Cloud Models)
- `gpt-oss:120b-cloud`
- `gpt-oss:20b-cloud`
- All Ollama cloud models (models with the `-cloud` suffix; see the snippet below)
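For illustration only, the affected models can be identified by their naming convention (a hypothetical helper, not part of the integration):

```python
def is_ollama_cloud_model(model: str) -> bool:
    # Ollama cloud models carry a "-cloud" suffix, e.g. "gpt-oss:120b-cloud";
    # local models such as "llama3.1:8b-instruct-q4_K_M" do not.
    return model.endswith("-cloud")
```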
What Happens
- Local Ollama models work fine: they return `Content-Type: application/json`.
- Cloud Ollama models return `Content-Type: text/plain` even though the response body is valid JSON.
- The code at line 888 in `coordinator.py` uses `await resp.json()`, which checks the Content-Type header.
- aiohttp throws an error when the Content-Type is not `application/json`, as the sketch after this list demonstrates.
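The behavior can be demonstrated without Ollama at all. The following self-contained sketch (the port and handler are hypothetical, using aiohttp's own web server) returns a valid JSON body with a `text/plain` header, so `resp.json()` raises while `json.loads(await resp.text())` succeeds:

```python
import asyncio
import json

import aiohttp
from aiohttp import web

async def handler(request: web.Request) -> web.Response:
    # Mimic an Ollama cloud model: a valid JSON body, but a text/plain header.
    return web.Response(text='{"done": true}', content_type="text/plain")

async def main() -> None:
    app = web.Application()
    app.router.add_post("/api/chat", handler)
    runner = web.AppRunner(app)
    await runner.setup()
    await web.TCPSite(runner, "localhost", 8089).start()

    async with aiohttp.ClientSession() as session:
        async with session.post("http://localhost:8089/api/chat", json={}) as resp:
            try:
                await resp.json()  # header check fails -> ContentTypeError
            except aiohttp.ContentTypeError as err:
                print(f"resp.json() raised: {err}")
            # The body itself parses fine once the header check is bypassed.
            print(json.loads(await resp.text()))

    await runner.cleanup()

asyncio.run(main())
```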
Expected Behavior
The integration should work with both local and cloud Ollama models. The response body from cloud models is valid JSON, so it should be parsed even if the Content-Type header is text/plain.
Code Location
The problem is in `custom_components/ai_automation_suggester/coordinator.py` at line 888:
```python
async def _ollama(self, prompt: str) -> str | None:
    # ... configuration code ...
    async with self.session.post(endpoint, json=body, timeout=timeout) as resp:
        if resp.status != 200:
            self._last_error = f"Ollama error {resp.status}: {await resp.text()}"
            return None
        res = await resp.json()  # <-- LINE 888: this fails when Content-Type is text/plain
```

Suggested Fix
Change the code to handle both `application/json` and `text/plain` Content-Type headers:
Option 1: Try/Except with fallback

```python
try:
    res = await resp.json()
except aiohttp.ContentTypeError:
    # Fallback: parse as JSON even if Content-Type is text/plain
    text = await resp.text()
    res = json.loads(text)
```

Option 2: Always parse as text (simpler)

```python
text = await resp.text()
res = json.loads(text)
```

Both variants assume the `json` module is imported at the top of `coordinator.py`.
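A third variant may be worth considering: aiohttp's `ClientResponse.json()` accepts a `content_type` argument, and passing `None` disables the header check entirely while still decoding the body as JSON.

Option 3: Disable aiohttp's Content-Type check

```python
# content_type=None tells aiohttp to skip Content-Type validation;
# the body is still decoded with the standard JSON parser.
res = await resp.json(content_type=None)
```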
Additional Context
- This is a known limitation of Ollama cloud models: they always return a `text/plain` Content-Type even when the response is valid JSON.
- The response body structure is the same for both local and cloud models (both are valid JSON).
- Cloud models include additional fields like `remote_model`, `remote_host`, and sometimes `thinking` in the message; see the sketch after this list.
- The integration should handle this case, since the actual content is always JSON.
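To illustrate: once the body is parsed, a single code path can serve both model types. A sketch (the `extract_message` helper is hypothetical, not from the integration):

```python
def extract_message(res: dict) -> tuple[str, str | None]:
    # The required fields are identical for local and cloud responses;
    # cloud-only extras ("remote_model", "remote_host", "thinking") are
    # simply optional.
    message = res.get("message", {})
    return message.get("content", ""), message.get("thinking")
```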
Software Versions
- Home Assistant Version: [Please add your version]
- Integration Version: [Please add your version]
- Ollama Version: [Please add: `ollama --version`]
- Python Version: [If available]
Environment
- Ollama Server: Local (localhost:11434)
- Cloud Models: Accessed via Ollama cloud service (ollama.com)
- Network: Direct connection to Ollama API