Closed
Changes from all commits
83 commits
3590c43
docs: LLM profiles design + example profile
openhands-agent Oct 18, 2025
9b1e3db
llm: add profile_id field to LLM (profile filename identifier)
openhands-agent Oct 18, 2025
21efefe
feat(llm): add ProfileManager and eagerly register profiles at conver…
openhands-agent Oct 18, 2025
46ca1b7
chore: stop tracking local runtime and worktree files; add to .gitignore
openhands-agent Oct 18, 2025
5efdaee
chore: only ignore bead databases
enyst Oct 18, 2025
9cbf67f
test: cover llm profile manager
enyst Oct 18, 2025
dfab517
Update .gitignore
enyst Oct 18, 2025
441eb25
Improve LLM profile manager persistence
enyst Oct 18, 2025
e7cd039
Add example for managing LLM profiles
enyst Oct 18, 2025
269610a
Document plan for profile references
enyst Oct 18, 2025
d0ab952
Integrate profile-aware persistence
enyst Oct 19, 2025
f74d050
Simplify profile registration logging
enyst Oct 19, 2025
df308fb
Normalize inline_mode naming
enyst Oct 19, 2025
4d293db
Simplify profile_id sync in ProfileManager
enyst Oct 19, 2025
7d1a525
Rename profile sync helper
enyst Oct 19, 2025
ec45ed5
LLMRegistry handles profile management
enyst Oct 19, 2025
1566df4
docs: clarify LLMRegistry profile guidance
enyst Oct 19, 2025
8f8b5b9
refactor: rename profile persistence helpers
enyst Oct 19, 2025
a3efa6e
refactor: split profile transform helpers
enyst Oct 19, 2025
17617aa
style: use f-strings in LLMRegistry logging
enyst Oct 19, 2025
9134aa1
Update openhands/sdk/llm/llm_registry.py
enyst Oct 19, 2025
36ab580
chore: stop tracking scripts/worktree.sh
enyst Oct 19, 2025
cea6a0d
Merge upstream main into agent-sdk-18-profile-manager
enyst Oct 21, 2025
12eec55
fix: remove runtime llm switching
enyst Oct 21, 2025
03b4600
style: use f-string for registry logging
enyst Oct 21, 2025
acf67e3
docs: expand LLM profile example
enyst Oct 21, 2025
218728e
Refine LLM profile persistence
enyst Oct 21, 2025
75e8ecd
Update LLM profile docs for usage_id semantics
enyst Oct 22, 2025
8511524
Merge remote-tracking branch 'upstream/main' into agent-sdk-18-profil…
enyst Oct 23, 2025
1f3adab
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Oct 24, 2025
96ba8e9
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Oct 25, 2025
142faee
fix LLM mutation for profiles to respect immutability; add docstring;…
enyst Oct 25, 2025
82138dd
refactor: keep LLM profile expansion at persistence layer
enyst Oct 25, 2025
b6511a9
Merge branch 'main' of github.com:All-Hands-AI/agent-sdk into agent-s…
enyst Oct 25, 2025
f5404b6
fix: restore LLM profile validation behavior
enyst Oct 26, 2025
85bc698
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Oct 26, 2025
ba4bd50
harden profile handling
enyst Oct 26, 2025
99a422c
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Nov 6, 2025
5c52fa5
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Nov 20, 2025
b69db09
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Nov 28, 2025
5dc94c1
update to current state
enyst Nov 28, 2025
69d3a7d
remove deprecated from llm
enyst Nov 28, 2025
61f5b77
ruff
enyst Nov 28, 2025
2381da7
restore gitignore
enyst Nov 28, 2025
b2f80d3
Delete .openhands/microagents/vscode.md
enyst Nov 28, 2025
8a95dac
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Dec 1, 2025
0aa1164
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Dec 12, 2025
744f171
fix(llm): tolerate legacy profile fields
enyst Dec 12, 2025
24d59bd
fix(llm): keep profile loading strict
enyst Dec 12, 2025
a4d6cd4
fix(llm): reduce profile side effects
enyst Dec 12, 2025
075c9b2
test(utils): stabilize discriminated union suite
enyst Dec 12, 2025
ab3a265
single source of truth for persistence behavior
enyst Dec 13, 2025
82549cc
Merge branch 'main' of github.com:OpenHands/software-agent-sdk into a…
enyst Dec 13, 2025
f400d7d
Update openhands-sdk/openhands/sdk/persistence/__init__.py
enyst Dec 14, 2025
a112ddc
feat(llm): save API keys in LLM profiles by default and set 0600 perm…
enyst Dec 15, 2025
60bfbb2
Merge branch 'main' of github.com:OpenHands/software-agent-sdk into a…
enyst Dec 16, 2025
0d01065
Merge branch 'agent-sdk-18-profile-manager' of github.com:OpenHands/s…
enyst Dec 16, 2025
bc94774
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Dec 16, 2025
ad07b05
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Dec 17, 2025
2464633
Delete docs/llm_profiles.md
enyst Dec 18, 2025
db10002
Update openhands-sdk/openhands/sdk/llm/llm.py
enyst Dec 18, 2025
ce31e79
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Dec 18, 2025
1fe3929
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Dec 19, 2025
8625ff2
Merge remote-tracking branch 'upstream/main' into agent-sdk-18-profil…
enyst Dec 29, 2025
95d94c3
Merge remote-tracking branch 'upstream/main' into agent-sdk-18-profil…
enyst Dec 29, 2025
67ab2c0
ci: detect nested examples in docs check
enyst Dec 29, 2025
fab1d57
ci: fix nested examples regex
enyst Dec 29, 2025
926fb90
ci(docs): clarify example skip rationale
enyst Dec 30, 2025
5676592
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Jan 1, 2026
b2ea371
Merge main into agent-sdk-18-profile-manager
enyst Jan 6, 2026
2aa320d
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Jan 7, 2026
b5a01ad
fix(llm): reject unknown fields when loading profiles
enyst Jan 7, 2026
90257c5
Revert "fix(llm): reject unknown fields when loading profiles"
enyst Jan 7, 2026
7a83b34
refactor(persistence): default to LLM profiles, drop inline env toggle
enyst Jan 7, 2026
9530155
Update .gitignore
enyst Jan 7, 2026
69e259b
chore(examples): make llm profiles example last
enyst Jan 8, 2026
c6f5db7
Update examples/01_standalone_sdk/34_llm_profiles.py
enyst Jan 8, 2026
9ecab27
Merge branch 'main' into agent-sdk-18-profile-manager
xingyaoww Jan 8, 2026
cd3ab89
Merge branch 'main' into agent-sdk-18-profile-manager
xingyaoww Jan 8, 2026
9859f21
chore(examples): inline llm profiles script body
enyst Jan 8, 2026
23cb159
feat(llm): default profile persistence and drop inline key
enyst Jan 8, 2026
dcc83f5
Merge branch 'main' into agent-sdk-18-profile-manager
enyst Jan 8, 2026
6807c99
test: fix workflow model resolver + session api key env
enyst Jan 27, 2026
12 changes: 9 additions & 3 deletions .github/scripts/check_documented_examples.py
@@ -29,7 +29,12 @@ def find_documented_examples(docs_path: Path) -> set[str]:
"""
documented_examples: set[str] = set()

# Pattern to match example file references with arbitrary nesting depth.
# Pattern to match example file references.
#
# The agent-sdk examples tree includes nested modules (e.g.
# examples/02_remote_agent_server/05_custom_tool/custom_tools/log_data.py),
# so we intentionally support *arbitrary* nesting depth under examples/.
#
# Matches: examples/<dir>/.../<file>.py
pattern = r"examples/(?:[-\w]+/)+[-\w]+\.py"

@@ -81,8 +86,9 @@ def find_agent_sdk_examples(agent_sdk_path: Path) -> set[str]:
if relative_path_str.startswith("examples/03_github_workflows/"):
continue

# Skip LLM-specific tools examples: these are intentionally not
# enforced by the docs check. See discussion in PR #1486.
# Skip LLM-specific tools examples: these depend on external
# model/provider availability and are intentionally excluded from
# docs example enforcement.
if relative_path_str.startswith("examples/04_llm_specific_tools/"):
continue

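As a quick sanity check (not part of the diff), the updated pattern can be exercised against both a flat and a nested example path; the sample paths below come from the comment in the hunk above.

import re

pattern = r"examples/(?:[-\w]+/)+[-\w]+\.py"
samples = [
    "examples/01_standalone_sdk/34_llm_profiles.py",
    "examples/02_remote_agent_server/05_custom_tool/custom_tools/log_data.py",
]
for sample in samples:
    # fullmatch suffices here; the docs check scans documentation text with
    # the same pattern to collect referenced example paths.
    assert re.fullmatch(pattern, sample), sample
print("pattern matches flat and nested example paths")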
124 changes: 124 additions & 0 deletions examples/01_standalone_sdk/34_llm_profiles.py
@@ -0,0 +1,124 @@
"""Create and use an LLM profile with :class:`LLMRegistry`.

Run with::

uv run python examples/01_standalone_sdk/34_llm_profiles.py

Profiles are stored under ``~/.openhands/llm-profiles/<name>.json`` by default.
Set ``LLM_PROFILE_NAME`` to pick a profile.

Notes on credentials:
- New profiles include API keys by default when saved
- To omit secrets on disk, pass include_secrets=False to LLMRegistry.save_profile
"""

import json
import os
from pathlib import Path

from pydantic import SecretStr

from openhands.sdk import (
LLM,
Agent,
Conversation,
LLMRegistry,
Tool,
)
from openhands.tools.terminal import TerminalTool


PROFILE_NAME = os.getenv("LLM_PROFILE_NAME", "gpt-5-mini")


def ensure_profile_exists(registry: LLMRegistry, name: str) -> None:
"""Create a starter profile in the default directory when missing."""

if name in registry.list_profiles():
return

model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
base_url = os.getenv("LLM_BASE_URL")
api_key = os.getenv("LLM_API_KEY")

profile_defaults = LLM(
usage_id="agent",
model=model,
base_url=base_url,
api_key=SecretStr(api_key) if api_key else None,
temperature=0.2,
max_output_tokens=4096,
)
path = registry.save_profile(name, profile_defaults)
print(f"Created profile '{name}' at {path}")


def load_profile(registry: LLMRegistry, name: str) -> LLM:
llm = registry.load_profile(name)
# If profile was saved without secrets, allow providing API key via env var
if llm.api_key is None:
api_key = os.getenv("LLM_API_KEY")
if api_key:
llm = llm.model_copy(update={"api_key": SecretStr(api_key)})
return llm


if __name__ == "__main__": # pragma: no cover
registry = LLMRegistry()
ensure_profile_exists(registry, PROFILE_NAME)

llm = load_profile(registry, PROFILE_NAME)

tools = [Tool(name=TerminalTool.name)]
agent = Agent(llm=llm, tools=tools)

workspace_dir = Path(os.getcwd())
summary_path = workspace_dir / "summary_readme.md"
if summary_path.exists():
summary_path.unlink()

persistence_root = workspace_dir / ".conversations_llm_profiles"
conversation = Conversation(
agent=agent,
workspace=str(workspace_dir),
persistence_dir=str(persistence_root),
visualizer=None,
)

conversation.send_message(
"Read README.md in this workspace, create a concise summary in "
"summary_readme.md (overwrite it if it exists), and respond with "
"SUMMARY_READY when the file is written."
)
conversation.run()

if summary_path.exists():
print(f"summary_readme.md written to {summary_path}")
else:
print("summary_readme.md not found after first run")

conversation.send_message(
"Thanks! Delete summary_readme.md from the workspace and respond with "
"SUMMARY_REMOVED once it is gone."
)
conversation.run()

if summary_path.exists():
print("summary_readme.md still present after deletion request")
else:
print("summary_readme.md removed")

persistence_dir = conversation.state.persistence_dir
if persistence_dir is None:
raise RuntimeError("Conversation did not persist base state to disk")

base_state_path = Path(persistence_dir) / "base_state.json"
state_payload = json.loads(base_state_path.read_text())
llm_entry = state_payload.get("agent", {}).get("llm", {})
profile_in_state = llm_entry.get("profile_id")
print(f"Profile recorded in base_state.json: {profile_in_state}")
if profile_in_state != PROFILE_NAME:
print(
"Warning: profile_id in base_state.json does not match the profile "
"used at runtime."
)
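The module docstring above notes that saved profiles include API keys by default and that passing include_secrets=False to LLMRegistry.save_profile omits them. A minimal sketch of that flow, assuming the keyword behaves as the docstring describes (profile name, model, and env var are illustrative):

import os

from pydantic import SecretStr

from openhands.sdk import LLM, LLMRegistry

registry = LLMRegistry()
profile = LLM(usage_id="agent", model="litellm_proxy/openai/gpt-5-mini")

# Write the profile without secrets; the JSON on disk carries config only.
registry.save_profile("gpt-5-mini-nosecrets", profile, include_secrets=False)

# At load time, fall back to an environment variable for the key, exactly as
# load_profile() does in the example above.
loaded = registry.load_profile("gpt-5-mini-nosecrets")
api_key = os.getenv("LLM_API_KEY")
if loaded.api_key is None and api_key:
    loaded = loaded.model_copy(update={"api_key": SecretStr(api_key)})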
7 changes: 7 additions & 0 deletions examples/llm-profiles/gpt-5-mini.json
@@ -0,0 +1,7 @@
{
"model": "litellm_proxy/openai/gpt-5-mini",
"base_url": "https://llm-proxy.eval.all-hands.dev",
"temperature": 0.2,
"max_output_tokens": 4096,
"usage_id": "agent"
}
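This sample lives under examples/llm-profiles/ rather than the default ~/.openhands/llm-profiles/ directory mentioned in the example docstring, so the registry will not see it as-is. A hedged sketch of copying it into place and loading it (paths assume the repository root as the working directory):

import shutil
from pathlib import Path

from openhands.sdk import LLMRegistry

src = Path("examples/llm-profiles/gpt-5-mini.json")
profile_dir = Path.home() / ".openhands" / "llm-profiles"
profile_dir.mkdir(parents=True, exist_ok=True)
shutil.copy(src, profile_dir / src.name)

registry = LLMRegistry()
print(registry.list_profiles())  # expected to include "gpt-5-mini"
llm = registry.load_profile("gpt-5-mini")
print(llm.model)  # litellm_proxy/openai/gpt-5-mini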
1 change: 1 addition & 0 deletions openhands-sdk/openhands/sdk/agent/base.py
@@ -414,6 +414,7 @@ def model_dump_succint(self, **kwargs):
"""Like model_dump, but excludes None fields by default."""
if "exclude_none" not in kwargs:
kwargs["exclude_none"] = True

dumped = super().model_dump(**kwargs)
# remove tool schema details for brevity
if "tools" in dumped and isinstance(dumped["tools"], dict):
@@ -106,6 +106,9 @@ def __init__(
'monologue', 'alternating_pattern'. Values are integers
representing the number of repetitions before triggering.
"""
# Initialize the registry early so profile references resolve during resume.
self.llm_registry = LLMRegistry()

super().__init__() # Initialize with span tracking
# Mark cleanup as initiated as early as possible to avoid races or partially
# initialized instances during interpreter shutdown.
@@ -134,6 +137,7 @@
else None,
max_iterations=max_iteration_per_run,
stuck_detection=stuck_detection,
llm_registry=self.llm_registry,
)

# Default callback: persist every event to state
@@ -209,7 +213,6 @@ def _default_callback(e):
self.agent.init_state(self._state, on_event=self._on_event)

# Register existing llms in agent
self.llm_registry = LLMRegistry()
self.llm_registry.subscribe(self._state.stats.register_llm)
for llm in list(self.agent.get_all_llms()):
self.llm_registry.add(llm)
@@ -254,6 +257,7 @@ def send_message(self, message: str | Message, sender: str | None = None) -> None:

Args:
message: Either a string (which will be converted to a user message)

or a Message object
sender: Optional identifier of the sender. Can be used to track
message origin in multi-agent scenarios. For example, when
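For context, a hedged sketch of the caller-side flow these constructor changes enable, using only API calls already shown in the example script above (profile name and persistence directory are assumptions, and the resume behaviour is paraphrased from the diff rather than a tested run):

import os

from openhands.sdk import Agent, Conversation, LLMRegistry, Tool
from openhands.tools.terminal import TerminalTool

registry = LLMRegistry()
llm = registry.load_profile("gpt-5-mini")  # assumes this profile exists
agent = Agent(llm=llm, tools=[Tool(name=TerminalTool.name)])

conversation = Conversation(
    agent=agent,
    workspace=os.getcwd(),
    persistence_dir="./.conversations_llm_profiles",
    visualizer=None,
)
conversation.send_message("Say hello, then stop.")
conversation.run()

# When a conversation with this persistence directory is later reconstructed,
# the LLMRegistry created early in __init__ (see the diff above) is forwarded
# to ConversationState.create as llm_registry, so an agent LLM persisted as a
# profile reference can be expanded without inlined credentials.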
72 changes: 56 additions & 16 deletions openhands-sdk/openhands/sdk/conversation/state.py
@@ -3,7 +3,7 @@
from collections.abc import Sequence
from enum import Enum
from pathlib import Path
from typing import Any, Self
from typing import TYPE_CHECKING, Any, Self

from pydantic import Field, PrivateAttr, model_validator

@@ -18,6 +18,12 @@
from openhands.sdk.event.base import Event
from openhands.sdk.io import FileStore, InMemoryFileStore, LocalFileStore
from openhands.sdk.logger import get_logger


if TYPE_CHECKING:
from openhands.sdk.llm.llm_registry import LLMRegistry


from openhands.sdk.security.analyzer import SecurityAnalyzerBase
from openhands.sdk.security.confirmation_policy import (
ConfirmationPolicyBase,
@@ -180,6 +186,7 @@ def create(
persistence_dir: str | None = None,
max_iterations: int = 500,
stuck_detection: bool = True,
llm_registry: "LLMRegistry | None" = None,
) -> "ConversationState":
"""Create a new conversation state or resume from persistence.

@@ -196,13 +203,19 @@
history), but all other configuration can be freely changed: LLM,
agent_context, condenser, system prompts, etc.

When conversation state is persisted with LLM profile references (instead
of inlined credentials), pass an ``llm_registry`` so profile IDs can be
expanded during restore.

Args:
id: Unique conversation identifier
agent: The Agent to use (tools must match persisted on restore)
workspace: Working directory for agent operations
persistence_dir: Directory for persisting state and events
max_iterations: Maximum iterations per run
stuck_detection: Whether to enable stuck detection
llm_registry: Optional registry used to expand profile references when
conversations persist profile IDs instead of inline credentials.

Returns:
ConversationState ready for use
@@ -222,32 +235,59 @@
except FileNotFoundError:
base_text = None

context: dict[str, object] = {}
registry = llm_registry
if registry is None:
from openhands.sdk.llm.llm_registry import LLMRegistry

registry = LLMRegistry()
context["llm_registry"] = registry

# Ensure that any runtime-provided LLM without an explicit profile is
# persisted as a stable "default" profile, so conversation state can
# safely store only a profile reference.
agent = agent.model_copy(
update={"llm": registry.ensure_default_profile(agent.llm)}
)

# ---- Resume path ----
if base_text:
state = cls.model_validate(json.loads(base_text))
base_payload = json.loads(base_text)

# Restore the conversation with the same id
if state.id != id:
persisted_id = ConversationID(base_payload.get("id"))
if persisted_id != id:
raise ValueError(
f"Conversation ID mismatch: provided {id}, "
f"but persisted state has {state.id}"
f"but persisted state has {persisted_id}"
)

persisted_agent_payload = base_payload.get("agent")
if persisted_agent_payload is None:
raise ValueError("Persisted conversation is missing agent state")

# Attach event log early so we can read history for tool verification
state._fs = file_store
state._events = EventLog(file_store, dir_path=EVENTS_DIR)
event_log = EventLog(file_store, dir_path=EVENTS_DIR)

# Verify compatibility (agent class + tools)
agent.verify(state.agent, events=state._events)
persisted_agent = AgentBase.model_validate(
persisted_agent_payload,
context={"llm_registry": registry},
)
agent.verify(persisted_agent, events=event_log)

# Commit runtime-provided values (may autosave)
state._autosave_enabled = True
state.agent = agent
state.workspace = workspace
state.max_iterations = max_iterations
# Use runtime-provided Agent directly (PR #1542 / issue #1451)
base_payload["agent"] = agent.model_dump(
mode="json",
exclude_none=True,
context={"expose_secrets": True},
)
base_payload["workspace"] = workspace.model_dump(mode="json")
base_payload["max_iterations"] = max_iterations

state = cls.model_validate(base_payload, context=context)
state._fs = file_store
state._events = event_log

# Note: stats are already deserialized from base_state.json above.
# Do NOT reset stats here - this would lose accumulated metrics.
state._autosave_enabled = True

logger.info(
f"Resumed conversation {state.id} from persistent storage.\n"