The cherry-picked model_picker test installed its own discord mock at
module-import time via a local _ensure_discord_mock(), overwriting
sys.modules['discord'] with a mock that lacked attributes other
gateway tests needed (Intents.default(), File, app_commands.Choice).
On pytest-xdist workers that collected test_discord_model_picker.py
first, the shared mock in tests/gateway/conftest.py got clobbered and
downstream tests failed with AttributeError / TypeError against
missing mock attrs. Classic sys.modules cross-test pollution (see
xdist-cross-test-pollution skill).
Fix:
- Extend the canonical _ensure_discord_mock() in tests/gateway/conftest.py
to cover everything the model_picker test needs: real View/Select/
Button/SelectOption classes (not MagicMock sentinels), an Embed
class that preserves title/description/color kwargs for assertion,
and Color.greyple.
- Strip the duplicated mock-setup block from test_discord_model_picker.py
and rely on the shared mock that conftest installs at collection
time.
Regression check:
scripts/run_tests.sh tests/gateway/ tests/hermes_cli/ -k 'discord or model or copilot or provider' -o 'addopts='
1291 passed (was 1288 passed + 3 xdist-ordered failures before this commit).
Keep Discord Copilot model switching responsive and current by refreshing picker data from the live catalog when possible, correcting the curated fallback list, and clearing stale controls before the switch completes.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Keep auxiliary provider resolution aligned with the switch and persisted main-provider paths when models.dev returns github-copilot slugs.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Xiaomi's API (api.xiaomimimo.com) requires lowercase model IDs like
"mimo-v2.5-pro" but rejects mixed-case names like "MiMo-V2.5-Pro"
that users copy from marketing docs or the ProviderEntry description.
Add _LOWERCASE_MODEL_PROVIDERS set and apply .lower() to model names
for providers in this set (currently just xiaomi) after stripping the
provider prefix. This ensures any case variant in config.yaml is
normalized before hitting the API.
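A minimal sketch of the normalization (the set name comes from this commit;
the helper wrapping it is an assumption):

```python
# Providers whose APIs require lowercase model identifiers.
_LOWERCASE_MODEL_PROVIDERS = {"xiaomi"}

def _normalize_model_name(provider: str, model: str) -> str:
    """Strip the provider prefix, then lowercase for providers that need it."""
    prefix = f"{provider}/"
    if model.startswith(prefix):
        model = model[len(prefix):]
    if provider in _LOWERCASE_MODEL_PROVIDERS:
        model = model.lower()  # "MiMo-V2.5-Pro" -> "mimo-v2.5-pro"
    return model
```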
Other providers (minimax, zai, etc.) are NOT affected — their APIs
accept mixed case (e.g. MiniMax-M2.7).
- Load prompt_caching.cache_ttl in AIAgent (5m default, 1h opt-in)
- Document DEFAULT_CONFIG and developer guide example
- Add unit tests for default, 1h, and invalid TTL fallback
Made-with: Cursor
Auxiliary tasks (session_search, flush_memories, approvals, compression,
vision, etc.) that route to a named custom provider declared under
config.yaml 'providers:' with 'api_mode: anthropic_messages' were
silently building a plain OpenAI client and POSTing to
{base_url}/chat/completions, which returns 404 on Anthropic-compatible
gateways that only expose /v1/messages.
Two gaps caused this:
1. hermes_cli/runtime_provider.py::_get_named_custom_provider — the
providers-dict branch (new-style) returned only name/base_url/api_key/
model and dropped api_mode. The legacy custom_providers-list branch
already propagated it correctly. The dict branch now parses and
returns api_mode via _parse_api_mode() in both match paths.
2. agent/auxiliary_client.py::resolve_provider_client — the named
custom provider block at ~L1740 ignored custom_entry['api_mode']
and unconditionally built an OpenAI client (only wrapping for
Codex/Responses). It now mirrors _try_custom_endpoint()'s three-way
dispatch: anthropic_messages → AnthropicAuxiliaryClient (async wrapped
in AsyncAnthropicAuxiliaryClient), codex_responses → CodexAuxiliaryClient,
otherwise plain OpenAI. An explicit task-level api_mode override
still wins over the provider entry's declared api_mode.
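Roughly, the dispatch now mirrored in resolve_provider_client looks like this
(client class names are from this commit; the wrapper signature and constructor
arguments are assumptions, and the async wrapping is omitted for brevity):

```python
def _client_for_custom_entry(custom_entry: dict, task_api_mode: str | None = None):
    # An explicit task-level api_mode override still wins over the entry's value.
    api_mode = task_api_mode or custom_entry.get("api_mode")
    base_url, api_key = custom_entry["base_url"], custom_entry["api_key"]
    if api_mode == "anthropic_messages":
        return AnthropicAuxiliaryClient(base_url=base_url, api_key=api_key)
    if api_mode == "codex_responses":
        return CodexAuxiliaryClient(base_url=base_url, api_key=api_key)
    # Default: plain OpenAI-wire client hitting {base_url}/chat/completions.
    return OpenAI(base_url=base_url, api_key=api_key)
```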
Fixes #15033
Tests: tests/agent/test_auxiliary_named_custom_providers.py gains a
TestProvidersDictApiModeAnthropicMessages class covering
- providers-dict preserves valid api_mode
- invalid api_mode values are dropped
- missing api_mode leaves the entry unchanged (no regression)
- resolve_provider_client returns (Async)AnthropicAuxiliaryClient for
api_mode=anthropic_messages
- full chain via get_text_auxiliary_client / get_async_text_auxiliary_client
with an auxiliary.<task> override
- providers without api_mode still use the OpenAI-wire path
When context compression fires mid-session, run_agent's _compress_context
ends the current session, creates a new child session linked by
parent_session_id, and resets the SQLite flush cursor. New messages land
in the child; the parent row ends up with message_count = 0. A user who
runs 'hermes --resume <original_id>' sees a blank chat even though the
transcript exists — just under a descendant id.
PR #12920 already fixed the exit banner to print the live descendant id
at session end, but that didn't help users who resume by a session id
captured BEFORE the banner update (scripts, sessions list, old terminal
scrollback) or who type the parent id manually.
Fix: add SessionDB.resolve_resume_session_id() which walks the
parent→child chain forward and returns the first descendant with at
least one message row. Wire it into all three resume entry points:
- HermesCLI._preload_resumed_session() (early resume at run() time)
- HermesCLI._init_agent() (the classical resume path)
- /resume slash command
Semantics preserved when the chain has no descendants with messages,
when the requested session already has messages, or when the id is
unknown. A depth cap of 32 guards against malformed loops.
This does NOT concatenate the pre-compression parent transcript into
the child — the whole point of compression is to shrink that, so
replaying it would blow the cache budget we saved. We just jump to
the post-compression child. The summary already reflects what was
compressed away.
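A rough sketch of the chain walk (method name and depth cap are from this
commit; the row-access helpers are stand-ins):

```python
def resolve_resume_session_id(self, session_id: str, max_depth: int = 32) -> str:
    """Follow parent->child links forward until a session with messages."""
    current = session_id
    for _ in range(max_depth):           # guards against malformed loops
        if self.message_count(current) > 0:
            return current                # requested/descendant session has messages
        child = self.first_child_of(current)  # stand-in; prefers most-recent child
        if child is None:
            return session_id             # no descendant has messages: passthrough
        current = child
    return session_id
```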
Tests: tests/hermes_state/test_resolve_resume_session_id.py covers
- the exact 6-session shape from the issue
- passthrough when session has messages / no descendants
- passthrough for nonexistent / empty / None input
- middle-of-chain redirects
- fork resolution (prefers most-recent child)
Closes #15000
Pin the behaviour added in the preceding commit — `_get_proxy_for_base_url()`
must return None for hosts covered by NO_PROXY and the HTTPS_PROXY value otherwise,
and the full `_create_openai_client()` path must NOT mount HTTPProxy for a
NO_PROXY host.
Refs: #14966
Follow-up to the allowed_channels wildcard fix in the preceding commit.
The same '*' literal trap affected two other Discord channel config lists:
- DISCORD_IGNORED_CHANNELS: '*' was stored as the literal string in the
ignored set, and the intersection check never matched real channel IDs,
so '*' was a no-op instead of silencing every channel.
- DISCORD_FREE_RESPONSE_CHANNELS: same shape — '*' never matched, so
the bot still required a mention everywhere.
Add a '*' short-circuit to both checks, matching the allowed_channels
semantics. Extend tests/gateway/test_discord_allowed_channels.py with
regression coverage for all three lists.
Refs: #14920
allowed_channels: "*" in config (or DISCORD_ALLOWED_CHANNELS="*" env var)
is meant to allow all channels, but the check was comparing numeric channel
IDs against the literal string set {"*"} via set intersection — always empty,
so every message was silently dropped.
Add a "*" short-circuit before the set intersection, consistent with every
other platform's allowlist handling (Signal, Slack, Telegram all do this).
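The guard is essentially one membership check ahead of the existing comparison
(names here are illustrative):

```python
def _channel_allowed(channel_id: int, allowed: set[str]) -> bool:
    # Wildcard short-circuit: "*" means every channel. Checked before any
    # comparison against numeric IDs, which can never equal the literal "*".
    if "*" in allowed:
        return True
    return str(channel_id) in allowed
```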
Fixes #14920
Introduces a new plugin hook `pre_gateway_dispatch` fired once per
incoming MessageEvent in `_handle_message`, after the internal-event
guard but before the auth / pairing chain. Plugins may return a dict
to influence flow:
{"action": "skip", "reason": "..."} -> drop (no reply)
{"action": "rewrite", "text": "..."} -> replace event.text
{"action": "allow"} / None -> normal dispatch
Motivation: gateway-level message-flow patterns that don't fit cleanly
into any single adapter — e.g. listen-only group-chat windows (buffer
ambient messages, collapse on @mention), or human-handover silent
ingest (record messages while an owner handles the chat manually).
Today these require forking core; with this hook they can live in a
single profile-agnostic plugin.
Hook runs BEFORE auth so plugins can handle unauthorized senders
(e.g. customer-service handover ingest) without triggering the
pairing-code flow. Exceptions in plugin callbacks are caught and
logged; the first non-None action dict wins, remaining results are
ignored.
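A minimal sketch of the invocation block's shape (hook name and action contract
are from this commit; the plugin-manager call and logger are assumptions):

```python
# Inside _handle_message, after the internal-event guard, before auth/pairing.
# (call_hook is assumed to catch and log exceptions from individual plugins.)
for result in plugin_manager.call_hook("pre_gateway_dispatch", event=event):
    if not isinstance(result, dict):
        continue                      # None -> keep looking, then normal dispatch
    action = result.get("action")
    if action == "skip":
        logger.debug("plugin skipped message: %s", result.get("reason"))
        return                        # drop silently, no reply
    if action == "rewrite":
        event.text = result.get("text", event.text)
    break                             # first action dict wins; rest are ignored
```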
Includes:
- `VALID_HOOKS` entry + inline doc in `hermes_cli/plugins.py`
- Invocation block in `gateway/run.py::_handle_message`
- 5 new tests in `tests/gateway/test_pre_gateway_dispatch.py`
(skip, rewrite, allow, exception safety, internal-event bypass)
- 2 additional tests in `tests/hermes_cli/test_plugins.py`
- Table entry in `website/docs/user-guide/features/plugins.md`
Made-with: Cursor
- hermes_cli/auth.py: add _default_verify() with macOS Homebrew certifi
fallback (mirrors weixin 3a0ec1d93). Extend env var chain to include
REQUESTS_CA_BUNDLE so one env var works across httpx + requests paths.
- agent/model_metadata.py: add _resolve_requests_verify() reading
HERMES_CA_BUNDLE / REQUESTS_CA_BUNDLE / SSL_CERT_FILE in priority
order. Apply explicit verify= to all 6 requests.get callsites.
- Tests: 18 new unit tests + autouse platform pin on existing
TestResolveVerifyFallback to keep its "returns True" assertions
platform-independent.
Empirically verified against self-signed HTTPS server: requests honors
REQUESTS_CA_BUNDLE only; httpx honors SSL_CERT_FILE only. Hermes now
honors all three everywhere.
Triggered by Discord reports — Nous OAuth SSL failure on macOS
Homebrew Python; custom provider self-signed cert ignored despite
REQUESTS_CA_BUNDLE set in env.
Manual /compress crashed with 'LCMEngine' object has no attribute
'_align_boundary_forward' when any context-engine plugin was active.
The gateway handler reached into _align_boundary_forward and
_find_tail_cut_by_tokens on tmp_agent.context_compressor, but those
are ContextCompressor-specific — not part of the generic ContextEngine
ABC — so every plugin engine (LCM, etc.) raised AttributeError.
- Add optional has_content_to_compress(messages) to ContextEngine ABC
with a safe default of True (always attempt).
- Override it in the built-in ContextCompressor using the existing
private helpers — preserves exact prior behavior for 'compressor'.
- Rewrite gateway /compress preflight to call the ABC method, deleting
the private-helper reach-in.
- Add focus_topic to the ABC compress() signature. Make _compress_context
retry without focus_topic on TypeError so older strict-sig plugins
don't crash on manual /compress <focus>.
- Regression test with a fake ContextEngine subclass that only
implements the ABC (mirrors LCM's surface).
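Roughly, the new ABC hook and the retry look like this (method names are from
this commit; the base-class layout and compress() signature details are
assumptions):

```python
from abc import ABC

class ContextEngine(ABC):
    def has_content_to_compress(self, messages) -> bool:
        """Optional preflight; default True so plugin engines always attempt."""
        return True

# In _compress_context: tolerate plugins whose compress() predates focus_topic.
try:
    result = engine.compress(messages, focus_topic=focus_topic)
except TypeError:
    result = engine.compress(messages)
```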
Reported by @selfhostedsoul (Discord, Apr 22).
faster-whisper's device="auto" picks CUDA when ctranslate2's wheel
ships CUDA shared libs, even on hosts without the NVIDIA runtime
(libcublas.so.12 / libcudnn*). On those hosts the model often loads
fine but transcribe() fails at first dlopen, and the broken model
stays cached in the module-global — every subsequent voice message
in the gateway process fails identically until restart.
- Add _load_local_whisper_model() wrapper: try auto, catch missing-lib
errors, retry on device=cpu compute_type=int8.
- Wrap transcribe() with the same fallback: evict cached model, reload
on CPU, retry once. Required because the dlopen failure only surfaces
at first kernel launch, not at model construction.
- Narrow marker list (libcublas, libcudnn, libcudart, 'cannot be loaded',
'no kernel image is available', 'no CUDA-capable device', driver
mismatch). Deliberately excludes 'CUDA out of memory' and similar —
those are real runtime failures that should surface, not be silently
retried on CPU.
- Tests for load-time fallback, runtime fallback (with cached-model
eviction verified), and the OOM non-fallback path.
Reported via Telegram voice-message dumps on WSL2 hosts where libcublas
isn't installed by default.
Local llama.cpp servers (e.g. ggml-org/llama.cpp:full-cuda) fail the entire
request with HTTP 400 'Unable to generate parser for this template. ...
Unrecognized schema: "object"' when any tool schema contains shapes its
json-schema-to-grammar converter can't handle:
* 'type': 'object' without 'properties'
* bare string schema values ('additionalProperties: "object"')
* 'type': ['X', 'null'] arrays (nullable form)
Cloud providers accept these silently, so they ship from external MCP
servers (Atlassian, GCloud, Datadog) and from a couple of our own tools.
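For illustration, the kind of per-node repair involved (the real sanitizer,
described under Changes below, recurses into properties, items,
additionalProperties, anyOf/oneOf/allOf, and $defs and operates on a deep copy;
names and fallback choices here are assumptions):

```python
def _repair_node(node):
    """Repair one schema node into a shape llama.cpp's grammar converter accepts."""
    if isinstance(node, str):
        # Bare string schema value, e.g. additionalProperties: "object".
        return {"type": "object", "properties": {}} if node == "object" else {"type": node}
    if not isinstance(node, dict):
        return node
    fixed = dict(node)
    if isinstance(fixed.get("type"), list):
        # ['X', 'null'] nullable form -> keep the concrete type.
        concrete = [t for t in fixed["type"] if t != "null"]
        fixed["type"] = concrete[0] if concrete else "string"
    if fixed.get("type") == "object" and "properties" not in fixed:
        fixed["properties"] = {}   # 'object' without 'properties' triggers the 400
    return fixed
```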
Changes
- tools/schema_sanitizer.py: walks the finalized tool list right before it
leaves get_tool_definitions() and repairs the hostile shapes in a deep
copy. No-op on well-formed schemas. Recurses into properties, items,
additionalProperties, anyOf/oneOf/allOf, and $defs.
- model_tools.get_tool_definitions(): invoke the sanitizer as the last
step so all paths (built-in, MCP, plugin, dynamically-rebuilt) get
covered uniformly.
- tools/browser_cdp_tool.py, tools/mcp_tool.py: fix our own bare-object
schemas so sanitization isn't load-bearing for in-repo tools.
- tui_gateway/server.py: _load_enabled_toolsets() was passing
include_default_mcp_servers=False at runtime. That's the config-editing
variant (see PR #3252) — it silently drops every default MCP server
from the TUI's enabled_toolsets, which is why the TUI didn't hit the
llama.cpp crash (no MCP tools sent at all). Switch to True so TUI
matches CLI behavior.
Tests
tests/tools/test_schema_sanitizer.py (17 tests) covers the individual
failure modes, well-formed pass-through, deep-copy isolation, and
required-field pruning.
E2E: loaded the default 'hermes-cli' toolset with MCP discovery and
confirmed all 27 resolved tool schemas pass a llama.cpp-compatibility
walk (no 'object' node missing 'properties', no bare-string schema
values).
Wrap the existing version label in the welcome-banner panel title
('Hermes Agent v… · upstream … · local …') with an OSC-8 terminal
hyperlink pointing at the latest git tag's GitHub release page
(https://github.com/NousResearch/hermes-agent/releases/tag/<tag>).
Clickable in modern terminals (iTerm2, WezTerm, Windows Terminal,
GNOME Terminal, Kitty, etc.); degrades to plain text on terminals
without OSC-8 support. No new line added to the banner.
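The OSC-8 wrapping itself is a standard escape sequence; a minimal sketch (the
tag shown is illustrative):

```python
def osc8_link(text: str, url: str) -> str:
    """Wrap text in an OSC-8 hyperlink; unsupported terminals show plain text."""
    return f"\x1b]8;;{url}\x1b\\{text}\x1b]8;;\x1b\\"

title = osc8_link(
    "Hermes Agent v1.2.3",
    "https://github.com/NousResearch/hermes-agent/releases/tag/v1.2.3",
)
```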
New get_latest_release_tag() helper runs 'git describe --tags
--abbrev=0' in the Hermes checkout (3s timeout, per-process cache,
silent fallback for non-git/pip installs and forks without tags).
OpenRouter returns a 404 with the specific message
'No endpoints available matching your guardrail restrictions and data
policy. Configure: https://openrouter.ai/settings/privacy'
when a user's account-level privacy setting excludes the only endpoint
serving a model (e.g. DeepSeek V4 Pro, which today is hosted only by
DeepSeek's own endpoint that may log inputs).
Before this change we classified it as model_not_found, which was
misleading (the model exists) and triggered provider fallback (useless —
the same account setting applies to every OpenRouter call).
Now it classifies as a new FailoverReason.provider_policy_blocked with
retryable=False, should_fallback=False. The error body already contains
the fix URL, so the user still gets actionable guidance.
On ChatGPT Codex OAuth every gpt-5.x slug actually caps at 272,000 tokens,
but Hermes was resolving gpt-5.5 / gpt-5.4 to 1,050,000 (from models.dev)
because openai-codex aliases to the openai entry there. At 1.05M the
compressor never fires and requests hard-fail with 'context window
exceeded' around the real 272k boundary.
Verified live against chatgpt.com/backend-api/codex/models:
gpt-5.5, gpt-5.4, gpt-5.4-mini, gpt-5.3-codex, gpt-5.2-codex,
gpt-5.2, gpt-5.1-codex-max → context_window = 272000
Changes:
- agent/model_metadata.py:
* _fetch_codex_oauth_context_lengths() — probe the Codex /models
endpoint with the OAuth bearer token and read context_window per
slug (1h in-memory TTL).
* _resolve_codex_oauth_context_length() — prefer the live probe,
fall back to hardcoded _CODEX_OAUTH_CONTEXT_FALLBACK (all 272k).
* Wire into get_model_context_length() when provider=='openai-codex',
running BEFORE the models.dev lookup (which returns 1.05M). Result
persists via save_context_length() so subsequent lookups skip the
probe entirely.
* Fixed the now-wrong comment on the DEFAULT_CONTEXT_LENGTHS gpt-5.5
entry (400k was never right for Codex; it's the catch-all for
providers we can't probe live).
Tests (4 new in TestCodexOAuthContextLength):
- fallback table used when no token is available (no models.dev leakage)
- live probe overrides the fallback
- probe failure (non-200) falls back to hardcoded 272k
- non-codex providers (openrouter, direct openai) unaffected
Non-codex context resolution is unchanged — the Codex branch only fires
when provider=='openai-codex'.
* docs: browser CDP supervisor design (for upcoming PR)
Design doc ahead of implementation — dialog + iframe detection/interaction
via a persistent CDP supervisor. Covers backend capability matrix (verified
live 2026-04-23), architecture, lifecycle, policy, agent surface, PR split,
non-goals, and test plan.
Supersedes #12550.
No code changes in this commit.
* feat(browser): add persistent CDP supervisor for dialog + frame detection
Single persistent CDP WebSocket per Hermes task_id that subscribes to
Page/Runtime/Target events and maintains thread-safe state for pending
dialogs, frame tree, and console errors.
Supervisor lives in its own daemon thread running an asyncio loop;
external callers use sync API (snapshot(), respond_to_dialog()) that
bridges onto the loop.
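The thread-plus-loop bridge is a standard pattern; a minimal sketch of the
shape (class and method names simplified, CDP plumbing elided):

```python
import asyncio
import threading

class Supervisor:
    """Owns one CDP WebSocket; runs its asyncio loop on a daemon thread."""

    def __init__(self) -> None:
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._run_loop, daemon=True)
        self._thread.start()

    def _run_loop(self) -> None:
        asyncio.set_event_loop(self._loop)
        self._loop.run_forever()

    def respond_to_dialog(self, accept: bool, timeout: float = 10.0):
        # Sync callers hop onto the loop thread and block for the result.
        fut = asyncio.run_coroutine_threadsafe(self._respond(accept), self._loop)
        return fut.result(timeout=timeout)

    async def _respond(self, accept: bool):
        ...  # send Page.handleJavaScriptDialog over the CDP WebSocket
```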
Auto-attaches to OOPIF child targets via Target.setAutoAttach{flatten:true}
and enables Page+Runtime on each so iframe-origin dialogs surface through
the same supervisor.
Dialog policies: must_respond (default, 300s safety timeout),
auto_dismiss, auto_accept.
Frame tree capped at 30 entries + OOPIF depth 2 to keep snapshot
payloads bounded on ad-heavy pages.
E2E verified against real Chrome via smoke test — detects + responds
to main-frame alerts, iframe-contentWindow alerts, preserves frame
tree, graceful no-dialog error path, clean shutdown.
No agent-facing tool wiring in this commit (comes next).
* feat(browser): add browser_dialog tool wired to CDP supervisor
Agent-facing response-only tool. Schema:
action: 'accept' | 'dismiss' (required)
prompt_text: response for prompt() dialogs (optional)
dialog_id: disambiguate when multiple dialogs queued (optional)
Handler:
SUPERVISOR_REGISTRY.get(task_id).respond_to_dialog(...)
check_fn shares _browser_cdp_check with browser_cdp so both surface and
hide together. When no supervisor is attached (Camofox, default
Playwright, or no browser session started yet), tool is hidden; if
somehow invoked it returns a clear error pointing the agent to
browser_navigate / /browser connect.
Registered in _HERMES_CORE_TOOLS and the browser / hermes-acp /
hermes-api-server toolsets alongside browser_cdp.
* feat(browser): wire CDP supervisor into session lifecycle + browser_snapshot
Supervisor lifecycle:
* _get_session_info lazy-starts the supervisor after a session row is
materialized — covers every backend code path (Browserbase, cdp_url
override, /browser connect, future providers) with one hook.
* cleanup_browser(task_id) stops the supervisor for that task first
(before the backend tears down CDP).
* cleanup_all_browsers() calls SUPERVISOR_REGISTRY.stop_all().
* /browser connect eagerly starts the supervisor for task 'default'
so the first snapshot already shows pending_dialogs.
* /browser disconnect stops the supervisor.
CDP URL resolution for the supervisor:
1. BROWSER_CDP_URL / browser.cdp_url override.
2. Fallback: session_info['cdp_url'] from cloud providers (Browserbase).
browser_snapshot merges supervisor state (pending_dialogs + frame_tree)
into its JSON output when a supervisor is active — the agent reads
pending_dialogs from the snapshot it already requests, then calls
browser_dialog to respond. No extra tool surface.
Config defaults:
* browser.dialog_policy: 'must_respond' (new)
* browser.dialog_timeout_s: 300 (new)
No version bump — new keys deep-merge into existing browser section.
Deadlock fix in supervisor event dispatch:
* _on_dialog_opening and _on_target_attached used to await CDP calls
while the reader was still processing an event — but only the reader
can set the response Future, so the call timed out.
* Both now fire asyncio.create_task(...) so the reader stays pumping.
* auto_dismiss/auto_accept now actually close the dialog immediately.
Tests (tests/tools/test_browser_supervisor.py, 11 tests, real Chrome):
* supervisor start/snapshot
* main-frame alert detection + dismiss
* iframe.contentWindow alert
* prompt() with prompt_text reply
* respond with no pending dialog -> clean error
* auto_dismiss clears on event
* registry idempotency
* registry stop -> snapshot reports inactive
* browser_dialog tool no-supervisor error
* browser_dialog invalid action
* browser_dialog end-to-end via tool handler
xdist-safe: chrome_cdp fixture uses a per-worker port.
Skipped when google-chrome/chromium isn't installed.
* docs(browser): document browser_dialog tool + CDP supervisor
- user-guide/features/browser.md: new browser_dialog section with
workflow, availability gate, and dialog_policy table
- reference/tools-reference.md: row for browser_dialog, tool count
bumped 53 -> 54, browser tools count 11 -> 12
- reference/toolsets-reference.md: browser_dialog added to browser
toolset row with note on pending_dialogs / frame_tree snapshot fields
Full design doc lives at
developer-guide/browser-supervisor.md (committed earlier).
* fix(browser): reconnect loop + recent_dialogs for Browserbase visibility
Found via a Browserbase E2E test that revealed three production-critical issues:
1. **Supervisor WebSocket drops when other clients disconnect.** Browserbase's
CDP proxy tears down our long-lived WebSocket whenever a short-lived
client (e.g. agent-browser CLI's per-command CDP connection) disconnects.
Fixed with a reconnecting _run loop that re-attaches with exponential
backoff on drops. _page_session_id and _child_sessions are reset on each
reconnect; pending_dialogs and frames are preserved across reconnects.
2. **Browserbase auto-dismisses dialogs server-side within ~10ms.** Their
Playwright-based CDP proxy dismisses alert/confirm/prompt before our
Page.handleJavaScriptDialog call can respond. So pending_dialogs is
empty by the time the agent reads a snapshot on Browserbase.
Added a recent_dialogs ring buffer (capacity 20) that retains a
DialogRecord for every dialog that opened, with a closed_by tag:
* 'agent' — agent called browser_dialog
* 'auto_policy' — local auto_dismiss/auto_accept fired
* 'watchdog' — must_respond timeout auto-dismissed (300s default)
* 'remote' — browser/backend closed it on us (Browserbase)
Agents on Browserbase now see the dialog history with closed_by='remote'
so they at least know a dialog fired, even though they couldn't respond.
3. **Page.javascriptDialogClosed matching bug.** The event doesn't include a
'message' field (CDP spec has only 'result' and 'userInput') but our
_on_dialog_closed was matching on message. Fixed to match by session_id
+ oldest-first, with a safety assumption that only one dialog is in
flight per session (the JS thread is blocked while a dialog is up).
Docs + tests updated:
* browser.md: new availability matrix showing the three backends and
which mode (pending / recent / response) each supports
* developer-guide/browser-supervisor.md: three-field snapshot schema
with closed_by semantics
* test_browser_supervisor.py: +test_recent_dialogs_ring_buffer (12/12
passing against real Chrome)
E2E verified both backends:
* Local Chrome via /browser connect: detect + respond full workflow
(smoke_supervisor.py all 7 scenarios pass)
* Browserbase: detect via recent_dialogs with closed_by='remote'
(smoke_supervisor_browserbase_v2.py passes)
Camofox remains out of scope (REST-only, no CDP) — tracked for
upstream PR 3.
* feat(browser): XHR bridge for dialog response on Browserbase (FIXED)
Browserbase's CDP proxy auto-dismisses native JS dialogs within ~10ms, so
Page.handleJavaScriptDialog calls lose the race. Solution: bypass native
dialogs entirely.
The supervisor now injects Page.addScriptToEvaluateOnNewDocument with a
JavaScript override for window.alert/confirm/prompt. Those overrides
perform a synchronous XMLHttpRequest to a magic host
('hermes-dialog-bridge.invalid'). We intercept those XHRs via Fetch.enable
with a requestStage=Request pattern.
Flow when a page calls alert('hi'):
1. window.alert override intercepts, builds XHR GET to
http://hermes-dialog-bridge.invalid/?kind=alert&message=hi
2. Sync XHR blocks the page's JS thread (mirrors real dialog semantics)
3. Fetch.requestPaused fires on our WebSocket; supervisor surfaces
it as a pending dialog with bridge_request_id set
4. Agent reads pending_dialogs from browser_snapshot, calls browser_dialog
5. Supervisor calls Fetch.fulfillRequest with JSON body:
{accept: true|false, prompt_text: '...', dialog_id: 'd-N'}
6. The injected script parses the body, returns the appropriate value
from the override (undefined for alert, bool for confirm, string|null
for prompt)
This works identically on Browserbase AND local Chrome — no native dialog
ever fires, so Browserbase's auto-dismiss has nothing to race. Dialog
policies (must_respond / auto_dismiss / auto_accept) all still work.
Bridge is installed on every attached session (main page + OOPIF child
sessions) so iframe dialogs are captured too.
Native-dialog path kept as a fallback for backends that don't auto-dismiss
(so a page that somehow bypasses our override — e.g. iframes that load
after Fetch.enable but before the init-script runs — still gets observed
via Page.javascriptDialogOpening).
E2E VERIFIED:
* Local Chrome: 13/13 pytest tests green (12 original + new
test_bridge_captures_prompt_and_returns_reply_text that asserts
window.__ret === 'AGENT-SUPPLIED-REPLY' after agent responds)
* Browserbase: smoke_bb_bridge_v2.py runs 4/4 PASS:
- alert('BB-ALERT-MSG') dismiss → page.alert_ret = undefined ✓
- prompt('BB-PROMPT-MSG', 'default-xyz') accept with 'AGENT-REPLY'
→ page.prompt_ret === 'AGENT-REPLY' ✓
- confirm('BB-CONFIRM-MSG') accept → page.confirm_ret === true ✓
- confirm('BB-CONFIRM-MSG') dismiss → page.confirm_ret === false ✓
Docs updated in browser.md and developer-guide/browser-supervisor.md —
availability matrix now shows Browserbase at full parity with local
Chrome for both detection and response.
* feat(browser): cross-origin iframe interaction via browser_cdp(frame_id=...)
Adds iframe interaction to the CDP supervisor PR (was queued as PR 2).
Design: browser_cdp gets an optional frame_id parameter. When set, the
tool looks up the frame in the supervisor's frame_tree, grabs its child
cdp_session_id (OOPIF session), and dispatches the CDP call through the
supervisor's already-connected WebSocket via run_coroutine_threadsafe.
Why not stateless: on Browserbase, each fresh browser_cdp WebSocket
must re-negotiate against a signed connectUrl. The session info carries
a specific URL that can expire while the supervisor's long-lived
connection stays valid. Routing via the supervisor sidesteps this.
Agent workflow:
1. browser_snapshot → frame_tree.children[] shows OOPIFs with is_oopif=true
2. browser_cdp(method='Runtime.evaluate', frame_id=<OOPIF frame_id>,
params={'expression': 'document.title', 'returnByValue': True})
3. Supervisor dispatches the call on the OOPIF's child session
Supervisor state fixes needed along the way:
* _on_frame_detached now skips reason='swap' (frame migrating processes)
* _on_frame_detached also skips when the frame is an OOPIF with a live
child session — Browserbase fires spurious remove events when a
same-origin iframe gets promoted to OOPIF
* _on_target_detached clears cdp_session_id but KEEPS the frame record
so the agent still sees the OOPIF in frame_tree during transient
session flaps
E2E VERIFIED on Browserbase (smoke_bb_iframe_agent_path.py):
browser_cdp(method='Runtime.evaluate',
params={'expression': 'document.title', 'returnByValue': True},
frame_id=<OOPIF>)
→ {'success': True, 'result': {'value': 'Example Domain'}}
The iframe is <iframe src='https://example.com/'> inside a top-level
data: URL page on a real Browserbase session. The agent Runtime.evaluates
INSIDE the cross-origin iframe and gets example.com's title back.
Tests (tests/tools/test_browser_supervisor.py — 16 pass total):
* test_browser_cdp_frame_id_routes_via_supervisor — injects fake OOPIF,
verifies routing via supervisor, Runtime.evaluate returns 1+1=2
* test_browser_cdp_frame_id_missing_supervisor — clean error when no
supervisor attached
* test_browser_cdp_frame_id_not_in_frame_tree — clean error on bad
frame_id
Docs (browser.md and developer-guide/browser-supervisor.md) updated with
the iframe workflow, availability matrix now shows OOPIF eval as shipped
for local Chrome + Browserbase.
* test(browser): real-OOPIF E2E verified manually + chrome_cdp uses --site-per-process
When asked 'did you test the iframe stuff' I had only done a mocked
pytest (fake injected OOPIF) plus a Browserbase E2E. Closed the
local-Chrome real-OOPIF gap by writing /tmp/dialog-iframe-test/
smoke_local_oopif.py:
* 2 http servers on different hostnames (localhost:18905 + 127.0.0.1:18906)
* Chrome with --site-per-process so the cross-origin iframe becomes a
real OOPIF in its own process
* Navigate, find OOPIF in supervisor.frame_tree, call
browser_cdp(method='Runtime.evaluate', frame_id=<OOPIF>) which routes
through the supervisor's child session
* Asserts iframe document.title === 'INNER-FRAME-XYZ' (from the
inner page, retrieved via OOPIF eval)
PASSED on 2026-04-23.
Tried to embed this as a pytest but hit an asyncio version quirk between
venv (3.11) and the system python (3.13) — Page.navigate hangs in the
pytest harness but works in standalone. Left a self-documenting skip
test that points to the smoke script + describes the verification.
chrome_cdp fixture now passes --site-per-process so future iframe tests
can rely on OOPIF behavior.
Result: 16 pass + 1 documented-skip = 17 tests in
tests/tools/test_browser_supervisor.py.
* docs(browser): add dialog_policy + dialog_timeout_s to configuration.md, fix tool count
Pre-merge docs audit revealed two gaps:
1. user-guide/configuration.md browser config example was missing the
two new dialog_* knobs. Added with a short table explaining
must_respond / auto_dismiss / auto_accept semantics and a link to
the feature page for the full workflow.
2. reference/tools-reference.md header said '54 built-in tools' — real
count on main is 54, this branch adds browser_dialog so it's 55.
Fixed the header. (browser count was already correctly bumped
11 -> 12 in the earlier docs commit.)
No code changes.
* feat(config): make tool output truncation limits configurable
Port from anomalyco/opencode#23770: expose a new `tool_output` config
section so users can tune the hardcoded truncation caps that apply to
terminal output and read_file pagination.
Three knobs under `tool_output`:
- max_bytes (default 50_000) — terminal stdout/stderr cap
- max_lines (default 2000) — read_file pagination cap
- max_line_length (default 2000) — per-line cap in line-numbered view
All three keep their existing hardcoded values as defaults, so behaviour
is unchanged when the section is absent. Power users on big-context
models can raise them; small-context local models can lower them.
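A sketch of the defensive read (accessor names are from this commit; the
config plumbing shown is an assumption):

```python
_DEFAULTS = {"max_bytes": 50_000, "max_lines": 2000, "max_line_length": 2000}

def _read_limit(config: dict, key: str) -> int:
    section = config.get("tool_output")
    if not isinstance(section, dict):
        return _DEFAULTS[key]              # missing or malformed section
    try:
        value = int(section.get(key, _DEFAULTS[key]))
    except (TypeError, ValueError):
        return _DEFAULTS[key]              # None / wrong type
    return value if value > 0 else _DEFAULTS[key]   # reject zero / negative

def get_max_bytes(config: dict) -> int:
    return _read_limit(config, "max_bytes")
```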
Implementation:
- New `tools/tool_output_limits.py` reads the section with defensive
fallback (missing/invalid values → defaults, never raises).
- `tools/terminal_tool.py` MAX_OUTPUT_CHARS now comes from
get_max_bytes().
- `tools/file_operations.py` normalize_read_pagination() and
_add_line_numbers() now pull the limits at call time.
- `hermes_cli/config.py` DEFAULT_CONFIG gains the `tool_output` section
so `hermes setup` writes defaults into fresh configs.
- Docs page `user-guide/configuration.md` gains a "Tool Output
Truncation Limits" section with large-context and small-context
example configs.
Tests (18 new in tests/tools/test_tool_output_limits.py):
- Default resolution with missing / malformed / non-dict config.
- Full and partial user overrides.
- Coercion of bad values (None, negative, wrong type, str int).
- Shortcut accessors delegate correctly.
- DEFAULT_CONFIG exposes the section with the right defaults.
- Integration: normalize_read_pagination clamps to the configured
max_lines.
* feat(skills): add design-md skill for Google's DESIGN.md spec
Built-in skill under skills/creative/ that teaches the agent to author,
lint, diff, and export DESIGN.md files — Google's open-source
(Apache-2.0) format for describing a visual identity to coding agents.
Covers:
- YAML front matter + markdown body anatomy
- Full token schema (colors, typography, rounded, spacing, components)
- Canonical section order + duplicate-heading rejection
- Component property whitelist + variants-as-siblings pattern
- CLI workflow via 'npx @google/design.md' (lint/diff/export/spec)
- Lint rule reference including WCAG contrast checks
- Common YAML pitfalls (quoted hex, negative dimensions, dotted refs)
- Starter template at templates/starter.md
Package verified live on npm (@google/design.md@0.1.1).
Two fixes on top of the fuzzy-@ branch:
(1) Rebase artefact: re-apply only the fuzzy additions on top of
fresh `tui_gateway/server.py`. The earlier commit was cut from a
base 58 commits behind main and clobbered ~170 lines of
voice.toggle / voice.record handlers and the gateway crash hooks
(`_panic_hook`, `_thread_panic_hook`). Reset server.py to
origin/main and re-add only:
- `_FUZZY_*` constants + `_list_repo_files` + `_fuzzy_basename_rank`
- the new fuzzy branch in the `complete.path` handler
(2) Path scoping (Copilot review): `git ls-files` returns repo-root-
relative paths, but completions need to resolve under the gateway's
cwd. When hermes is launched from a subdirectory, the previous
code surfaced `@file:apps/web/src/foo.tsx` even though the agent
would resolve that relative to `apps/web/` and miss. Fix:
- `git -C root rev-parse --show-toplevel` to get repo top
- `git -C top ls-files …` for the listing
- `os.path.relpath(top + p, root)` per result, dropping anything
starting with `../` so the picker stays scoped to cwd-and-below
(matches Cmd-P workspace semantics)
`apps/web/src/foo.tsx` ends up as `@file:src/foo.tsx` from inside
`apps/web/`, and sibling subtrees + parent-of-cwd files don't leak.
New test `test_fuzzy_paths_relative_to_cwd_inside_subdir` builds a
3-package mono-repo, runs from `apps/web/`, and verifies completion
paths are subtree-relative + outside-of-cwd files don't appear.
Copilot review threads addressed: #3134675504 (path scoping),
#3134675532 (`voice.toggle` regression), #3134675541 (`voice.record`
regression — both were stale-base artefacts, not behavioural changes).
Typing `@appChrome` in the composer should surface
`ui-tui/src/components/appChrome.tsx` without requiring the user to
first type the full directory path — matches the Cmd-P behaviour
users expect from modern editors.
The gateway's `complete.path` handler was doing a plain
`os.listdir(".")` + `startswith` prefix match, so basenames only
resolved inside the current working directory. This reworks it to:
- enumerate repo files via `git ls-files -z --cached --others
--exclude-standard` (fast, honours `.gitignore`); fall back to a
bounded `os.walk` that skips common vendor / build dirs when the
working dir isn't a git repo. Results cached per-root with a 5s
TTL so rapid keystrokes don't respawn git processes.
- rank basenames with a 5-tier scorer: exact → prefix → camelCase
/ word-boundary → substring → subsequence. Shorter basenames win
ties; shorter rel paths break basename-length ties.
- only take the fuzzy branch when the query is bare (no `/`), is a
context reference (`@...`), and isn't `@folder:` — path-ish
queries and folder tags fall through to the existing
directory-listing path so explicit navigation intent is
preserved.
Completion rows now carry `display = basename`,
`meta = directory`, so the picker renders
`appChrome.tsx ui-tui/src/components` on one row (basename bold,
directory dim) — the meta column was previously "dir" / "" and is
a more useful signal for fuzzy hits.
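A rough sketch of the tiered ranking (tier order is from this commit; the
scoring values and camelCase split are illustrative):

```python
import re

def _fuzzy_basename_rank(query: str, basename: str):
    """Lower is better; None means no match."""
    q, b = query.lower(), basename.lower()
    if q == b:
        return 0                                   # exact
    if b.startswith(q):
        return 1                                   # prefix
    # camelCase / word-boundary pieces: "appChrome.tsx" -> app, Chrome, tsx
    words = re.split(r"[^A-Za-z0-9]+|(?<=[a-z0-9])(?=[A-Z])", basename)
    if any(w.lower().startswith(q) for w in words if w):
        return 2
    if q in b:
        return 3                                   # substring
    it = iter(b)
    if all(ch in it for ch in q):
        return 4                                   # subsequence
    return None
```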
Reported by Ben Barclay during the TUI v2 blitz test.
The TUI had drifted from the CLI's voice model in two ways:
- /voice on was lighting up the microphone immediately and Ctrl+B was
interpreted as a mode toggle. The CLI separates the two: /voice on
just flips the umbrella bit, recording only starts once the user
presses Ctrl+B, which also sets _voice_continuous so the VAD loop
auto-restarts until the user presses Ctrl+B again or three silent
cycles pass.
- /voice tts was missing entirely, so users couldn't turn agent reply
speech on/off from inside the TUI.
This commit brings the TUI to parity.
Python
- hermes_cli/voice.py: continuous-mode API (start_continuous,
stop_continuous, is_continuous_active) layered on the existing PTT
wrappers. The silence callback transcribes, fires on_transcript,
tracks consecutive no-speech cycles, and auto-restarts — mirroring
cli.py:_voice_stop_and_transcribe + _restart_recording.
- tui_gateway/server.py:
- voice.toggle now supports on / off / tts / status. The umbrella
bit lives in HERMES_VOICE + display.voice_enabled; tts lives in
HERMES_VOICE_TTS + display.voice_tts. /voice off also tears down
any active continuous loop so a toggle-off really releases the
microphone.
- voice.record start/stop now drives start_continuous/stop_continuous.
start is refused with a clear error when the mode is off, matching
cli.py:handle_voice_record's early return on `not _voice_mode`.
- New voice.transcript / voice.status events emit through
_voice_emit (remembers the sid that last enabled the mode so
events land in the right session).
TypeScript
- gatewayTypes.ts: voice.status + voice.transcript event
discriminants; VoiceToggleResponse gains tts; VoiceRecordResponse
gains status for the new "started/stopped" responses.
- interfaces.ts: GatewayEventHandlerContext gains composer.setInput +
submission.submitRef + voice.{setRecording, setProcessing,
setVoiceEnabled}; InputHandlerContext.voice gains enabled +
setVoiceEnabled for the mode-aware Ctrl+B handler.
- createGatewayEventHandler.ts: voice.status drives REC/STT badges;
voice.transcript auto-submits when the composer is empty (CLI
_pending_input.put parity) and appends when a draft is in flight.
no_speech_limit flips voice off + sys line.
- useInputHandlers.ts: Ctrl+B now calls voice.record (start/stop),
not voice.toggle, and nudges the user with a sys line when the
mode is off instead of silently flipping it on.
- useMainApp.ts: wires the new event-handler context fields.
- slash/commands/session.ts: /voice handles on / off / tts / status
with CLI-matching output ("voice: mode on · tts off").
Backward compat preserved for voice.record (was always PTT shape;
gateway still honours start/stop with mode-gating added).
tui_gateway/server.py:3486/3491/3509 imports start_recording,
stop_and_transcribe, and speak_text from hermes_cli.voice, but the
module never existed (not in git history — never shipped, never
deleted). Every voice.record / voice.tts RPC call hit the ImportError
branch and the TUI surfaced it as "voice module not available — install
audio dependencies" even on boxes with sounddevice / faster-whisper /
numpy installed.
Adds a thin wrapper on top of tools.voice_mode (recording +
transcription) and tools.tts_tool (text-to-speech):
- start_recording() — idempotent; stores the active AudioRecorder in a
module-global guarded by a Lock so repeat Ctrl+B presses don't fight
over the mic.
- stop_and_transcribe() — returns None for no-op / no-speech /
Whisper-hallucination cases so the TUI's existing "no speech detected"
path keeps working unchanged.
- speak_text(text) — lazily imports tts_tool (optional provider SDKs
stay unloaded until the first /voice tts call), parses the tool's
JSON result, and plays the audio via play_audio_file.
Paired with the Ctrl+B keybinding fix in the prior commit, the TUI
voice pipeline now works end-to-end for the first time.
Fixes a broader class of 'tools.function.parameters is not a valid
moonshot flavored json schema' errors on Nous / OpenRouter aggregators
routing to moonshotai/kimi-k2.6 with MCP tools loaded.
## Moonshot sanitizer (agent/moonshot_schema.py, new)
Model-name-routed (not base-URL-routed) so Nous / OpenRouter users are
covered alongside api.moonshot.ai. Applied in
ChatCompletionsTransport.build_kwargs when is_moonshot_model(model).
Two repairs:
1. Fill missing 'type' on every property / items / anyOf-child schema
node (structural walk — only schema-position dicts are touched, not
container maps like properties/$defs).
2. Strip 'type' at anyOf parents; Moonshot rejects it.
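Sketch of the two repairs on a schema-position node (the real walk also skips
container maps like properties/$defs when deciding what counts as a schema
node; the fallback type chosen here is an assumption):

```python
def _repair_schema_node(node: dict) -> dict:
    fixed = dict(node)
    if "anyOf" in fixed:
        fixed.pop("type", None)   # repair 2: Moonshot rejects 'type' at anyOf parents
        fixed["anyOf"] = [_repair_schema_node(c) if isinstance(c, dict) else c
                          for c in fixed["anyOf"]]
        return fixed
    if "type" not in fixed:
        # repair 1: fill the missing 'type'; 'object' when properties exist.
        fixed["type"] = "object" if "properties" in fixed else "string"
    if isinstance(fixed.get("properties"), dict):
        fixed["properties"] = {k: _repair_schema_node(v) if isinstance(v, dict) else v
                               for k, v in fixed["properties"].items()}
    if isinstance(fixed.get("items"), dict):
        fixed["items"] = _repair_schema_node(fixed["items"])
    return fixed
```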
## MCP normalizer hardened (tools/mcp_tool.py)
Draft-07 $ref rewrite from PR #14802 now also does:
- coerce missing / null 'type' on object-shaped nodes (salvages #4897)
- prune 'required' arrays to names that exist in 'properties'
(salvages #4651; Gemini 400s on dangling required)
- apply recursively, not just top-level
These repairs are provider-agnostic so the same MCP schema is valid on
OpenAI, Anthropic, Gemini, and Moonshot in one pass.
## Crash fix: safe getattr for Tool.inputSchema
_convert_mcp_schema now uses getattr(t, 'inputSchema', None) so MCP
servers whose Tool objects omit the attribute entirely no longer abort
registration (salvages #3882).
## Validation
- tests/agent/test_moonshot_schema.py: 27 new tests (model detection,
missing-type fill, anyOf-parent strip, non-mutation, real-world MCP
shape)
- tests/tools/test_mcp_tool.py: 7 new tests (missing / null type,
required pruning, nested repair, safe getattr)
- tests/agent/transports/test_chat_completions.py: 2 new integration
tests (Moonshot route sanitizes, non-Moonshot route doesn't)
- Targeted suite: 49 passed
- E2E via execute_code with a realistic MCP tool carrying all three
Moonshot rejection modes + dangling required + draft-07 refs:
sanitizer produces a schema valid on Moonshot and Gemini
Cron now resolves its toolset from the same per-platform config the
gateway uses — `_get_platform_tools(cfg, 'cron')` — instead of blindly
loading every default toolset. Existing cron jobs without a per-job
override automatically lose `moa`, `homeassistant`, and `rl` (the
`_DEFAULT_OFF_TOOLSETS` set), which stops the "surprise $4.63
mixture_of_agents run" class of bug (Norbert, Discord).
Precedence inside `run_job`:
1. per-job `enabled_toolsets` (PR #14767 / #6130) — wins if set
2. `_get_platform_tools(cfg, 'cron')` — new, the blanket gate
3. `None` fallback (legacy) — only on resolver exception
Changes:
- hermes_cli/platforms.py: register 'cron' with default_toolset
'hermes-cron'
- toolsets.py: add 'hermes-cron' toolset (mirrors 'hermes-cli';
`_get_platform_tools` then filters via `_DEFAULT_OFF_TOOLSETS`)
- cron/scheduler.py: add `_resolve_cron_enabled_toolsets(job, cfg)`,
call it at the `AIAgent(...)` kwargs site
- tests/cron/test_scheduler.py: replace the 'None when not set' test
(outdated contract) with an invariant ('moa not in default cron
toolset') + new per-job-wins precedence test
- tests/hermes_cli/test_tools_config.py: mark 'cron' as non-messaging
in the gateway-toolset-coverage test
Themes and plugins can now pull off arbitrary dashboard reskins (cockpit
HUD, retro terminal, etc.) without touching core code.
Themes gain four new fields:
- layoutVariant: standard | cockpit | tiled — shell layout selector
- assets: {bg, hero, logo, crest, sidebar, header, custom: {...}} —
artwork URLs exposed as --theme-asset-* CSS vars
- customCSS: raw CSS injected as a scoped <style> tag on theme apply
(32 KiB cap, cleaned up on theme switch)
- componentStyles: per-component CSS-var overrides (clipPath,
borderImage, background, boxShadow, ...) for card/header/sidebar/
backdrop/tab/progress/badge/footer/page
Plugin manifests gain three new fields:
- tab.override: replaces a built-in route instead of adding a tab
- tab.hidden: register component + slots without adding a nav entry
- slots: declares shell slots the plugin populates
10 named shell slots: backdrop, header-left/right/banner, sidebar,
pre-main, post-main, footer-left/right, overlay. Plugins register via
window.__HERMES_PLUGINS__.registerSlot(name, slot, Component). A
<PluginSlot> React helper is exported on the plugin SDK.
Ships a full demo at plugins/strike-freedom-cockpit/ — theme YAML +
slot-only plugin that reproduces a Gundam cockpit dashboard: MS-STATUS
sidebar with live telemetry, COMPASS crest in header, notched card
corners via componentStyles, scanline overlay via customCSS, gold/cyan
palette, Orbitron typography.
Validation:
- 15 new tests in test_web_server.py covering every extended field
- tests/hermes_cli/: 2615 passed (3 pre-existing unrelated failures)
- tsc -b --noEmit: clean
- vite build: 418 kB bundle, ~2 kB delta for slots/theme extensions
Co-authored-by: Teknium <p@nousresearch.com>
- AUTHOR_MAP entry for 130918800+devorun for #6636 attribution
- test_moa_defaults: was a change-detector tied to the exact frontier
model list — flips red on every OpenRouter churn. Rewritten as an
invariant (non-empty, valid vendor/model slugs).
The agent-facing image_generate tool only passes prompt + aspect_ratio to
provider.generate() (see tools/image_generation_tool.py:953). The editing
block (reference_images / edit_image kwargs) could never fire from the
tool surface, and the xAI edits endpoint is /images/edits with a
different payload shape anyway — not /images/generations as submitted.
- Remove reference_images / edit_image kwargs handling from generate()
- Remove matching test_with_reference_images case
- Update docstring + plugin.yaml description to text-to-image only
- Surface resolution in the success extras
Follow-up to PR #14547. Tests: 18/18 pass.
Follow-up to Magaav's safe sync policy. Two gaps in the canonicalizer
caused false diffs or silent drift:
1. discord.py's AppCommand.to_dict() omits nsfw, dm_permission, and
default_member_permissions — those live only on attributes. The
canonicalizer was reading them via payload.get() and getting defaults
(False/True/None), while the desired side from Command.to_dict(tree)
had the real values. Any command using non-default permissions
false-diffed on every startup. Pull them from the AppCommand
attributes via _existing_command_to_payload().
2. contexts and integration_types weren't canonicalized at all, so
drift in either was silently ignored. Added both to
_canonicalize_app_command_payload (sorted for stable compare).
Also normalized default_member_permissions to str-or-None since the
server emits strings but discord.py stores ints locally.
Added regression tests for both gaps.
Replaces blind tree.sync() on every Discord reconnect with a diff-based
reconcile. In safe mode (default), fetch existing global commands,
compare desired vs existing payloads, skip unchanged, PATCH changed,
recreate when non-patchable metadata differs, POST missing, and delete
stale commands one-by-one. Keeps 'bulk' for legacy behavior and 'off'
to skip startup sync entirely.
Fixes restart-heavy workflows that burn Discord's command write budget
and can surface 429s when iterating on native slash commands.
Env var: DISCORD_COMMAND_SYNC_POLICY (safe|bulk|off), default 'safe'.
Co-authored-by: Codex <codex@openai.invalid>
- _stdio_pids: set → Dict[int,str] tracks pid→server_name
- SIGTERM-first with 2s grace before SIGKILL escalation
- hasattr guard for SIGKILL on platforms without it
- Updated tests for dict-based tracking and 3-phase kill sequence
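The 3-phase kill sequence is standard POSIX handling; a minimal sketch
(helper name and timings are illustrative):

```python
import os
import signal
import time

def _kill_stdio_server(pid: int, grace: float = 2.0) -> None:
    try:
        os.kill(pid, signal.SIGTERM)          # phase 1: polite ask
    except ProcessLookupError:
        return                                # already gone
    deadline = time.monotonic() + grace       # phase 2: grace period
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)                   # still alive?
        except ProcessLookupError:
            return
        time.sleep(0.05)
    if hasattr(signal, "SIGKILL"):            # not available on all platforms
        try:
            os.kill(pid, signal.SIGKILL)      # phase 3: escalate
        except ProcessLookupError:
            pass
```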
The original regex only matched relative paths (./foo/.env or bare
.env), so the exact command from the bug report —
`cp /opt/data/.env.local /opt/data/.env` — did not trigger approval.
Broaden the leading-path prefix to accept an absolute leading slash
alongside ./ and ../, and add regressions for the bug-report command
and its redirection variant.
cmd_update no longer SIGKILLs in-flight agent runs, and users get
'still working' status every 3 min instead of 10. Two long-standing
sources of '@user — agent gives up mid-task' reports on Telegram and
other gateways.
Drain-aware update:
- New helper hermes_cli.gateway._graceful_restart_via_sigusr1(pid,
drain_timeout) sends SIGUSR1 to the gateway and polls os.kill(pid,
0) until the process exits or the budget expires.
- cmd_update's systemd loop now reads MainPID via 'systemctl show
--property=MainPID --value' and tries the graceful path first. The
gateway's existing SIGUSR1 handler -> request_restart(via_service=
True) -> drain -> exit(75) is wired in gateway/run.py and is
respawned by systemd's Restart=on-failure (and the explicit
RestartForceExitStatus=75 on newer units).
- Falls back to 'systemctl restart' when MainPID is unknown, the
drain budget elapses, or the unit doesn't respawn after exit (older
units missing Restart=on-failure). Old install behavior preserved.
- Drain budget = max(restart_drain_timeout, 30s) + 15s margin so the
drain loop in run_agent + final exit have room before fallback
fires. Composes with #14728's tool-subprocess reaping.
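A sketch of the helper's poll loop described in the first bullet above (the
name is from this commit; poll interval and error handling are assumptions):

```python
import os
import signal
import time

def _graceful_restart_via_sigusr1(pid: int, drain_timeout: float) -> bool:
    """Signal the gateway to drain and exit; True if it exited within budget."""
    try:
        os.kill(pid, signal.SIGUSR1)
    except (ProcessLookupError, PermissionError):
        return False
    deadline = time.monotonic() + drain_timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)                    # probe: raises once the pid is gone
        except ProcessLookupError:
            return True
        time.sleep(1.0)
    return False                               # caller falls back to systemctl restart
```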
Notification interval:
- agent.gateway_notify_interval default 600 -> 180.
- HERMES_AGENT_NOTIFY_INTERVAL env-var fallback in gateway/run.py
matched.
- 9-minute weak-model spinning runs now ping at 3 min and 6 min
instead of 27 seconds before completion, removing the 'is the bot
dead?' reflex that drives gateway-restart cycles.
Tests:
- Two new tests in tests/hermes_cli/test_update_gateway_restart.py:
one asserts SIGUSR1 is sent and 'systemctl restart' is NOT called
when MainPID is known and the helper succeeds; one asserts the
fallback fires when the helper returns False.
- E2E: spawned detached bash processes confirm the helper returns
True on SIGUSR1-handling exit (~0.5s) and False on SIGUSR1-ignoring
processes (timeout). Verified non-existent PID and pid=0 edge cases.
- 41/41 in test_update_gateway_restart.py (was 39, +2 new).
- 154/154 in shutdown-related suites including #14728's new tests.
Reported by @GeoffWellman and @ANT_1515 on X.
Closes #11616.
The agent's API retry loop hardcoded max_retries = 3, so users with
fallback providers on flaky primaries burned through ~3 × provider
timeout (e.g. 3 × 180s = 9 minutes) before their fallback chain got a
chance to kick in.
Expose a new config key:
agent:
api_max_retries: 3 # default unchanged
Set it to 1 for fast failover when you have fallback providers, or
raise it if you prefer longer tolerance on a single provider. Values
< 1 are clamped to 1 (single attempt, no retry); non-integer values
fall back to the default.
This wraps the Hermes-level retry loop only — the OpenAI SDK's own
low-level retries (max_retries=2 default) still run beneath this for
transient network errors.
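The value handling reduces to a small coercion (the config access shown is
illustrative):

```python
def _resolve_api_max_retries(cfg: dict, default: int = 3) -> int:
    raw = (cfg.get("agent") or {}).get("api_max_retries", default)
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return default            # non-integer values fall back to the default
    return max(1, value)          # values < 1 clamp to a single attempt
```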
Changes:
- hermes_cli/config.py: add agent.api_max_retries default 3 with comment.
- run_agent.py: read self._api_max_retries in AIAgent.__init__; replace
hardcoded max_retries = 3 in the retry loop with self._api_max_retries.
- cli-config.yaml.example: documented example entry.
- hermes_cli/tips.py: discoverable tip line.
- tests/run_agent/test_api_max_retries_config.py: 4 tests covering
default, override, clamp-to-one, and invalid-value fallback.
Closes #8202.
Root cause: stop() reclaimed tool-call bash/sleep children only at the
very end of the shutdown sequence — after a 60s drain, 5s interrupt
grace, and per-adapter disconnect. Under systemd (TimeoutStopSec bounded
by drain_timeout), that meant the cgroup SIGKILL escalation fired first,
and systemd reaped the bash/sleep children instead of us.
Fix:
- Extract tool-subprocess cleanup into a local helper
_kill_tool_subprocesses() in _stop_impl().
- Invoke it eagerly right after _interrupt_running_agents() on the
drain-timeout path, before adapter disconnect.
- Keep the existing catch-all call at the end for the graceful path
and defense in depth against mid-teardown respawns.
- Bump generated systemd unit TimeoutStopSec to drain_timeout + 30s
so cleanup + disconnect + DB close has headroom above the drain
budget, matching the 'subprocess timeout > TimeoutStopSec + margin'
rule from the skill.
Tests:
- New: test_gateway_stop_kills_tool_subprocesses_before_adapter_disconnect_on_timeout
asserts kill_all() runs before disconnect() when drain times out.
- New: test_gateway_stop_kills_tool_subprocesses_on_graceful_path
guards that the final catch-all still fires when drain succeeds
(regression guard against accidental removal during refactor).
- Updated: existing systemd unit generator tests expect TimeoutStopSec=90
(= 60s drain + 30s headroom) with explanatory comment.
Previously delegate_task exposed 'max_iterations' in its JSON schema and used
`max_iterations or default_max_iter` — so a model guessing conservatively (or
copy-pasting a docstring hint like 'Only set lower for simple tasks') could
silently shrink a subagent's budget below the user's configured
delegation.max_iterations. One such call this session capped a deep forensic
audit at 40 iterations while the user's config was set to 250.
Changes:
- Drop 'max_iterations' from DELEGATE_TASK_SCHEMA['parameters']['properties'].
Models can no longer emit it.
- In delegate_task(): ignore any caller-supplied max_iterations, always use
delegation.max_iterations from config. Log at debug if a stale schema or
internal caller still passes one through.
- Keep the Python kwarg on the function signature for internal callers
(_build_child_agent tests pass it through the plumbing layer).
- Update test_schema_valid to assert the param is now absent (intentional
contract change, not a change-detector).
A test in tests/agent/test_credential_pool.py
(test_try_refresh_current_updates_only_current_entry) monkeypatched
refresh_codex_oauth_pure() to return the literal fixture strings
'access-new'/'refresh-new', then executed the real production code path
in agent/credential_pool.py::try_refresh_current which calls
_sync_device_code_entry_to_auth_store → _save_provider_state → writes
to `providers.openai-codex.tokens`. That writer resolves the target via
get_hermes_home()/auth.json. If the test ran with HERMES_HOME unset (direct
pytest invocation, IDE runner bypassing conftest discovery, or any other
sandbox escape), it would overwrite the real user's auth store with the
fixture strings.
Observed in the wild: Teknium's ~/.hermes/auth.json providers.openai-codex.tokens
held 'access-new'/'refresh-new' for five days. His CLI kept working because
the credential_pool entries still held real JWTs, but `hermes model`'s live
discovery path (which reads via resolve_codex_runtime_credentials →
_read_codex_tokens → providers.tokens) was silently 401-ing.
Fixes:
- Delete test_try_refresh_current_updates_only_current_entry. It was the
only test that exercised a writer hitting providers.openai-codex.tokens
with literal stub tokens. The entry-level rotation behavior it asserted
is still covered by test_mark_exhausted_and_rotate_persists_status above.
- Add a seat belt in hermes_cli.auth._auth_file_path(): if PYTEST_CURRENT_TEST
is set AND the resolved path equals the real ~/.hermes/auth.json, raise
with a clear message. In production (no PYTEST_CURRENT_TEST), a single
dict lookup. Any future test that forgets to monkeypatch HERMES_HOME
fails loudly instead of corrupting the user's credentials.
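A sketch of the seat belt (function and env-var names are from this commit;
get_hermes_home() is the existing project helper, and the exact comparison is
an assumption):

```python
import os
from pathlib import Path

def _auth_file_path() -> Path:
    path = get_hermes_home() / "auth.json"      # project helper, per the commit
    if os.environ.get("PYTEST_CURRENT_TEST"):
        real = Path.home() / ".hermes" / "auth.json"
        if path.resolve() == real.resolve():
            raise RuntimeError(
                "Refusing to touch the real ~/.hermes/auth.json from a test; "
                "monkeypatch HERMES_HOME to a temporary directory."
            )
    return path
```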
Validation:
- production (no PYTEST_CURRENT_TEST): returns real path, unchanged behavior
- pytest + HERMES_HOME unset (points at real home): raises with message
- pytest + HERMES_HOME=/tmp/...: returns tmp path, tests pass normally
Dashboard themes now control typography and layout, not just colors.
Each built-in theme picks its own fonts, base size, radius, and density
so switching produces visible changes beyond hue.
Schema additions (per theme):
- typography — fontSans, fontMono, fontDisplay, fontUrl, baseSize,
lineHeight, letterSpacing. fontUrl is injected as <link> on switch
so Google/Bunny/self-hosted stylesheets all work.
- layout — radius (any CSS length) and density
(compact | comfortable | spacious, multiplies Tailwind spacing).
- colorOverrides (optional) — pin individual shadcn tokens that would
otherwise derive from the palette.
Built-in themes are now distinct beyond palette:
- default — system stack, 15px, 0.5rem radius, comfortable
- midnight — Inter + JetBrains Mono, 14px, 0.75rem, comfortable
- ember — Spectral (serif) + IBM Plex Mono, 15px, 0.25rem
- mono — IBM Plex Sans + Mono, 13px, 0 radius, compact
- cyberpunk — Share Tech Mono everywhere, 14px, 0 radius, compact
- rose — Fraunces (serif) + DM Mono, 16px, 1rem, spacious
Also fixes two bugs:
1. Custom user themes silently fell back to default. ThemeProvider
only applied BUILTIN_THEMES[name], so YAML files in
~/.hermes/dashboard-themes/ showed in the picker but did nothing.
Server now ships the full normalised definition; client applies it.
2. The docs described a 21-token flat colors schema that never matched
the code (applyPalette reads a 3-layer palette). Rewrote the
Themes section against the actual shape.
Implementation:
- web/src/themes/types.ts: extend DashboardTheme with typography,
layout, colorOverrides; ThemeListEntry carries optional definition.
- web/src/themes/presets.ts: 6 built-ins with distinct typography+layout.
- web/src/themes/context.tsx: applyTheme() writes palette+typography+
layout+overrides as CSS vars, injects fontUrl stylesheet, fixes the
fallback-to-default bug via resolveTheme(name).
- web/src/index.css: html/body/code read the new theme-font vars;
--radius-sm/md/lg/xl derive from --theme-radius; --spacing scales
with --theme-spacing-mul so Tailwind utilities shift with density.
- hermes_cli/web_server.py: _normalise_theme_definition() parses loose
YAML (bare hex strings, partial blocks) into the canonical wire shape
(sketched below); /api/dashboard/themes ships full definitions for user themes.
- tests/hermes_cli/test_web_server.py: 16 new tests covering the
normaliser and discovery (rejection cases, clamping, defaults).
- website/docs/user-guide/features/web-dashboard.md: rewrite Themes
section with real schema, per-model tables, full YAML example.
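A rough sketch of what the normaliser does with a loose theme; the field names
follow the schema above, but the defaults and bounds here are illustrative, not
the shipped values:

    _DENSITIES = {"compact", "comfortable", "spacious"}

    def _normalise_theme_definition(raw: dict) -> dict:
        """Coerce a loose YAML theme into the canonical wire shape."""
        typography = raw.get("typography") or {}
        layout = raw.get("layout") or {}
        base_size = typography.get("baseSize", 15)
        if not isinstance(base_size, (int, float)) or not 10 <= base_size <= 24:
            base_size = 15  # clamp nonsense values back to a readable default
        density = layout.get("density", "comfortable")
        if density not in _DENSITIES:
            density = "comfortable"
        return {
            "typography": {**typography, "baseSize": int(base_size)},
            "layout": {**layout, "density": density,
                       "radius": str(layout.get("radius", "0.5rem"))},
            "colorOverrides": dict(raw.get("colorOverrides") or {}),
        }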
Commit 43de1ca8 removed the _nr_to_assistant_message shim in favor of
duck-typed properties on the ToolCall dataclass. However, the
extra_content property (which carries the Gemini thought_signature) was
omitted from the ToolCall definition. This caused _build_assistant_message
to silently drop the signature via getattr(tc, 'extra_content', None)
returning None, leading to HTTP 400 errors on subsequent turns for all
Gemini 3 thinking models.
Add the extra_content property to ToolCall (matching the existing
call_id and response_item_id pattern) so the thought_signature round-trips
correctly through the transport → agent loop → API replay path.
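A sketch of the restored property, following the existing call_id /
response_item_id pattern; the field layout is trimmed down here (the real
dataclass carries more than this):

    from dataclasses import dataclass, field
    from typing import Any, Dict, Optional

    @dataclass
    class ToolCall:
        id: str
        name: str
        arguments: str
        provider_data: Dict[str, Any] = field(default_factory=dict)

        @property
        def extra_content(self) -> Optional[dict]:
            # Carries e.g. the Gemini thought_signature so _build_assistant_message
            # replays it instead of getattr(tc, 'extra_content', None) returning None.
            return self.provider_data.get("extra_content")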
Credit to @celttechie for identifying the root cause and providing the fix.
Closes #14488
## Merged
Adds MiMo v2.5-pro and v2.5 support to Xiaomi native provider, OpenCode Go, and setup wizard.
### Changes
- Context lengths: added v2.5-pro (1M) and v2.5 (1M), corrected existing MiMo entries to exact values (262144)
- Provider lists: xiaomi, opencode-go, setup wizard
- Vision: upgraded from mimo-v2-omni to mimo-v2.5 (omnimodal)
- Config description updated for XIAOMI_API_KEY
- Tests updated for new vision model preference
### Verification
- 4322 tests passed, 0 new regressions
- Live API tested on Xiaomi portal: basic, reasoning, tool calling, multi-tool, file ops, system prompt, vision — all pass
- Self-review found and fixed 2 issues (redundant vision check, stale HuggingFace context length)
Replaces the blanket 'always allow' change from the previous commit with
an opt-in config flag so users who want belt-and-suspenders security can
still get the keyword scan on skill_manage output.
## Default behavior (flag off)
skill_manage(action='create'|'edit'|'patch') no longer runs the keyword
scanner. The agent can write skills that mention risky keywords in prose
(documenting what reviewers should watch for, describing cache-bust
semantics in a PR-review skill, referencing AGENTS.md, etc.) without
getting blocked.
Rationale: the agent can already execute the same code paths via
terminal() with no gate, so the scan adds friction without meaningful
security against a compromised or malicious agent.
## Opt-in behavior (flag on)
Set skills.guard_agent_created: true in config.yaml to get the original
behavior back. Scanner runs on every skill_manage write; dangerous
verdicts surface as a tool error the agent can react to (retry without
the flagged content).
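A minimal sketch of the gate; the scanner is injected as a callable here to keep
the sketch self-contained (the real helpers live in tools/skill_manager_tool.py):

    from typing import Callable, Optional

    def _guard_agent_created_enabled(config: dict) -> bool:
        # skills.guard_agent_created defaults to False: scan off for skill_manage writes
        return bool(config.get("skills", {}).get("guard_agent_created", False))

    def _security_scan_skill(config: dict, skill_text: str,
                             scanner: Callable[[str], Optional[str]]) -> Optional[str]:
        if not _guard_agent_created_enabled(config):
            return None  # flag off: agent-created skills skip the keyword scan
        return scanner(skill_text)  # flag on: verdict surfaces as a tool error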
## External hub installs unaffected
trusted/community sources (hermes skills install) always get scanned
regardless of this flag. The gate is specifically for skill_manage,
which only agents call.
## Changes
- hermes_cli/config.py: add skills.guard_agent_created: False to DEFAULT_CONFIG
- tools/skill_manager_tool.py: _guard_agent_created_enabled() reads the flag;
_security_scan_skill() short-circuits to None when the flag is off
- tools/skills_guard.py: restore INSTALL_POLICY['agent-created'] =
('allow', 'allow', 'ask') so the scan remains strict when it does run
- tests/tools/test_skills_guard.py: restore original ask/force tests
- tests/tools/test_skill_manager_tool.py: new TestSecurityScanGate class
covering both flag states + config error handling
## Validation
- tests/tools/test_skills_guard.py + test_skill_manager_tool.py: 115/115 pass
- E2E: flagged-keyword skill creates with default config, blocks with flag on
The security scanner is meant to protect against hostile external skills
pulled from GitHub via hermes skills install — trusted/community policies
block or ask on dangerous verdicts accordingly. But agent-created skills
(from skill_manage) run in the same process as the agent that wrote them.
The agent can already execute the same code paths via terminal() with no
gate, so the ask-on-dangerous policy adds friction without meaningful
security.
Concrete trigger: an agent writing a PR-review skill that describes
cache-busting or persistence semantics in prose gets blocked because
those words appear in the patterns list. The skill isn't actually doing
anything dangerous — it's just documenting what reviewers should watch
for in other PRs.
Change: agent-created dangerous verdict maps to 'allow' instead of 'ask'.
External hub installs (trusted/community) keep their stricter policies
intact. Tests updated: renamed test_dangerous_agent_created_asks →
test_dangerous_agent_created_allowed; renamed force-override test and
updated assertion since force is now a no-op for agent-created (the allow
branch returns first).
Before this, _process_message_background's finally did an unconditional
'del self._active_sessions[session_key]' — even if a /stop or /new
command had already swapped in its own command_guard via
_dispatch_active_session_command and cancelled us. The old task's
unwind would clobber the newer guard, opening a race for follow-ups.
Replace with _release_session_guard(session_key, guard=interrupt_event)
so the delete only fires when the guard we captured is still the one
installed. The sibling _session_tasks pop already had equivalent
ownership matching via asyncio.current_task() identity; this closes the
asymmetry.
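The ownership check in miniature (the adapter method works on
self._active_sessions; shown here as a free function over that dict):

    def release_session_guard(active_sessions: dict, session_key, guard=None) -> None:
        """Drop the session guard only if the caller still owns it."""
        current = active_sessions.get(session_key)
        if guard is not None and current is not guard:
            return  # a newer /stop or /new installed its own guard; leave it alone
        active_sessions.pop(session_key, None)  # guard=None keeps the unconditional release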
Adds two direct regressions in test_session_split_brain_11016:
- stale guard reference must not clobber a newer guard by identity
- guard=None default still releases unconditionally (for callers that
don't have a captured guard to match against)
Refs #11016
Covers all three layers of the salvaged fix:
1. Adapter-side cancellation: /stop, /new, /reset cancel the in-flight
adapter task, release the guard, and let follow-up messages through;
/new keeps the guard installed until the runner response lands, then
drains the queued follow-up in order.
2. Adapter-side self-heal: a split-brain guard (done owner task, lock
still live) is healed on the next inbound message and the user gets
a reply instead of being trapped in infinite busy acks. A guard
with no recorded owner task is NOT auto-healed (protects fixtures
that install guards directly).
3. Runner-side generation guard: stale async runs whose generation was
bumped by /stop or /new cannot clear a newer run's _running_agents
slot on the way out.
11 tests, all green.
Refs #11016
The environment-snapshot login shell was auto-sourcing only ~/.bashrc when
building the PATH snapshot. On Debian/Ubuntu the default ~/.bashrc starts
with a non-interactive short-circuit:
case $- in *i*) ;; *) return;; esac
Sourcing it from a non-interactive shell returns before any PATH export
below that guard runs. Node version managers like n and nvm append their
PATH line under that guard, so Hermes was capturing a PATH without
~/n/bin — and the terminal tool saw 'node: command not found' even when
node was on the user's interactive shell PATH.
Expand the auto-source list (when auto_source_bashrc is on) to:
~/.profile → ~/.bash_profile → ~/.bashrc
~/.profile and ~/.bash_profile have no interactivity guard — installers
that write their PATH there (n's n-install, nvm's curl installer on most
setups) take effect. ~/.bashrc still runs last to preserve behaviour for
users who put PATH logic there without the guard.
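Sketch of the expanded preamble the snapshot shell might run, assuming the
builder emits a simple existence-guarded source chain (quoting in the real
builder may differ):

    def build_snapshot_preamble(auto_source_bashrc: bool) -> str:
        if not auto_source_bashrc:
            return ""
        # ~/.profile and ~/.bash_profile carry PATH exports with no interactivity
        # guard; ~/.bashrc runs last so unguarded PATH logic there still wins.
        files = ["$HOME/.profile", "$HOME/.bash_profile", "$HOME/.bashrc"]
        return " ; ".join(f'[ -f "{f}" ] && . "{f}"' for f in files)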
Added two tests covering the new behaviour plus an E2E test that spins up
a real LocalEnvironment with a guard-prefixed ~/.bashrc and a ~/.profile
PATH export, and verifies the captured snapshot PATH contains the profile
entry.
On fresh RHEL/Debian SSH sessions without linger, `systemctl --user
start hermes-gateway` fails with 'Failed to connect to bus: No medium
found' because /run/user/$UID/bus doesn't exist. Setup previously
showed a raw CalledProcessError and continued claiming success, so the
gateway never actually started.
systemd_start() and systemd_restart() now call _preflight_user_systemd()
for the user scope first:
- Bus socket already there → no-op (desktop / linger-enabled servers)
- Linger off → try loginctl enable-linger (works when polkit permits,
needs sudo otherwise), wait for socket
- Still unreachable → raise UserSystemdUnavailableError with a clean
remediation message pointing to sudo loginctl + hermes gateway run
as the foreground fallback
Setup's start/restart handlers and gateway_command() catch the new
exception and render the multi-line guidance instead of a traceback.
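A condensed sketch of the preflight flow (the real code distinguishes
polkit-permitted vs sudo-required linger enabling and reports which step failed):

    import os
    import subprocess
    import time

    class UserSystemdUnavailableError(RuntimeError):
        pass

    def _preflight_user_systemd(timeout: float = 10.0) -> None:
        bus = f"/run/user/{os.getuid()}/bus"
        if os.path.exists(bus):
            return  # desktop session or linger already enabled
        # Try to bring up the user manager (and its bus socket) via linger.
        subprocess.run(["loginctl", "enable-linger", str(os.getuid())], check=False)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if os.path.exists(bus):
                return
            time.sleep(0.5)
        raise UserSystemdUnavailableError(
            "user systemd bus unreachable; run 'sudo loginctl enable-linger <user>' "
            "or fall back to 'hermes gateway run' in the foreground"
        )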
When a newly-bundled skill's name collides with a pre-existing user
skill, sync silently kept the user's copy. Users never learned that
a bundled version shipped by that name.
Now (on non-quiet sync only) print:
⚠ <name>: bundled version shipped but you already have a local
skill by this name — yours was kept. Run `hermes skills reset
<name>` to replace it with the bundled version.
No behavior change to manifest writes or to the kept user copy —
purely additive warning on the existing collision-skip path.
When a new bundled skill's name collided with a pre-existing user skill
(from hub, custom, or leftover), sync_skills() recorded the bundled hash
in the manifest even though the on-disk copy was unrelated to bundled.
On the next sync, user_hash != origin_hash (bundled_hash) marked the
skill as "user-modified" permanently, blocking all bundled updates for
that skill until the user ran `hermes skills reset`.
Fix: only baseline the manifest entry when the user's on-disk copy is
byte-identical to bundled (safe to track — this is the reset re-sync or
coincidentally-identical install case). Otherwise skip the manifest
write entirely: the on-disk skill is unrelated to bundled and shouldn't
be tracked as if it were.
This preserves reset_bundled_skill()'s re-baseline flow (its post-delete
sync still writes to the manifest when user copy matches bundled) while
fixing the poisoning scenario for genuinely unrelated collisions.
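The collision rule in miniature (hash helper name is illustrative):

    import hashlib
    from pathlib import Path

    def _content_hash(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def maybe_baseline_manifest(manifest: dict, name: str,
                                user_copy: Path, bundled_copy: Path) -> None:
        bundled_hash = _content_hash(bundled_copy)
        if _content_hash(user_copy) == bundled_hash:
            # reset re-sync or coincidentally identical install: safe to track
            manifest[name] = {"origin_hash": bundled_hash}
        # else: an unrelated user skill shadows the bundled name; leave the
        # manifest untouched so the skill is never flagged as user-modified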
Adds two tests following the existing test_failed_copy_does_not_poison_manifest
pattern: one verifying the manifest stays clean after a collision with
differing content, one verifying no false user_modified flag on resync.
Multiple custom_providers entries sharing the same base_url + api_key
are now grouped into a single picker row. A local Ollama host with
per-model display names ("Ollama — GLM 5.1", "Ollama — Qwen3-coder",
"Ollama — Kimi K2", "Ollama — MiniMax M2.7") previously produced four
near-duplicate picker rows that differed only by suffix; now it appears
as one "Ollama" row with four models.
Key changes:
- Grouping key changed from slug-by-name to (base_url, api_key). Names
frequently differ per model while the endpoint stays the same.
- When the grouped endpoint matches current_base_url, the row's slug is
set to current_provider so picker-driven switches route through the
live credential pipeline (no re-resolution needed).
- Per-model suffix is stripped from the display name ("Ollama — X" →
"Ollama") via em-dash / " - " separators.
- Two groups with different api_keys at the same base_url (or otherwise
colliding on cleaned name) are disambiguated with a numeric suffix
(custom:openai, custom:openai-2) so both stay visible.
- current_base_url parameter plumbed through both gateway call sites.
Existing #8216, #11499, #13509 regressions covered (dict/list shapes
of models:, section-3/section-4 dedup, normalized list-format entries).
Salvaged from @davidvv's PR #9210 — the underlying code had diverged
~1400 commits since that PR was opened, so this is a reconstruction of
the same approach on current main rather than a clean cherry-pick.
Authorship preserved via --author on this commit.
Closes #9210
_normalize_custom_provider_entry silently drops the models field when it's
a list. Hand-edited configs (and the shape used by older Hermes versions)
still write models as a plain list of ids, so after the normalize pass the
entry reaches list_authenticated_providers() with no models and /model
shows the provider with (0) models — even though the underlying picker
code handles lists fine.
Convert list-format models into the empty-value dict shape the rest of
the pipeline already expects. Dict-format entries keep passing through
unchanged.
Repro (before the fix):
custom_providers:
  - name: acme
    base_url: https://api.example.com/v1
    models: [foo, bar, baz]
/model shows "acme (0)"; bypassing normalize in list_authenticated_providers
returns three models, confirming the drop happens in normalize.
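The conversion itself is tiny; a sketch of the normalize step described above:

    def _normalize_models_field(models) -> dict:
        """Coerce list-format models ([foo, bar]) into the empty-value dict shape."""
        if isinstance(models, dict):
            return models  # dict-format entries pass through unchanged
        if isinstance(models, list):
            return {m: {} for m in models if isinstance(m, str) and m.strip()}
        return {}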
Adds four unit tests covering list→dict conversion, dict pass-through,
filtering of empty/non-string entries, and the empty-list case.
NormalizedResponse and ToolCall now have backward-compat properties
so the agent loop can read them directly without the shim:
ToolCall: .type, .function (returns self), .call_id, .response_item_id
NormalizedResponse: .reasoning_content, .reasoning_details,
.codex_reasoning_items
This eliminates the 35-line shim and its 4 call sites in run_agent.py.
Also changes flush_memories guard from hasattr(response, 'choices')
to self.api_mode in ('chat_completions', 'bedrock_converse') so it
works with raw boto3 dicts too.
WS1 items 3+4 of Cycle 2 (#14418).
3-layer chain (transport → v2 → v1) was collapsed to 2-layer in PR 7.
This collapses the remaining 2-layer (transport → v1 → NR mapping in
transport) to 1-layer: v1 now returns NormalizedResponse directly.
Before: adapter returns (SimpleNamespace, finish_reason) tuple,
transport unpacks and maps to NormalizedResponse (22 lines).
After: adapter returns NormalizedResponse, transport is a
1-line passthrough.
Also updates ToolCall construction — adapter now creates ToolCall
dataclass directly instead of SimpleNamespace(id, type, function).
WS1 item 1 of Cycle 2 (#14418).
- Replace hardcoded 'fr' default with DEFAULT_LOCAL_STT_LANGUAGE ('en')
— removes locale leak, matches other providers
- Drop redundant default=True on is_truthy_value (dict .get already defaults)
- Update auto-detect comment to include 'xai' in the chain
- Fix docstring: 21 languages (match PR body + actual xAI API)
- Update test_sends_language_and_format to set HERMES_LOCAL_STT_LANGUAGE=fr
explicitly, since default is no longer 'fr'
All 18 xAI STT tests pass locally.
* feat(agent): add PLATFORM_HINTS for matrix, mattermost, and feishu
These platform adapters fully support media delivery (send_image,
send_document, send_voice, send_video) but were missing from
PLATFORM_HINTS, leaving agents unaware of their platform context,
markdown rendering, and MEDIA: tag support.
Salvaged from PR #7370 by Rutimka — wecom excluded since main already
has a more detailed version.
Co-Authored-By: Marco Rutsch <marco@rutimka.de>
* test: add missing Markdown assertion for feishu platform hint
---------
Co-authored-by: Marco Rutsch <marco@rutimka.de>
New built-in image_gen backend at plugins/image_gen/openai-codex/ that
exposes the same gpt-image-2 low/medium/high tier catalog as the
existing 'openai' plugin, but routes generation through the ChatGPT/
Codex Responses image_generation tool path. Available whenever the user
has Codex OAuth signed in; no OPENAI_API_KEY required.
The two plugins are independent — users select between them via
'hermes tools' → Image Generation, and image_gen.provider in
config.yaml. The existing 'openai' (API-key) plugin is unchanged.
Reuses _read_codex_access_token() and _codex_cloudflare_headers() from
agent.auxiliary_client so token expiry / cred-pool / Cloudflare
originator handling stays in one place.
Inspired by #14047 by @Hygaard, but re-implemented as a separate
plugin instead of an in-place fork of the openai plugin.
Closes #11195
Port from openai/codex#18646.
Adds two flags to 'hermes chat' that fully isolate a run from user-level
configuration and rules:
* --ignore-user-config: skip ~/.hermes/config.yaml and fall back to
built-in defaults. Credentials in .env are still loaded so the agent
can actually call a provider.
* --ignore-rules: skip auto-injection of AGENTS.md, SOUL.md,
.cursorrules, and persistent memory (maps to AIAgent(skip_context_files=True,
skip_memory=True)).
Primary use cases:
- Reproducible CI runs that should not pick up developer-local config
- Third-party integrations (e.g. Chronicle in Codex) that bring their
own config and don't want user preferences leaking in
- Bug-report reproduction without the reporter's personal overrides
- Debugging: bisect 'was it my config?' vs 'real bug' in one command
Both flags are registered on the parent parser AND the 'chat' subparser
(with argparse.SUPPRESS on the subparser to avoid overwriting the parent
value when the flag is placed before the subcommand, matching the
existing --yolo/--worktree/--pass-session-id pattern).
Env vars HERMES_IGNORE_USER_CONFIG=1 and HERMES_IGNORE_RULES=1 are set
by cmd_chat BEFORE 'from cli import main' runs, which is critical
because cli.py evaluates CLI_CONFIG = load_cli_config() at module import
time. The cli.py / hermes_cli.config.load_cli_config() function checks
the env var and skips ~/.hermes/config.yaml when set.
Tests: 11 new tests in tests/hermes_cli/test_ignore_user_config_flags.py
covering the env gate, constructor wiring, cmd_chat simulation, and
argparse flag registration. All pass; existing hermes_cli + cli suites
unaffected (3005 pass, 2 pre-existing unrelated failures).
Consolidate 4 per-transport lazy singleton helpers (_get_anthropic_transport,
_get_codex_transport, _get_chat_completions_transport, _get_bedrock_transport)
into one generic _get_transport(api_mode) with a shared dict cache.
Collapse the 65-line main normalize block (3 api_mode branches, each with
its own SimpleNamespace shim) into 7 lines: one _get_transport() call +
one _nr_to_assistant_message() shared shim. The shim extracts provider_data
fields (codex_reasoning_items, reasoning_details, call_id, response_item_id)
into the SimpleNamespace shape downstream code expects.
Wire chat_completions and bedrock_converse normalize through their transports
for the first time — these were previously falling into the raw
response.choices[0].message else branch.
Remove 8 dead codex adapter imports that have zero callers after PRs 1-6.
Transport lifecycle improvements:
- Eagerly warm transport cache at __init__ (surfaces import errors early)
- Invalidate transport cache on api_mode change (switch_model, fallback
activation, fallback restore, transport recovery) — prevents stale
transport after mid-session provider switch
run_agent.py: -32 net lines (11,988 -> 11,956).
PR 7 of the provider transport refactor.
Follow-up to the /resume and /branch cleanup in the previous commit:
/new is a conversation-boundary operation too, so session-scoped
dangerous-command approvals and /yolo state must not survive it.
Adds a scoped unit test for _clear_session_boundary_security_state that
also covers the /new path (which calls the same helper).
The Docker terminal backend runs containers with `--cap-drop ALL`
and re-adds only DAC_OVERRIDE, CHOWN, FOWNER. Since commit fee0e0d3
("run as non-root user, use virtualenv") the image entrypoint drops
from root to the `hermes` user via `gosu`, which requires CAP_SETUID
and CAP_SETGID. Without them every sandbox container exits
immediately with:
Dropping root privileges
error: failed switching to 'hermes': operation not permitted
This broke every terminal/file tool invocation in `terminal.backend: docker` mode.
Fix: add SETUID and SETGID to the cap-add list. The `no-new-privileges`
security-opt is kept, so gosu still cannot escalate back to root after
the one-way drop — the hardening posture is preserved.
Reproduction
------------
With any image whose ENTRYPOINT calls `gosu <user>`, the container
exits immediately under the pre-fix cap set. Post-fix, the drop
succeeds and the container proceeds normally.
docker run --rm \
--cap-drop ALL \
--cap-add DAC_OVERRIDE --cap-add CHOWN --cap-add FOWNER \
--security-opt no-new-privileges \
--entrypoint /usr/local/bin/gosu \
hermes-claude:latest hermes id
# -> error: failed switching to 'hermes': operation not permitted
# Same command with SETUID+SETGID added:
# -> uid=10000(hermes) gid=10000(hermes) groups=10000(hermes)
Tests
-----
Added `test_security_args_include_setuid_setgid_for_gosu_drop` that
asserts both caps are present and the overall hardening posture
(cap-drop ALL + no-new-privileges) is preserved.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Port from openclaw/openclaw#67318. Some open models (notably Gemma
variants served via OpenRouter) emit tool calls as XML blocks inside
assistant content instead of via the structured tool_calls field:
<function name="read_file"><parameter name="path">/tmp/x</parameter></function>
<tool_call>{"name":"x"}</tool_call>
<function_calls>[{...}]</function_calls>
Left unstripped, this raw XML leaked to gateway users (Discord, Telegram,
Matrix, Feishu, Signal, WhatsApp, etc.) and the CLI, since hermes-agent's
existing reasoning-tag stripper handled only <think>/<thinking>/<thought>
variants.
Extend _strip_think_blocks (run_agent.py) and _strip_reasoning_tags
(cli.py) to cover:
* <tool_call>, <tool_calls>, <tool_result>
* <function_call>, <function_calls>
* <function name="..."> ... </function> (Gemma-style)
The <function> variant is boundary-gated (only strips when the tag sits
at start-of-line or after sentence punctuation AND carries a name="..."
attribute) so prose mentions like 'Use <function> declarations in JS'
are preserved. Dangling <function name="..."> with no close is
intentionally left visible — matches OpenClaw's asymmetry so a truncated
streaming tail still reaches the user.
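A simplified sketch of the stripping rules (the shipped regexes also handle
stray close tags and whitespace cleanup):

    import re

    _SIMPLE_TAGS = r"tool_calls?|tool_result|function_calls?"
    _SIMPLE_BLOCK = re.compile(
        rf"<(?P<tag>{_SIMPLE_TAGS})\b[^>]*>.*?</(?P=tag)>", re.DOTALL
    )
    # <function ...> is stripped only when it carries a name="..." attribute AND
    # sits at start-of-line or right after sentence punctuation, so prose like
    # 'Use <function> declarations in JS' survives.
    _GEMMA_FUNCTION = re.compile(
        r"(?m)(?:^|(?<=[.!?]))\s*<function\s+name=\"[^\"]*\"[^>]*>.*?</function>",
        re.DOTALL,
    )

    def strip_tool_call_tags(text: str) -> str:
        text = _SIMPLE_BLOCK.sub("", text)
        return _GEMMA_FUNCTION.sub("", text)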
Tests: 9 new cases in TestStripThinkBlocks (run_agent) + 9 in new file
tests/run_agent/test_strip_reasoning_tags_cli.py. Covers Qwen-style
<tool_call>, Gemma-style <function name="...">, multi-line payloads,
prose preservation, stray close tags, dangling open tags, and mixed
reasoning+tool_call content.
Note: this port covers the post-streaming final-text path, which is what
gateway adapters and CLI display consume. Extending the per-delta stream
filter in gateway/stream_consumer.py to hide these tags live as they
stream is a separate follow-up; for now users may see raw XML briefly
during a stream before the final cleaned text replaces it.
Refs: openclaw/openclaw#67318
Feishu's open_id is app-scoped (same user gets different open_ids per
bot app), not a canonical identity. Functionally correct for single-bot
mode but semantically misleading.
- Add comprehensive Feishu identity model documentation to module docstring
- Prefer user_id (tenant-scoped) over open_id (app-scoped) in
_resolve_sender_profile when both are available
- Document bot_open_id usage for @mention matching
- Update user_id_alt comment in SessionSource to be platform-generic
Ref: closes analysis from PR #8388 (closed as over-scoped)
Port from openclaw/openclaw#66664. The build_anthropic_kwargs call site
used 'max_tokens or _get_anthropic_max_output(model)', which correctly
falls back when max_tokens is 0 or None (falsy) but lets negative ints
(-1, -500), fractional floats (0.5, 8192.7), NaN, and infinity leak
through to the Anthropic API. Anthropic rejects these with HTTP 400
('max_tokens: must be greater than or equal to 1'), turning a local
config error into a surprise mid-conversation failure.
Add two resolver helpers matching OpenClaw's:
_resolve_positive_anthropic_max_tokens — returns int(value) only if
value is a finite positive number; excludes bools, strings, NaN,
infinity, sub-one positives (floor to 0).
_resolve_anthropic_messages_max_tokens — prefers a positive requested
value, else falls back to the model's output ceiling; raises
ValueError only if no positive budget can be resolved.
The context-window clamp at the call site (max_tokens > context_length)
is preserved unchanged — it handles oversized values; the new resolver
handles non-positive values. These concerns are now cleanly separated.
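The two helpers, roughly (exact edge-case handling lives in the real code):

    import math
    from typing import Optional

    def _resolve_positive_anthropic_max_tokens(value) -> Optional[int]:
        # int(value) only for finite positive numbers; bools, strings, NaN,
        # infinity, and sub-one positives (integer floor 0) are all rejected.
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            return None
        if not math.isfinite(value) or int(value) < 1:
            return None
        return int(value)

    def _resolve_anthropic_messages_max_tokens(requested, model_ceiling) -> int:
        resolved = _resolve_positive_anthropic_max_tokens(requested)
        if resolved is not None:
            return resolved
        fallback = _resolve_positive_anthropic_max_tokens(model_ceiling)
        if fallback is None:
            raise ValueError("no positive max_tokens budget could be resolved")
        return fallback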
Tests: 17 new cases covering positive/zero/negative ints, fractional
floats (both >1 and <1), NaN, infinity, booleans, strings, None, and
integration via build_anthropic_kwargs.
Refs: openclaw/openclaw#66664
Follow-up on helix4u's PR #14211:
- Flip default to true: narrowing toolsets=['web','browser'] expresses
'I want these extras', not 'silently strip MCP'. Parent MCP tools
(registered at runtime) should survive narrowing by default.
- Drop _config_version bump (22->23); additive nested key under
delegation.* is handled by _deep_merge, no migration needed.
- Update tests to reflect new default behavior.
Mid-stream SSL alerts (bad_record_mac, tls_alert_internal_error, handshake
failures) previously fell through the classifier pipeline to the 'unknown'
bucket because:
- ssl.SSLError type names weren't in _TRANSPORT_ERROR_TYPES (the
isinstance(OSError) catch picks up some but not all SDK-wrapped forms)
- the message-pattern list had no SSL alert substrings
The 'unknown' bucket is still retryable, but: (a) logs tell the user
'unknown' instead of identifying the cause, (b) it bypasses the
transport-specific backoff/fallback logic, and (c) if the SSL error
happens on a large session with a generic 'connection closed' wrapper,
the existing disconnect-on-large-session heuristic would incorrectly
trigger context compression — expensive, and never fixes a transport
hiccup.
Changes:
- Add ssl.SSLError and its subclass type names to _TRANSPORT_ERROR_TYPES
- New _SSL_TRANSIENT_PATTERNS list (separate from _SERVER_DISCONNECT_PATTERNS
so SSL alerts route to timeout, not context_overflow+compress)
- New step 5 in the classifier pipeline: SSL pattern check runs BEFORE
the disconnect check to pre-empt the large-session-compress path
Patterns cover both space-separated ('ssl alert', 'bad record mac')
and underscore-separated ('ERR_SSL_SSL/TLS_ALERT_BAD_RECORD_MAC')
forms. This is load-bearing because OpenSSL 3.x changed the error-code
separator from underscore to slash (e.g. SSLV3_ALERT_BAD_RECORD_MAC →
SSL/TLS_ALERT_BAD_RECORD_MAC) and will likely churn again — matching on
stable alert reason substrings survives future format changes.
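Shape of the new check, with an illustrative (not exhaustive) pattern list:

    import ssl

    _SSL_TRANSIENT_PATTERNS = (
        "ssl alert",
        "bad record mac",
        "tls_alert_bad_record_mac",
        "tls_alert_internal_error",
        "ssl handshake failure",
        "[ssl:",
    )

    def _looks_like_transient_ssl(error: BaseException) -> bool:
        if isinstance(error, ssl.SSLError):
            return True
        message = str(error).lower()
        return any(pattern in message for pattern in _SSL_TRANSIENT_PATTERNS)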
Tests (8 new):
- BAD_RECORD_MAC in Python ssl.c format
- OpenSSL 3.x underscore format
- TLSV1_ALERT_INTERNAL_ERROR
- ssl handshake failure
- [SSL: ...] prefix fallback
- Real ssl.SSLError instance
- REGRESSION GUARD: SSL on large session does NOT compress
- REGRESSION GUARD: plain disconnect on large session STILL compresses
Port from cline/cline#10266.
When OpenAI-compatible proxies (OpenRouter, Vercel AI Gateway, Cline)
route Claude models, they sometimes surface the Anthropic-native cache
counters (`cache_read_input_tokens`, `cache_creation_input_tokens`) at
the top level of the `usage` object instead of nesting them inside
`prompt_tokens_details`. Our chat-completions branch of
`normalize_usage()` only read the nested `prompt_tokens_details` fields,
so those responses:
- reported `cache_write_tokens = 0` even when the model actually did a
prompt-cache write,
- reported only some of the cache-read tokens when the proxy exposed them
top-level only,
- overstated `input_tokens` by the missed cache-write amount, which in
turn made cost estimation and the status-bar cache-hit percentage wrong
for Claude traffic going through these gateways.
Now the chat-completions branch tries the OpenAI-standard
`prompt_tokens_details` first and falls back to the top-level
Anthropic-shape fields only if the nested values are absent/zero. The
Anthropic and Codex Responses branches are unchanged.
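Sketch of the fallback order; the nested field names here are illustrative
stand-ins for whatever the chat-completions branch already reads:

    def _cache_tokens_from_usage(usage: dict) -> tuple:
        """(cache_read, cache_write): nested OpenAI fields first, Anthropic top-level second."""
        details = usage.get("prompt_tokens_details") or {}
        cache_read = details.get("cached_tokens") or 0
        cache_write = details.get("cache_creation_tokens") or 0
        # Proxies sometimes surface the Anthropic counters at the top level of
        # usage; only consulted when the nested values are absent or zero.
        if not cache_read:
            cache_read = usage.get("cache_read_input_tokens") or 0
        if not cache_write:
            cache_write = usage.get("cache_creation_input_tokens") or 0
        return int(cache_read), int(cache_write)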
Regression guards added for three shapes: top-level write + nested read,
top-level-only, and both-present (nested wins).
New and newer models from models.dev now surface automatically in
/model (both hermes model CLI and the gateway Telegram/Discord picker)
for a curated set of secondary providers — no Hermes release required
when the registry publishes a new model.
Primary user-visible fix: on OpenCode Go, typing '/model mimo-v2.5-pro'
no longer silently fuzzy-corrects to 'mimo-v2-pro'. The exact match
against the merged models.dev catalog wins.
Scope (opt-in frozenset _MODELS_DEV_PREFERRED in hermes_cli/models.py):
opencode-go, opencode-zen, deepseek, kilocode, fireworks, mistral,
togetherai, cohere, perplexity, groq, nvidia, huggingface, zai,
gemini, google.
Explicitly NOT merged:
- openrouter and nous (never): curated list is already a hand-picked
subset / Portal is source of truth.
- xai, xiaomi, minimax, minimax-cn, kimi-coding, kimi-coding-cn,
alibaba, qwen-oauth (per-project decision to keep curated-only).
- providers with dedicated live-endpoint paths (copilot, anthropic,
ai-gateway, ollama-cloud, custom, stepfun, openai-codex) — those
paths already handle freshness themselves.
Changes:
- hermes_cli/models.py: add _MODELS_DEV_PREFERRED + _merge_with_models_dev
helper. provider_model_ids() branches on the set at its curated-fallback
return. Merge is models.dev-first, curated-only extras appended,
case-insensitive dedup, graceful fallback when models.dev is offline.
- hermes_cli/model_switch.py: list_authenticated_providers() calls the
same merge in both its code paths (PROVIDER_TO_MODELS_DEV loop +
HERMES_OVERLAYS loop). Picker AND validation-fallback both see
fresh entries.
- tests/hermes_cli/test_models_dev_preferred_merge.py (new): 13 tests —
merge-helper unit tests (empty/raise/order/dedup), opencode-go/zen
behavior, openrouter+nous explicitly guarded from merge.
- tests/hermes_cli/test_opencode_go_in_model_list.py: converted from
snapshot-style assertion to a behavior-based floor check, so it
doesn't break when models.dev publishes additional opencode-go
entries.
Addresses a report from @pfanis via Telegram: newer Xiaomi variants
on OpenCode Go weren't appearing in the /model picker, and /model
was silently routing requests for new variants to older ones.
Follow-up for salvaged PR #14179.
`_cleanup_invalid_pid_path` previously called `remove_pid_file()` for the
default PID path, but that helper defensively refuses to delete a PID file
whose pid field differs from `os.getpid()` (to protect --replace handoffs).
Every realistic stale-PID scenario is exactly that case: a crashed/Ctrl+C'd
gateway left behind a PID file owned by a now-dead foreign PID.
Once `get_running_pid()` has confirmed the runtime lock is inactive, the
on-disk metadata is known to belong to a dead process, so we can force-unlink
both the PID file and the sibling `gateway.lock` directly instead of going
through the defensive helper.
Also adds a regression test with a dead foreign PID that would have failed
against the previous cleanup logic.
Plugin slash commands now surface as first-class commands in every gateway
enumerator — Discord native slash picker, Telegram BotCommand menu, Slack
/hermes subcommand map — without a separate per-platform plugin API.
The existing 'command:<name>' gateway hook gains a decision protocol via
HookRegistry.emit_collect(): handlers that return a dict with
{'decision': 'deny'|'handled'|'rewrite'|'allow'} can intercept slash
command dispatch before core handling runs, unifying what would otherwise
have been a parallel 'pre_gateway_command' hook surface.
Changes:
- gateway/hooks.py: add HookRegistry.emit_collect() that fires the same
handler set as emit() but collects non-None return values. Backward
compatible — fire-and-forget telemetry hooks still work via emit().
- hermes_cli/plugins.py: add optional 'args_hint' param to
register_command() so plugins can opt into argument-aware native UI
registration (Discord arg picker, future platforms).
- hermes_cli/commands.py: add _iter_plugin_command_entries() helper and
merge plugin commands into telegram_bot_commands() and
slack_subcommand_map(). New is_gateway_known_command() recognizes both
built-in and plugin commands so the gateway hook fires for either.
- gateway/platforms/discord.py: extract _build_auto_slash_command helper
from the COMMAND_REGISTRY auto-register loop and reuse it for
plugin-registered commands. Built-in name conflicts are skipped.
- gateway/run.py: before normal slash dispatch, call emit_collect on
command:<canonical> and honor deny/handled/rewrite/allow decisions.
Hook now fires for plugin commands too.
- scripts/release.py: AUTHOR_MAP entry for @Magaav.
- Tests: emit_collect semantics, plugin command surfacing per platform,
decision protocol (deny/handled/rewrite/allow + non-dict tolerance),
Discord plugin auto-registration + conflict skipping, is_gateway_known_command.
Salvaged from #14131 (@Magaav). Original PR added a parallel
'pre_gateway_command' hook and a platform-keyed plugin command
registry; this re-implementation reuses the existing 'command:<name>'
hook and treats plugin commands as platform-agnostic so the same
capability reaches Telegram and Slack without new API surface.
Co-authored-by: Magaav <73175452+Magaav@users.noreply.github.com>
`is_local_endpoint()` leaned on `ipaddress.is_private`, which classifies
RFC-1918 ranges and link-local as private but deliberately excludes the
RFC 6598 CGNAT block (100.64.0.0/10) — the range Tailscale uses for its
mesh IPs. As a result, Ollama reached over Tailscale (e.g.
`http://100.77.243.5:11434`) was treated as remote and missed the
automatic stream-read / stale-stream timeout bumps, so cold model load
plus long prefill would trip the 300 s watchdog before the first token.
Add a module-level `_TAILSCALE_CGNAT = ipaddress.IPv4Network("100.64.0.0/10")`
(built once) and extend `is_local_endpoint()` to match the block both
via the parsed-`IPv4Address` path and the existing bare-string fallback
(for symmetry with the 10/172/192 checks). Also hoist the previously
function-local `import ipaddress` to module scope now that it's used by
the constant.
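Roughly what the new branch does (the real is_local_endpoint() folds this into
its existing localhost/RFC-1918 checks and string fallback):

    import ipaddress

    _TAILSCALE_CGNAT = ipaddress.IPv4Network("100.64.0.0/10")  # RFC 6598, used by Tailscale

    def _is_cgnat_host(host: str) -> bool:
        try:
            addr = ipaddress.ip_address(host)
        except ValueError:
            return False  # hostname, not an IP literal; handled by the string fallback
        return addr.version == 4 and addr in _TAILSCALE_CGNAT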
Extend `TestIsLocalEndpoint` with a CGNAT positive set (lower bound,
representative host, MagicDNS anchor, upper bound) and a near-miss
negative set (just below 100.64.0.0, just above 100.127.255.255, well
outside the block, and first-octet-wrong).
Resolve Feishu @_user_N / @_all placeholders into display names plus a
structured [Mentioned: Name (open_id=...), ...] hint so agents can both
reason about who was mentioned and call Feishu OpenAPI tools with stable
open_ids. Strip bot self-mentions only at message edges (leading
unconditionally, trailing only before whitespace/terminal punctuation)
so commands parse cleanly while mid-text references are preserved.
Covers both plain-text and rich-post payloads.
Also fixes a pre-existing hydration bug: Client.request no longer accepts
the 'method' kwarg on lark-oapi 1.5.3, so bot identity silently failed
to hydrate and self-filtering never worked. Migrate to the
BaseRequest.builder() pattern and accept the 'app_name' field the API
actually returns. Tighten identity matching precedence so open_id is
authoritative when present on both sides.
Adds security.allow_private_urls / HERMES_ALLOW_PRIVATE_URLS toggle so
users on OpenWrt routers, TUN-mode proxies (Clash/Mihomo/Sing-box),
corporate split-tunnel VPNs, and Tailscale networks — where DNS resolves
public domains to 198.18.0.0/15 or 100.64.0.0/10 — can use web_extract,
browser, vision URL fetching, and gateway media downloads.
Single toggle in tools/url_safety.py; all 23 is_safe_url() call sites
inherit automatically. Cached for process lifetime.
Cloud metadata endpoints stay ALWAYS blocked regardless of the toggle:
169.254.169.254 (AWS/GCP/Azure/DO/Oracle), 169.254.170.2 (AWS ECS task
IAM creds), 169.254.169.253 (Azure IMDS wire server), 100.100.100.200
(Alibaba), fd00:ec2::254 (AWS IPv6), the entire 169.254.0.0/16
link-local range, and the metadata.google.internal / metadata.goog
hostnames (checked pre-DNS so they can't be bypassed on networks where
those names resolve to local IPs).
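A schematic of the precedence (the real is_safe_url() covers more cases across
its 23 call sites; the names here are illustrative):

    import ipaddress
    import os

    _ALWAYS_BLOCKED_IPS = {
        "169.254.169.254", "169.254.170.2", "169.254.169.253",
        "100.100.100.200", "fd00:ec2::254",
    }
    _ALWAYS_BLOCKED_HOSTNAMES = {"metadata.google.internal", "metadata.goog"}

    def is_safe_private_target(hostname: str, resolved_ip: str) -> bool:
        # Metadata endpoints stay blocked unconditionally; hostnames are checked
        # pre-DNS so a local resolver can't whitewash them.
        if hostname.lower() in _ALWAYS_BLOCKED_HOSTNAMES or resolved_ip in _ALWAYS_BLOCKED_IPS:
            return False
        addr = ipaddress.ip_address(resolved_ip)
        if addr.is_link_local:  # the entire 169.254.0.0/16 range stays blocked
            return False
        # Remaining private/CGNAT ranges are allowed only when the user opts in.
        allow = os.environ.get("HERMES_ALLOW_PRIVATE_URLS", "").lower() in ("1", "true", "yes")
        return allow or addr.is_global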
Supersedes #3779 (narrower HERMES_ALLOW_RFC2544 for the same class of
users).
Co-authored-by: kshitijk4poor <82637225+kshitijk4poor@users.noreply.github.com>
Anthropic's API can legitimately return content=[] with stop_reason="end_turn"
when the model has nothing more to add after a turn that already delivered the
user-facing text alongside a trivial tool call (e.g. memory write). The transport
validator was treating that as an invalid response, triggering 3 retries that
each returned the same valid-but-empty response, then failing the run with
"Invalid API response after 3 retries."
The downstream normalizer already handles empty content correctly (empty loop
over response.content, content=None, finish_reason="stop"), so the only fix
needed is at the validator boundary.
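The validator change in miniature (attribute names follow the Anthropic SDK
response shape):

    def _anthropic_response_looks_valid(response) -> bool:
        content = getattr(response, "content", None) or []
        stop_reason = getattr(response, "stop_reason", None)
        if content:
            return True
        # Empty content is legitimate when the model simply ended its turn;
        # an empty tool_use turn (or a missing stop_reason) is still invalid.
        return stop_reason == "end_turn"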
Tests:
- Empty content + stop_reason="end_turn" → valid (the fix)
- Empty content + stop_reason="tool_use" → still invalid (regression guard)
- Empty content without stop_reason → still invalid (existing behavior preserved)
When the streaming connection dropped AFTER user-visible text was
delivered but a tool call was in flight, we stubbed the turn with a
'⚠ Stream stalled mid tool-call; Ask me to retry' warning — costing
an iteration and breaking the flow. Users report this happening
increasingly often on long SSE streams through flaky provider routes.
Fix: in the existing inner stream-retry loop, relax the
deltas_were_sent short-circuit. If a tool call was in flight
(partial_tool_names populated) AND the error is a transient connection
error (timeout, RemoteProtocolError, SSE 'connection lost', etc.),
silently retry instead of bailing out. Fire a brief 'Connection
dropped mid tool-call; reconnecting…' marker so the user understands
the preamble is about to be re-streamed.
Researched how Claude Code (tombstone + non-streaming fallback),
OpenCode (blind Effect.retry wrapping whole stream), and Clawdbot
(4-way gate: stopReason==error + output==0 + !hadPotentialSideEffects)
handle this. Chose the narrow Clawdbot-style gate: retry only when
(a) a tool call was actually in flight (otherwise the existing
stub-with-recovered-text is correct for pure-text stalls) and
(b) the error is transient. Side-effect safety is automatic — no
tool has been dispatched within this single API call yet.
UX trade-off: user sees preamble text twice on retry (OpenCode-style).
Strictly better than a lost action with a 'retry manually' message.
If retries exhaust, falls through to the existing stub-with-warning
path so the user isn't left with zero signal.
Tests: 3 new tests in TestSilentRetryMidToolCall covering
(1) silent retry recovers tool call; (2) exhausted retries fall back
to stub; (3) text-only stalls don't trigger retry. 30/30 pass.
Copilot on #14138 flagged that the share report says '(file not found)'
when the log exists but is empty (either because the primary is empty
and no .1 rotation exists, or in the rare race where the file is
truncated between _resolve_log_path() and stat()).
- Split _primary_log_path() out of _resolve_log_path so both can share
the LOG_FILES/home math without duplication.
- _capture_log_snapshot now reports '(file empty)' when the primary
path exists on disk with zero bytes, and keeps '(file not found)'
for the truly-missing case.
Tests: rename test_returns_none_for_empty → test_empty_primary_reports_file_empty
with the new assertion, plus a race-path test that monkeypatches
_resolve_log_path to exercise the size==0 branch directly.
- normalizeStatusBar: trim/lowercase + 'on' → 'top' alias so user-edited
YAML variants (Top, " bottom ", on) coerce correctly
- shift-tab yolo: no-op with sys note when no live session; success-gated
echo and catch fallback so RPC failures don't report as 'yolo off'
- tui_gateway config.set/get statusbar: isinstance(display, dict) guards
mirroring the compact branch so a malformed display scalar in config.yaml
can't raise
Tests: +1 vitest for trim/case/on, +2 pytest for non-dict display survival.
Drop rebased test assumptions about theme-mode helpers removed on main and keep the status bar skin integration aligned with the current skin engine model.
Route prompt_toolkit status bar colors through the skin engine so /skin updates the status bar alongside the rest of the interactive TUI.
Add regression coverage for the new status bar style override keys and CLI style composition.
These thin wrappers around _capture_log_snapshot had zero production
callers after the snapshot refactor — run_debug_share uses snapshots
directly and collect_debug_report captures internally. The wrappers
also caused a performance regression: _read_log_tail read up to 512KB
and built full_text just to return tail_text.
Remove both wrappers and migrate TestReadFullLog → TestCaptureLogSnapshot
to test _capture_log_snapshot directly. Same coverage, tests the real
API instead of dead indirection.
Add missing AUTHOR_MAP entry for taosiyuan163 whose truncation boundary
fix was adapted into _capture_log_snapshot().
Add regression tests proving: line-boundary truncation keeps the full
first line, mid-line truncation correctly drops the partial fragment.
Adds _reactions_enabled() gating to match Discord (DISCORD_REACTIONS) and
Telegram (TELEGRAM_REACTIONS) pattern. Defaults to true to preserve existing
behavior. Gates at three levels:
- _handle_slack_message: skips _reacting_message_ids registration
- on_processing_start: early return
- on_processing_complete: early return
Also adds config.yaml bridge (slack.reactions) and two new tests.
Slack reactions were placed around handle_message(), which returns
immediately after spawning a background task. This caused the 👀
→ ✅ swap to happen before any real work began.
Fix: implement on_processing_start / on_processing_complete callbacks
(matching Discord/Telegram) so reactions bracket actual _message_handler
work driven by the base class.
Also fixes missing stop_typing() for Slack's assistant thread status
indicator, which left 'is thinking...' stuck in the UI after processing
completed.
- Add _reacting_message_ids set for DM/@mention-only gating
- Add _active_status_threads dict for stop_typing lookup
- Update test_reactions_in_message_flow for new callback pattern
- Add test_reactions_failure_outcome and test_reactions_skipped_for_non_dm_non_mention
The 404 branch in _classify_by_status had dead code: the generic
fallback below the _MODEL_NOT_FOUND_PATTERNS check returned the
exact same classification (model_not_found + should_fallback=True),
so every 404 — regardless of message — was treated as a missing model.
This bites local-endpoint users (llama.cpp, Ollama, vLLM) whose 404s
usually mean a wrong endpoint path, proxy routing glitch, or transient
backend issue — not a missing model. Claiming 'model not found' misleads
the next turn and silently falls back to another provider when the real
problem was a URL typo the user should see.
Fix: only classify 404 as model_not_found when the message actually
matches _MODEL_NOT_FOUND_PATTERNS ("invalid model", "model not found",
etc.). Otherwise fall through as unknown (retryable) so the real error
surfaces in the retry loop.
Test updated to match the new behavior. 103 error_classifier tests pass.
* fix(plugins): auto-coerce user-installed memory plugins to kind=exclusive
User-installed memory provider plugins at $HERMES_HOME/plugins/<name>/
were being dispatched to the general PluginManager, which has no
register_memory_provider method on PluginContext. Every startup logged:
Failed to load plugin 'mempalace': 'PluginContext' object has no
attribute 'register_memory_provider'
Bundled memory providers were already skipped via skip_names={memory,
context_engine} in discover_and_load, but user-installed ones weren't.
Fix: _parse_manifest now scans the plugin's __init__.py source for
'register_memory_provider' or 'MemoryProvider' (same heuristic as
plugins/memory/__init__.py:_is_memory_provider_dir) and auto-coerces
kind to 'exclusive' when the manifest didn't declare one explicitly.
This routes the plugin to plugins/memory discovery instead of the
general loader.
The escape hatch: if a manifest explicitly declares kind: standalone,
the heuristic doesn't override it.
Reported by Uncle HODL on Discord.
* fix(nous): actionable CLI message when Nous 401 refresh fails
Mirrors the Anthropic 401 diagnostic pattern. When Nous returns 401
and the credential refresh (_try_refresh_nous_client_credentials)
also fails, the user used to see only the raw APIError. Now prints:
🔐 Nous 401 — Portal authentication failed.
Response: <truncated body>
Most likely: Portal OAuth expired, account out of credits, or
agent key revoked.
Troubleshooting:
• Re-authenticate: hermes login --provider nous
• Check credits / billing: https://portal.nousresearch.com
• Verify stored credentials: $HERMES_HOME/auth.json
• Switch providers temporarily: /model <model> --provider openrouter
Addresses the common 'my hermes model hangs' pattern where the user's
Portal OAuth expired and the CLI gave no hint about the next step.
Adds schema v7 'api_call_count' column. run_agent.py increments it by 1
per LLM API call, web_server analytics SQL aggregates it, frontend uses
the real counter instead of summing sessions.
The 'API Calls' card on the analytics dashboard previously displayed
COUNT(*) from the sessions table — the number of conversations, not
LLM requests. Each session makes 10-90 API calls through the tool loop,
so the reported number was ~30x lower than real.
Salvaged from PR #10140 (@kshitijk4poor). The cache-token accuracy
portions of the original PR were deferred — per-provider analytics is
the better path there, since cache_write_tokens and actual_cost_usd
are only reliably available from a subset of providers (Anthropic
native, Codex Responses, OpenRouter with usage.include).
Tests:
- schema_version v7 assertion
- migration v2 -> v7 adds api_call_count column with default 0
- update_token_counts increments api_call_count by provided delta
- absolute=True sets api_call_count directly
- /api/analytics/usage exposes total_api_calls in totals
- Add configurable retain_tags / retain_source / retain_user_prefix /
retain_assistant_prefix knobs for native Hindsight.
- Thread gateway session identity (user_name, chat_id, chat_name,
chat_type, thread_id) through AIAgent and MemoryManager into
MemoryProvider.initialize kwargs so providers can scope and tag
retained memories.
- Hindsight attaches the new identity fields as retain metadata,
merges per-call tool tags with configured default tags, and uses
the configurable transcript labels for auto-retained turns.
Co-authored-by: Abner <abner.the.foreman@agentmail.to>
* feat(state): auto-prune old sessions + VACUUM state.db at startup
state.db accumulates every session, message, and FTS5 index entry forever.
A heavy user (gateway + cron) reported 384MB with 982 sessions / 68K messages
causing slowdown; manual 'hermes sessions prune --older-than 7' + VACUUM
brought it to 43MB. The prune command and VACUUM are not wired to run
automatically anywhere — sessions grew unbounded until users noticed.
Changes:
- hermes_state.py: new state_meta key/value table, vacuum() method, and
maybe_auto_prune_and_vacuum() — idempotent via last-run timestamp in
state_meta so it only actually executes once per min_interval_hours
across all Hermes processes for a given HERMES_HOME. Never raises.
- hermes_cli/config.py: new 'sessions:' block in DEFAULT_CONFIG
(auto_prune=True, retention_days=90, vacuum_after_prune=True,
min_interval_hours=24). Added to _KNOWN_ROOT_KEYS.
- cli.py: call maintenance once at HermesCLI init (shared helper
_run_state_db_auto_maintenance reads config and delegates to DB).
- gateway/run.py: call maintenance once at GatewayRunner init.
- Docs: user-guide/sessions.md rewrites 'Automatic Cleanup' section.
Why VACUUM matters: SQLite does NOT shrink the file on DELETE — freed
pages get reused on next INSERT. Without VACUUM, a delete-heavy DB stays
bloated forever. VACUUM only runs when the prune actually removed rows,
so tight DBs don't pay the I/O cost.
Tests: 10 new tests in tests/test_hermes_state.py covering state_meta,
vacuum, idempotency, interval skipping, VACUUM-only-when-needed,
corrupt-marker recovery. All 246 existing state/config/gateway tests
still pass.
Verified E2E with real imports + isolated HERMES_HOME: DEFAULT_CONFIG
exposes the new block, load_config() returns it for fresh installs,
first call prunes+vacuums, second call within min_interval_hours skips,
and the state_meta marker persists across connection close/reopen.
* sessions.auto_prune defaults to false (opt-in)
Session history powers session_search recall across past conversations,
so silently pruning on startup could surprise users. Ship the machinery
disabled and let users opt in when they notice state.db is hurting
performance.
- DEFAULT_CONFIG.sessions.auto_prune: True → False
- Call-site fallbacks in cli.py and gateway/run.py match the new default
(so unmigrated configs still see off)
- Docs: flip 'Enable in config.yaml' framing + tip explains the tradeoff
Adds a first-class 'stepfun' API-key provider surfaced as Step Plan:
- Support Step Plan setup for both International and China regions
- Discover Step Plan models live from /step_plan/v1/models, with a
small coding-focused fallback catalog when discovery is unavailable
- Thread StepFun through provider metadata, setup persistence, status
and doctor output, auxiliary routing, and model normalization
- Add tests for provider resolution, model validation, metadata
mapping, and StepFun region/model persistence
Based on #6005 by @hengm3467.
Co-authored-by: hengm3467 <100685635+hengm3467@users.noreply.github.com>
* feat(plugins): pluggable image_gen backends + OpenAI provider
Adds a ImageGenProvider ABC so image generation backends register as
bundled plugins under `plugins/image_gen/<name>/`. The plugin scanner
gains three primitives to make this work generically:
- `kind:` manifest field (`standalone` | `backend` | `exclusive`).
Bundled `kind: backend` plugins auto-load — no `plugins.enabled`
incantation. User-installed backends stay opt-in.
- Path-derived keys: `plugins/image_gen/openai/` gets key
`image_gen/openai`, so a future `tts/openai` cannot collide.
- Depth-2 recursion into category namespaces (parent dirs without a
`plugin.yaml` of their own).
Includes `OpenAIImageGenProvider` as the first consumer (gpt-image-1.5
default, plus gpt-image-1, gpt-image-1-mini, DALL-E 3/2). Base64
responses save to `$HERMES_HOME/cache/images/`; URL responses pass
through.
FAL stays in-tree for this PR — a follow-up ports it into
`plugins/image_gen/fal/` so the in-tree `image_generation_tool.py`
slims down. The dispatch shim in `_handle_image_generate` only fires
when `image_gen.provider` is explicitly set to a non-FAL value, so
existing FAL setups are untouched.
- 41 unit tests (scanner recursion, kind parsing, gate logic,
registry, OpenAI payload shapes)
- E2E smoke verified: bundled plugin autoloads, registers, and
`_handle_image_generate` routes to OpenAI when configured
* fix(image_gen/openai): don't send response_format to gpt-image-*
The live API rejects it: 'Unknown parameter: response_format'
(verified 2026-04-21 with gpt-image-1.5). gpt-image-* models return
b64_json unconditionally, so the parameter was both unnecessary and
actively broken.
* feat(image_gen/openai): gpt-image-2 only, drop legacy catalog
gpt-image-2 is the latest/best OpenAI image model (released 2026-04-21)
and there's no reason to expose the older gpt-image-1.5 / gpt-image-1 /
dall-e-3 / dall-e-2 alongside it — slower, lower quality, or awkward
(dall-e-2 squares only). Trim the catalog down to a single model.
Live-verified end-to-end: landscape 1536x1024 render of a Moog-style
synth matches prompt exactly, 2.4MB PNG saved to cache.
* feat(image_gen/openai): expose gpt-image-2 as three quality tiers
Users pick speed/fidelity via the normal model picker instead of a
hidden quality knob. All three tier IDs resolve to the single underlying
gpt-image-2 API model with a different quality parameter:
gpt-image-2-low ~15s fast iteration
gpt-image-2-medium ~40s default
gpt-image-2-high ~2min highest fidelity
Live-measured on OpenAI's API today: 15.4s / 40.8s / 116.9s for the
same 1024x1024 prompt.
Config:
image_gen.openai.model: gpt-image-2-high
# or
image_gen.model: gpt-image-2-low
# or env var for scripts/tests
OPENAI_IMAGE_MODEL=gpt-image-2-medium
Live-verified end-to-end with the low tier: 18.8s landscape render of a
golden retriever in wildflowers, vision-confirmed exact match.
* feat(tools_config): plugin image_gen providers inject themselves into picker
'hermes tools' → Image Generation now shows plugin-registered backends
alongside Nous Subscription and FAL.ai without tools_config.py needing
to know about them. OpenAI appears as a third option today; future
backends appear automatically as they're added.
Mechanism:
- ImageGenProvider gains an optional get_setup_schema() hook
(name, badge, tag, env_vars). Default derived from display_name.
- tools_config._plugin_image_gen_providers() pulls the schemas from
every registered non-FAL plugin provider.
- _visible_providers() appends those rows when rendering the Image
Generation category.
- _configure_provider() handles the new image_gen_plugin_name marker:
writes image_gen.provider and routes to the plugin's list_models()
catalog for the model picker.
- _toolset_needs_configuration_prompt('image_gen') stops demanding a
FAL key when any plugin provider reports is_available().
FAL is skipped in the plugin path because it already has hardcoded
TOOL_CATEGORIES rows — when it gets ported to a plugin in a follow-up
PR the hardcoded rows go away and it surfaces through the same path
as OpenAI.
Verified live: picker shows Nous Subscription / FAL.ai / OpenAI.
Picking OpenAI prompts for OPENAI_API_KEY, then shows the
gpt-image-2-low/medium/high model picker sourced from the plugin.
397 tests pass across plugins/, tools_config, registry, and picker.
* fix(image_gen): close final gaps for plugin-backend parity with FAL
Two small places that still hardcoded FAL:
- hermes_cli/setup.py status line: an OpenAI-only setup showed
'Image Generation: missing FAL_KEY'. Now probes plugin providers
and reports '(OpenAI)' when one is_available() — or falls back to
'missing FAL_KEY or OPENAI_API_KEY' if nothing is configured.
- image_generate tool schema description: said 'using FAL.ai, default
FLUX 2 Klein 9B'. Rewrote provider-neutral — 'backend and model are
user-configured' — and notes the 'image' field can be a URL or an
absolute path, which the gateway delivers either way via
extract_local_files().
Kimi's /coding endpoint speaks the Anthropic Messages protocol but has
its own thinking semantics: when thinking.enabled is sent, Kimi validates
the history and requires every prior assistant tool-call message to carry
OpenAI-style reasoning_content. The Anthropic path never populates that
field, and convert_messages_to_anthropic strips Anthropic thinking blocks
on third-party endpoints — so after one tool-calling turn the next request
fails with:
HTTP 400: thinking is enabled but reasoning_content is missing in
assistant tool call message at index N
Kimi on chat_completions handles thinking via extra_body in
ChatCompletionsTransport (#13503). On the Anthropic route, drop the
parameter entirely and let Kimi drive reasoning server-side.
build_anthropic_kwargs now gates the reasoning_config -> thinking block
on not _is_kimi_coding_endpoint(base_url).
Tests: 8 new parametric tests cover /coding, /coding/v1, /coding/anthropic,
/coding/ (trailing slash), explicit disabled, other third-party endpoints
still getting thinking (MiniMax), native Anthropic unaffected, and the
non-/coding Kimi root route.
Fourth and final transport — completes the transport layer with all four
api_modes covered. Wraps agent/bedrock_adapter.py behind the ProviderTransport
ABC, handles both raw boto3 dicts and already-normalized SimpleNamespace.
Wires all transport methods to production paths in run_agent.py:
- build_kwargs: _build_api_kwargs bedrock branch
- validate_response: response validation, new bedrock_converse branch
- finish_reason: new bedrock_converse branch in finish_reason extraction
Based on PR #13467 by @kshitijk4poor, with one adjustment: the main normalize
loop does NOT add a bedrock_converse branch to invoke normalize_response on
the already-normalized response. Bedrock's normalize_converse_response runs
at the dispatch site (run_agent.py:5189), so the response already has the
OpenAI-compatible .choices[0].message shape by the time the main loop sees
it. Falling through to the chat_completions else branch is correct and
sidesteps a redundant NormalizedResponse rebuild.
Transport coverage — complete:
| api_mode | Transport | build_kwargs | normalize | validate |
|--------------------|--------------------------|:------------:|:---------:|:--------:|
| anthropic_messages | AnthropicTransport | ✅ | ✅ | ✅ |
| codex_responses | ResponsesApiTransport | ✅ | ✅ | ✅ |
| chat_completions | ChatCompletionsTransport | ✅ | ✅ | ✅ |
| bedrock_converse | BedrockTransport | ✅ | ✅ | ✅ |
17 new BedrockTransport tests pass. 117 transport tests total pass.
160 bedrock/converse tests across tests/agent/ pass. Full tests/run_agent/
targeted suite passes (885/885 + 15 skipped; the 1 remaining failure is the
pre-existing test_concurrent_interrupt flake on origin/main).
Third concrete transport — handles the default 'chat_completions' api_mode used
by ~16 OpenAI-compatible providers (OpenRouter, Nous, NVIDIA, Qwen, Ollama,
DeepSeek, xAI, Kimi, custom, etc.). Wires build_kwargs + validate_response to
production paths.
Based on PR #13447 by @kshitijk4poor, with fixes:
- Preserve tool_call.extra_content (Gemini thought_signature) via
ToolCall.provider_data — the original shim stripped it, causing 400 errors
on multi-turn Gemini 3 thinking requests.
- Preserve reasoning_content distinctly from reasoning (DeepSeek/Moonshot) so
the thinking-prefill retry check (_has_structured) still triggers.
- Port Kimi/Moonshot quirks (32000 max_tokens, top-level reasoning_effort,
extra_body.thinking) that landed on main after the original PR was opened.
- Keep _qwen_prepare_chat_messages_inplace alive and call it through the
transport when sanitization already deepcopied (avoids a second deepcopy).
- Skip the back-compat SimpleNamespace shim in the main normalize loop — for
chat_completions, response.choices[0].message is already the right shape
with .content/.tool_calls/.reasoning/.reasoning_content/.reasoning_details
and per-tool-call .extra_content from the OpenAI SDK.
run_agent.py: -239 lines in _build_api_kwargs default branch extracted to the
transport. build_kwargs now owns: codex-field sanitization, Qwen portal prep,
developer role swap, provider preferences, max_tokens resolution (ephemeral >
user > NVIDIA 16384 > Qwen 65536 > Kimi 32000 > anthropic_max_output), Kimi
reasoning_effort + extra_body.thinking, OpenRouter/Nous/GitHub reasoning,
Nous product attribution tags, Ollama num_ctx, custom-provider think=false,
Qwen vl_high_resolution_images, request_overrides.
39 new transport tests (8 build_kwargs, 5 Kimi, 4 validate, 4 normalize
including extra_content regression, 3 cache stats, 3 basic). Tests/run_agent/
targeted suite passes (885/885 + 15 skipped; the 1 remaining failure is the
test_concurrent_interrupt flake present on origin/main).
Wire the auxiliary client (compaction, vision, session search, web extract)
to the Nous Portal's curated recommended-models endpoint when running on
Nous Portal, with a TTL-cached fetch that mirrors how we pull /models for
pricing.
hermes_cli/models.py
- fetch_nous_recommended_models(portal_base_url, force_refresh=False)
10-minute TTL cache, keyed per portal URL (staging vs prod don't
collide). Public endpoint, no auth required. Returns {} on any
failure so callers always get a dict.
- get_nous_recommended_aux_model(vision, free_tier=None, ...)
Tier-aware pick from the payload:
- Paid tier → paidRecommended{Vision,Compaction}Model, falling back
to freeRecommended* when the paid field is null (common during
staged rollouts of new paid models).
- Free tier → freeRecommended* only, never leaks paid models.
When free_tier is None, auto-detects via the existing
check_nous_free_tier() helper (already cached 3 min against
/api/oauth/account). Detection errors default to paid so we never
silently downgrade a paying user.
agent/auxiliary_client.py — _try_nous()
- Replaces the hardcoded xiaomi/mimo free-tier branch with a single call
to get_nous_recommended_aux_model(vision=vision).
- Falls back to _NOUS_MODEL (google/gemini-3-flash-preview) when the
Portal is unreachable or returns a null recommendation.
- The Portal is now the source of truth for aux model selection; the
xiaomi allowlist we used to carry is effectively dead.
Tests (15 new)
- tests/hermes_cli/test_models.py::TestNousRecommendedModels
Fetch caching, per-portal keying, network failure, force_refresh;
paid-prefers-paid, paid-falls-to-free, free-never-leaks-paid,
auto-detect, detection-error → paid default, null/blank modelName
handling.
- tests/agent/test_auxiliary_client.py::TestNousAuxiliaryRefresh
_try_nous honors Portal recommendation for text + vision, falls
back to google/gemini-3-flash-preview on None or exception.
Behavior won't visibly change today — both tier recommendations currently
point at google/gemini-3-flash-preview — but the moment the Portal ships
a better paid recommendation, subscribers pick it up within 10 minutes
without a Hermes release.
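A condensed sketch of the two helpers (the endpoint path and the requests client are placeholders, and the picker below takes the payload directly instead of auto-detecting the tier the way the real function does):

    import time
    import requests  # placeholder HTTP client for the sketch

    _CACHE: dict[str, tuple[float, dict]] = {}
    _TTL_SECONDS = 600  # 10-minute TTL, keyed per portal URL

    def fetch_nous_recommended_models(portal_base_url: str, force_refresh: bool = False) -> dict:
        now = time.time()
        hit = _CACHE.get(portal_base_url)
        if hit and not force_refresh and now - hit[0] < _TTL_SECONDS:
            return hit[1]
        try:
            # hypothetical path; the real endpoint is the Portal's public
            # recommended-models route and needs no auth
            resp = requests.get(f"{portal_base_url.rstrip('/')}/api/recommended-models", timeout=10)
            payload = resp.json() if resp.ok else {}
        except Exception:
            return {}        # callers always get a dict; failures are not cached here
        _CACHE[portal_base_url] = (now, payload)
        return payload

    def pick_recommended_aux_model(payload: dict, vision: bool, free_tier: bool) -> str | None:
        kind = "Vision" if vision else "Compaction"
        free = payload.get(f"freeRecommended{kind}Model")
        if free_tier:
            return free or None                                  # never leak paid models
        return payload.get(f"paidRecommended{kind}Model") or free or None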
Drop _NOUS_ALLOWED_FREE_MODELS + filter_nous_free_models and its two call
sites. Whatever Nous Portal prices as free now shows up in the picker as-is
— no local allowlist gatekeeping. Free-tier partitioning (paid vs free in
the menu) still runs via partition_nous_models_by_tier.
Add ResponsesApiTransport wrapping codex_responses_adapter.py behind the
ProviderTransport ABC. Auto-registered via _discover_transports().
Wire ALL Codex transport methods to production paths in run_agent.py:
- build_kwargs: main _build_api_kwargs codex branch (50 lines extracted)
- normalize_response: main loop + flush + summary + retry (4 sites)
- convert_tools: memory flush tool override
- convert_messages: called internally via build_kwargs
- validate_response: response validation gate
- preflight_kwargs: request sanitization (2 sites)
Remove 7 dead legacy wrappers from AIAgent (_responses_tools,
_chat_messages_to_responses_input, _normalize_codex_response,
_preflight_codex_api_kwargs, _preflight_codex_input_items,
_extract_responses_message_text, _extract_responses_reasoning_text).
Keep 3 ID manipulation methods still used by _build_assistant_message.
Update 18 test call sites across 3 test files to call adapter functions
directly instead of through deleted AIAgent wrappers.
24 new tests. 343 codex/responses/transport tests pass (0 failures).
PR 4 of the provider transport refactor.
Follow-ups after salvaging xiaoqiang243's kimi-for-coding patches:
- KIMI_CODE_BASE_URL: drop trailing /v1 (was /coding/v1).
The /coding endpoint speaks Anthropic Messages, and the Anthropic SDK
appends /v1/messages internally. /coding/v1 + SDK suffix produced
/coding/v1/v1/messages (a 404). /coding + SDK suffix now yields
/coding/v1/messages correctly.
- kimi-coding ProviderConfig: keep legacy default api.moonshot.ai/v1 so
non-sk-kimi- moonshot keys still authenticate. sk-kimi- keys are
already redirected to api.kimi.com/coding via _resolve_kimi_base_url.
- doctor.py: update Kimi UA to claude-code/0.1.0 (was KimiCLI/1.30.0)
and rewrite /coding base URLs to /coding/v1 for the /models health
check (Anthropic surface has no /models).
- test_kimi_env_vars: accept KIMI_CODING_API_KEY as a secondary env var.
E2E verified:
sk-kimi-<key> → https://api.kimi.com/coding/v1/messages (Anthropic)
sk-<legacy> → https://api.moonshot.ai/v1/chat/completions (OpenAI)
UA: claude-code/0.1.0, x-api-key: <sk-kimi-*>
The CLI has no attachment channel — MEDIA:<path> tags are only
intercepted on messaging gateway platforms (Telegram, Discord,
Slack, WhatsApp, Signal, BlueBubbles, email, etc.). On the CLI
they render as literal text, which is confusing for users.
The CLI platform hint was the one PLATFORM_HINTS entry that said
nothing about file delivery, so models trained on the messaging
hints would default to MEDIA: tags on the CLI too. Tool schemas
(browser_tool, tts_tool, etc.) also recommend MEDIA: generically.
Extend the CLI hint to explicitly discourage MEDIA: tags and tell
the agent to reference files by plain absolute path instead.
Add a regression test asserting the CLI hint carries negative
guidance about MEDIA: while messaging hints keep positive guidance.
A single global MAX_TEXT_LENGTH = 4000 truncated every TTS provider at
4000 chars, causing long inputs to be silently chopped even though the
underlying APIs allow much more:
- OpenAI: 4096
- xAI: 15000
- MiniMax: 10000
- ElevenLabs: 5000 / 10000 / 30000 / 40000 (model-aware)
- Gemini: ~5000
- Edge: ~5000
The schema description also told the model 'Keep under 4000 characters',
which encouraged the agent to self-chunk long briefs into multiple TTS
calls (producing 3 separate audio files instead of one).
New behavior:
- PROVIDER_MAX_TEXT_LENGTH table + ELEVENLABS_MODEL_MAX_TEXT_LENGTH
encode the documented per-provider limits.
- _resolve_max_text_length(provider, cfg) resolves:
1. tts.<provider>.max_text_length user override
2. ElevenLabs model_id lookup
3. provider default
4. 4000 fallback
- text_to_speech_tool() and stream_tts_to_speaker() both call the
resolver; old MAX_TEXT_LENGTH alias kept for back-compat.
- Schema description no longer hardcodes 4000.
Tests: 27 new unit + E2E tests; all 53 existing TTS tests and 253
voice-command/voice-cli tests still pass.
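A sketch of the resolution order (config shape and the Gemini/Edge numbers are approximations taken from the list above; the ElevenLabs table is left empty here):

    PROVIDER_MAX_TEXT_LENGTH = {
        "openai": 4096,
        "xai": 15000,
        "minimax": 10000,
        "gemini": 5000,   # approximate
        "edge": 5000,     # approximate
    }
    ELEVENLABS_MODEL_MAX_TEXT_LENGTH: dict[str, int] = {}  # model_id -> 5000/10000/30000/40000 tier

    MAX_TEXT_LENGTH = 4000  # legacy alias kept for back-compat

    def _resolve_max_text_length(provider: str, cfg: dict) -> int:
        provider_cfg = (cfg.get("tts") or {}).get(provider) or {}
        # 1. explicit user override: tts.<provider>.max_text_length
        override = provider_cfg.get("max_text_length")
        if override:
            return int(override)
        # 2. ElevenLabs limit depends on the configured model_id
        if provider == "elevenlabs":
            model_limit = ELEVENLABS_MODEL_MAX_TEXT_LENGTH.get(provider_cfg.get("model_id", ""))
            if model_limit:
                return model_limit
        # 3. provider default, else 4. the old 4000 fallback
        return PROVIDER_MAX_TEXT_LENGTH.get(provider, MAX_TEXT_LENGTH)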
Follow-up on #13724: showing literally every source was too noisy.
Now fetches a wider window (larger limit) and then filters to a curated
allowlist of human-facing sources (tui/cli plus chat adapters like
telegram/discord/slack/whatsapp/etc). This keeps row #7 fixed (telegram
sessions visible in /resume) without surfacing internal source kinds
such as tool/acp.
* feat(models): hide OpenRouter models that don't advertise tool support
Port from Kilo-Org/kilocode#9068.
hermes-agent is tool-calling-first — every provider path assumes the
model can invoke tools. Models whose OpenRouter supported_parameters
doesn't include 'tools' (e.g. image-only or completion-only models)
cannot be driven by the agent loop and fail at the first tool call.
Filter them out of fetch_openrouter_models() so they never appear in
the model picker (`hermes model`, setup wizard, /model slash command).
Permissive when the field is missing — OpenRouter-compatible gateways
(Nous Portal, private mirrors, older snapshots) don't always populate
supported_parameters. Treat missing as 'unknown → allow' rather than
silently emptying the picker on those gateways. Only hide models
whose supported_parameters is an explicit list that omits tools.
Tests cover: tools present → kept, tools absent → dropped, field
missing → kept, malformed non-list → kept, non-dict item → kept,
empty list → dropped.
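The filter reduces to a few lines; a sketch under the assumption that each catalog entry is the raw OpenRouter /models dict:

    def _supports_tools(entry: object) -> bool:
        if not isinstance(entry, dict):
            return True                      # non-dict item: keep
        params = entry.get("supported_parameters")
        if not isinstance(params, list):
            return True                      # missing or malformed: unknown, allow
        return "tools" in params             # explicit list must advertise tools

    def filter_tool_capable(models: list) -> list:
        # applied inside fetch_openrouter_models() before the picker sees the list
        return [m for m in models if _supports_tools(m)]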
* feat(delegate): cross-agent file state coordination for concurrent subagents
Prevents mangled edits when concurrent subagents touch the same file
(same process, same filesystem — the mangle scenario from #11215).
Three layers, all opt-out via HERMES_DISABLE_FILE_STATE_GUARD=1:
1. FileStateRegistry (tools/file_state.py) — process-wide singleton
tracking per-agent read stamps and the last writer globally.
check_stale() names the sibling subagent in the warning when a
non-owning agent wrote after this agent's last read.
2. Per-path threading.Lock wrapped around the read-modify-write
region in write_file_tool and patch_tool. Concurrent siblings on
the same path serialize; different paths stay fully parallel.
V4A multi-file patches lock in sorted path order (deadlock-free).
3. Delegate-completion reminder in tools/delegate_tool.py: after a
subagent returns, writes_since(parent, child_start, parent_reads)
appends '[NOTE: subagent modified files the parent previously
read — re-read before editing: ...]' to entry.summary when the
child touched anything the parent had already seen.
Complements (does not replace) the existing path-overlap check in
run_agent._should_parallelize_tool_batch — batch check prevents
same-file parallel dispatch within one agent's turn (cheap prevention,
zero API cost), registry catches cross-subagent and cross-turn
staleness at write time (detection).
Behavior is warning-only, not hard-failing — matches existing project
style. Errors surface naturally: sibling writes often invalidate the
old_string in patch operations, which already errors cleanly.
Tests: tests/tools/test_file_state_registry.py — 16 tests covering
registry state transitions, per-path locking, per-path-not-global
locking, writes_since filtering, kill switch, and end-to-end
integration through the real read_file/write_file/patch handlers.
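A condensed sketch of the registry idea (the real tools/file_state.py also tracks writes_since, the kill switch, and richer warning text; method names here are illustrative):

    import threading

    class FileStateRegistry:
        def __init__(self) -> None:
            self._reads: dict[tuple[str, str], float] = {}          # (agent, path) -> last read ts
            self._last_writer: dict[str, tuple[str, float]] = {}    # path -> (agent, write ts)
            self._locks: dict[str, threading.Lock] = {}
            self._meta = threading.Lock()

        def record_read(self, agent: str, path: str, ts: float) -> None:
            self._reads[(agent, path)] = ts

        def record_write(self, agent: str, path: str, ts: float) -> None:
            self._last_writer[path] = (agent, ts)

        def check_stale(self, agent: str, path: str) -> str | None:
            # Warn when a sibling agent wrote after this agent's last read.
            writer = self._last_writer.get(path)
            last_read = self._reads.get((agent, path))
            if writer and last_read is not None and writer[0] != agent and writer[1] > last_read:
                return f"{path} was modified by {writer[0]} after your last read; re-read before editing"
            return None

        def lock_for(self, path: str) -> threading.Lock:
            # Per-path lock serializes read-modify-write on the same file;
            # different paths stay fully parallel.
            with self._meta:
                return self._locks.setdefault(path, threading.Lock())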
Reported during TUI v2 blitz retest: /resume modal only surfaced tui/cli
rows, even though `hermes --tui --resume <id>` with a pasted telegram
session id works fine. The handler double-fetched with explicit
`source="tui"` and `source="cli"` filters and dropped everything else on
the floor.
Drop the filter — list_sessions_rich(source=None) already excludes
child sessions (subagents, compression continuations) via its default,
and users want to resume messenger sessions from inside the TUI.
Adds gateway regression coverage.
Two unit tests that pin down the threading.local semantics the CLI freeze
fix (#13617 / #13618) relies on:
- main-thread registration must be invisible to child threads (documents
the underlying bug — if this ever starts passing visible, ACP's
GHSA-qg5c-hvr5-hjgr race has returned)
- child-thread registration must be visible from that same thread AND
cleared by the finally block (documents the fix pattern used by
cli.py's run_agent closure and acp_adapter/server.py)
Pairs with the fix in the preceding commit by @Societus.
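In spirit, the two tests pin down roughly this contract (pure-Python sketch; the real tests go through terminal_tool's actual callback setters):

    import threading

    _tls = threading.local()

    def test_main_thread_state_is_invisible_to_child_threads():
        _tls.callback = lambda: "main"
        seen = {}

        def child():
            seen["value"] = getattr(_tls, "callback", None)

        t = threading.Thread(target=child)
        t.start()
        t.join()
        assert seen["value"] is None   # the bug returns if a child ever sees this

    def test_child_thread_state_is_visible_in_thread_and_cleared():
        seen = {}

        def child():
            _tls.callback = lambda: "child"
            try:
                seen["during"] = getattr(_tls, "callback", None) is not None
            finally:
                _tls.callback = None   # mirrors the finally-block cleanup in the fix
            seen["after"] = _tls.callback

        t = threading.Thread(target=child)
        t.start()
        t.join()
        assert seen["during"] is True and seen["after"] is None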
Adds role='leaf'|'orchestrator' to delegate_task. With max_spawn_depth>=2,
an orchestrator child retains the 'delegation' toolset and can spawn its
own workers; leaf children cannot delegate further (identical to today).
Default posture is flat — max_spawn_depth=1 means a depth-0 parent's
children land at the depth-1 floor and orchestrator role silently
degrades to leaf. Users opt into nested delegation by raising
max_spawn_depth to 2 or 3 in config.yaml.
Also threads acp_command/acp_args through the main agent loop's delegate
dispatch (previously silently dropped in the schema) via a new
_dispatch_delegate_task helper, and adds a DelegateEvent enum with
legacy-string back-compat for gateway/ACP/CLI progress consumers.
Config (hermes_cli/config.py defaults):
delegation.max_concurrent_children: 3 # floor-only, no upper cap
delegation.max_spawn_depth: 1 # 1=flat (default), 2-3 unlock nested
delegation.orchestrator_enabled: true # global kill switch
Salvaged from @pefontana's PR #11215. Overrides vs. the original PR:
concurrency stays at 3 (PR bumped to 5 + cap 8 — we keep the floor only,
no hard ceiling); max_spawn_depth defaults to 1 (PR defaulted to 2 which
silently enabled one level of orchestration for every user).
Co-authored-by: pefontana <fontana.pedro93@gmail.com>
The prior form of this test asserted on CLI_CONFIG["delegation"] after
importing cli, which only passed by accident of pytest-xdist worker
scheduling. cli._hermes_home is frozen at module import time (cli.py:76),
before the tests/conftest.py autouse HERMES_HOME-isolation fixture can
fire, so CLI_CONFIG ends up populated by deep-merging the contributor's
actual ~/.hermes/config.yaml over the defaults (cli.py:359-366). Any
contributor (like me) who still has the legacy key set in their own
config causes a false failure the moment another test file in the same
xdist worker imports cli at module level.
Asserting on the source of load_cli_config() instead sidesteps all of
that: the test now checks the defaults literal directly and is
independent of user config, HERMES_HOME, import order, and worker
scheduling.
Demonstrated failure mode before this fix:
pytest tests/hermes_cli/test_config_drift.py \
tests/hermes_cli/test_skills_hub.py -o addopts=""
-> FAILED (CLI_CONFIG["delegation"] contained "default_toolsets"
from the user's ~/.hermes/config.yaml)
Part of Initiative 2 / M0.5.
delegation.default_toolsets was declared in cli.py's CLI_CONFIG default
dict and documented in cli-config.yaml.example, but never read: none of
tools/delegate_tool.py, _load_config(), or any call site ever looked it
up. The live fallback is the DEFAULT_TOOLSETS module constant at
tools/delegate_tool.py:101, which stays as-is.
hermes_cli/config.py's DEFAULT_CONFIG["delegation"] already omits the
key — this commit aligns cli.py with that.
Adds a regression test in tests/hermes_cli/test_config_drift.py so a
future refactor that re-adds the key without wiring it up to
_load_config() fails loudly.
Part of Initiative 2 / M0.5.
Adds OpenAI's new GPT Image 2 model via FAL.ai, selectable through
`hermes tools` → Image Generation. SOTA text rendering (including CJK)
and world-aware photorealism.
- FAL_MODELS entry with image_size_preset style
- 4:3 presets on all aspect ratios — a 16:9 preset (1024x576 = 589,824 px)
falls below GPT-Image-2's 655,360 min-pixel floor and would be rejected
- quality pinned to medium (same rule as gpt-image-1.5) for
predictable Nous Portal billing
- BYOK (openai_api_key) deliberately omitted from supports so all
users stay on shared FAL billing
- 6 new tests covering preset mapping, quality pinning, and
supports-whitelist integrity
- Docs table + aspect-ratio map updated
Live-tested end-to-end: 39.9s cold request, clean 1024x768 PNG
The [Replying to: "..."] prefix is disambiguation, not deduplication. When
a user explicitly replies to a prior message, the agent needs a pointer to
which specific message they're referencing — even when the quoted text
already exists somewhere in history. History can contain the same or
similar text multiple times; without an explicit pointer the agent has to
guess (or answer for both subjects), and the reply signal is silently
dropped.
Example: in a conversation comparing Japan and Italy, replying to the
"Japan is great for culture..." message and asking "What's the best time
to go?" — previously the found_in_history check suppressed the prefix
because the quoted text was already in history, leaving the agent to
guess which destination the user meant. Now the pointer is always present.
Drops the found_in_history guard added in #1594. Token overhead is
minimal (snippet capped at 500 chars on the new user turn; cached prefix
unaffected). Behavior becomes deterministic: reply sent ⇒ pointer present.
Thanks to smartyi for flagging this.
Reported during TUI v2 blitz testing: typing `@folder:` in the composer
pulled up .dockerignore, .env, .gitignore, and every other file in the
cwd alongside the actual directories. The completion loop yielded every
entry regardless of the explicit prefix and auto-rewrote each completion
to @file: vs @folder: based on is_dir — defeating the user's choice.
Also fixed a pre-existing adjacent bug: a bare `@file:` or `@folder:`
(no path) used expanded=="." as both search_dir AND match_prefix,
filtering the list to dotfiles only. When expanded is empty or ".",
search in cwd with no prefix filter.
- want_dir = prefix == "@folder:" drives an explicit is_dir filter
- preserve the typed prefix in completion text instead of rewriting
- three regression tests cover: folder-only, file-only, and the bare-
prefix case where completions keep the `@folder:` prefix
Reported during the TUI v2 blitz test: switching from openrouter to
anthropic via `/model <name> --provider anthropic` appeared to succeed,
but the next turn kept hitting openrouter — the provider the user was
deliberately moving away from.
Two gaps caused this:
1. `Agent.switch_model` reset `_fallback_activated` / `_fallback_index`
but left `_fallback_chain` intact. The chain was seeded from
`fallback_providers:` at agent init for the *original* primary, so
when the new primary returned 401 (invalid/expired Anthropic key),
`_try_activate_fallback()` picked the old provider back up without
informing the user. Prune entries matching either the old primary
(user is moving away) or the new primary (redundant) whenever the
primary provider actually changes.
2. `_apply_model_switch` persisted `HERMES_MODEL` but never updated
`HERMES_INFERENCE_PROVIDER`. Any ambient re-resolution of the runtime
(credential pool refresh, compressor rebuild, aux clients) falls
through to that env var in `resolve_requested_provider`, so it kept
reporting the original provider even after an in-memory switch.
Adds three regression tests: fallback-chain prune on primary change,
no-op on same-provider model swap, and env-var sync on explicit switch.
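Roughly, the two fixes amount to this (entry shape and function names are illustrative; only the env var names and the prune rule come from the description above):

    import os

    def prune_fallback_chain(chain: list[dict], old_provider: str, new_provider: str) -> list[dict]:
        if old_provider == new_provider:
            return chain                       # same-provider model swap: leave the chain alone
        # drop the provider being moved away from and the (now redundant) new primary
        return [e for e in chain if e.get("provider") not in (old_provider, new_provider)]

    def persist_switch_env(model: str, provider: str) -> None:
        os.environ["HERMES_MODEL"] = model
        os.environ["HERMES_INFERENCE_PROVIDER"] = provider   # keeps ambient re-resolution in sync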
Qwen models on OpenCode, OpenCode Go, and direct DashScope accept
Anthropic-style cache_control markers on OpenAI-wire chat completions,
but hermes only injected markers for Claude-named models. Result: zero
cache hits on every turn, full prompt re-billed — a community user
reported burning through their OpenCode Go subscription on Qwen3.6.
Extend _anthropic_prompt_cache_policy to return (True, False) — envelope
layout, not native — for the Alibaba provider family when the model name
contains 'qwen'. Envelope layout places markers on inner content blocks
(matching pi-mono's 'alibaba' cacheControlFormat) and correctly skips
top-level markers on tool-role messages (which OpenCode rejects).
Non-Qwen models on these providers (GLM, Kimi) keep their existing
behaviour — they have automatic server-side caching and don't need
client markers.
Upstream reference: pi-mono #3392 / #3393 documented this contract for
opencode-go Qwen models.
Adds 7 regression tests covering Qwen3.5/3.6/coder on each affected
provider plus negative cases for GLM/Kimi/OpenRouter-Qwen.
DNS rebinding attack: a victim browser that has the dashboard (or the
WhatsApp bridge) open could be tricked into fetching from an
attacker-controlled hostname that TTL-flips to 127.0.0.1. Same-origin
and CORS checks don't help — the browser now treats the attacker origin
as same-origin with the local service. Validating the Host header at
the app layer rejects any request whose Host isn't one we bound for.
Changes:
hermes_cli/web_server.py:
- New host_header_middleware runs before auth_middleware. Reads
app.state.bound_host (set by start_server) and rejects requests
whose Host header doesn't match the bound interface with HTTP 400.
- Loopback binds accept localhost / 127.0.0.1 / ::1. Non-loopback
binds require exact match. 0.0.0.0 binds skip the check (explicit
--insecure opt-in; no app-layer defence possible).
- IPv6 bracket notation parsed correctly: [::1] and [::1]:9119 both
accepted.
scripts/whatsapp-bridge/bridge.js:
- Express middleware rejects non-loopback Host headers. Bridge
already binds 127.0.0.1-only, this adds the complementary app-layer
check for DNS rebinding defence.
Tests: 8 new in tests/hermes_cli/test_web_server_host_header.py
covering loopback/non-loopback/zero-zero binds, IPv6 brackets, case
insensitivity, and end-to-end middleware rejection via TestClient.
Reported in GHSA-ppp5-vxwm-4cf7 by @bupt-Yy-young. Hardening — not
CVE per SECURITY.md §3. The dashboard's main trust boundary is the
loopback bind + session token; DNS rebinding defeats the bind assumption
but not the token (since the rebinding browser still sees a first-party
fetch to 127.0.0.1 with the token-gated API). Host-header validation
adds the missing belt-and-braces layer.
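The decision logic reduces to something like the helper below (framework wiring omitted; the hostname parsing is an approximation of what the middleware has to do):

    _LOOPBACK_HOSTS = {"localhost", "127.0.0.1", "::1"}

    def host_header_allowed(host_header: str, bound_host: str) -> bool:
        if bound_host == "0.0.0.0":
            return True                           # explicit --insecure opt-in; no app-layer defence
        host = (host_header or "").strip().lower()
        if host.startswith("["):                  # IPv6 bracket notation: [::1] or [::1]:9119
            host = host[1:].split("]", 1)[0]
        elif host.count(":") == 1:                # hostname:port or IPv4:port
            host = host.rsplit(":", 1)[0]
        if bound_host in _LOOPBACK_HOSTS:
            return host in _LOOPBACK_HOSTS        # loopback binds accept any loopback spelling
        return host == bound_host.lower()         # non-loopback binds require exact match

    # The middleware rejects the request with HTTP 400 when this returns False.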
When TELEGRAM_WEBHOOK_URL was set but TELEGRAM_WEBHOOK_SECRET was not,
python-telegram-bot received secret_token=None and the webhook endpoint
accepted any HTTP POST. Anyone who could reach the listener could inject
forged updates — spoofed user IDs, spoofed chat IDs, attacker-controlled
message text — and trigger handlers as if Telegram delivered them.
The fix refuses to start the adapter in webhook mode without the secret.
Polling mode (default, no webhook URL) is unaffected — polling is
authenticated by the bot token directly.
BREAKING CHANGE for webhook-mode deployments that never set
TELEGRAM_WEBHOOK_SECRET. The error message explains remediation:
export TELEGRAM_WEBHOOK_SECRET="$(openssl rand -hex 32)"
and instructs registering it with Telegram via setWebhook's secret_token
parameter. Release notes must call this out.
Reported in GHSA-3vpc-7q5r-276h by @bupt-Yy-young. Hardening — not CVE
per SECURITY.md §3 "Public Exposure: Deploying the gateway to the
public internet without external authentication or network protection"
covers the historical default, but shipping a fail-open webhook as the
default was the wrong choice and the guard aligns us with the SECURITY.md
threat model.
Two related ACP approval issues:
GHSA-96vc-wcxf-jjff — ACP's _run_agent never set HERMES_INTERACTIVE
(or any other flag recognized by tools.approval), so check_all_command_guards
took the non-interactive auto-approve path and never consulted the
ACP-supplied approval callback (conn.request_permission). Dangerous
commands executed in ACP sessions without operator approval despite
the callback being installed. Fix: set HERMES_INTERACTIVE=1 around
the agent run so check_all_command_guards routes through
prompt_dangerous_approval(approval_callback=...) — the correct shape
for ACP's per-session request_permission call. HERMES_EXEC_ASK would
have routed through the gateway-queue path instead, which requires a
notify_cb registered in _gateway_notify_cbs (not applicable to ACP).
GHSA-qg5c-hvr5-hjgr — _approval_callback and _sudo_password_callback
were module-level globals in terminal_tool. Concurrent ACP sessions
running in ThreadPoolExecutor threads each installed their own callback
into the same slot, racing. Fix: store both callbacks in threading.local()
so each thread has its own slot. CLI mode (single thread) is unaffected;
gateway mode uses a separate queue-based approval path and was never
touched.
set_approval_callback is now called INSIDE _run_agent (the executor
thread) rather than before dispatching — so the TLS write lands on the
correct thread.
Tests: 5 new in tests/acp/test_approval_isolation.py covering
thread-local isolation of both callbacks and the HERMES_INTERACTIVE
callback routing. Existing tests/acp/ (159 tests) and tests/tools/
approval-related tests continue to pass.
Fixes GHSA-96vc-wcxf-jjff
Fixes GHSA-qg5c-hvr5-hjgr
A skill declaring `required_environment_variables: [ANTHROPIC_TOKEN]` in
its SKILL.md frontmatter silently bypassed the `execute_code` sandbox's
credential-scrubbing guarantee. `register_env_passthrough` had no
blocklist, so any name a skill chose flipped `is_env_passthrough(name) =>
True`, which shortcircuits the sandbox's secret filter.
Fix: reject registration when the name appears in
`_HERMES_PROVIDER_ENV_BLOCKLIST` (the canonical list of Hermes-managed
credentials — provider keys, gateway tokens, etc.). Log a warning naming
GHSA-rhgp-j443-p4rf so operators see the rejection in logs.
Non-Hermes third-party API keys (TENOR_API_KEY for gif-search,
NOTION_TOKEN for notion skills, etc.) remain legitimately registerable —
they were never in the sandbox scrub list in the first place.
Tests: 16 -> 17 passing. Two old tests that documented the bypass
(`test_passthrough_allows_blocklisted_var`, `test_make_run_env_passthrough`)
are rewritten to assert the new fail-closed behavior. New
`test_non_hermes_api_key_still_registerable` locks in that legitimate
third-party keys are unaffected.
Reported in GHSA-rhgp-j443-p4rf by @q1uf3ng. Hardening; not CVE-worthy
on its own per the decision matrix (attacker must already have operator
consent to install a malicious skill).
Two call sites still used a raw substring check to identify ollama.com:
hermes_cli/runtime_provider.py:496:
_is_ollama_url = "ollama.com" in base_url.lower()
run_agent.py:6127:
if fb_base_url_hint and "ollama.com" in fb_base_url_hint.lower() ...
Same bug class as GHSA-xf8p-v2cg-h7h5 (OpenRouter substring leak), which
was fixed in commit dbb7e00e via base_url_host_matches() across the
codebase. The earlier sweep missed these two Ollama sites. Self-discovered
during April 2026 security-advisory triage; filed as GHSA-76xc-57q6-vm5m.
Impact is narrow — requires a user with OLLAMA_API_KEY configured AND a
custom base_url whose path or look-alike host contains 'ollama.com'.
Users on default provider flows are unaffected. Filed as a draft advisory
to use the private-fork flow; not CVE-worthy on its own.
Fix is mechanical: replace substring check with base_url_host_matches
at both sites. Same helper the rest of the codebase uses.
Tests: 67 -> 71 passing. 7 new host-matcher cases in
tests/test_base_url_hostname.py (path injection, lookalike host,
localtest.me subdomain, ollama.ai TLD confusion, localhost, genuine
ollama.com, api.ollama.com subdomain) + 4 call-site tests in
tests/hermes_cli/test_runtime_provider_resolution.py verifying
OLLAMA_API_KEY is selected only when base_url actually targets
ollama.com.
Fixes GHSA-76xc-57q6-vm5m
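The intent of the shared helper, approximately (the exact subdomain policy belongs to the real base_url_host_matches; this just shows why a hostname check beats a substring check):

    from urllib.parse import urlparse

    def base_url_host_matches(base_url: str, expected_host: str) -> bool:
        host = (urlparse(base_url or "").hostname or "").lower()
        expected = expected_host.lower()
        return host == expected or host.endswith("." + expected)

    # Substring check:  "ollama.com" in "https://evil.example/ollama.com/x"  -> True (leak)
    # Hostname check:   base_url_host_matches("https://evil.example/ollama.com/x", "ollama.com") -> False
    # Genuine match:    base_url_host_matches("https://api.ollama.com/v1", "ollama.com") -> True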
- Replace kwargs.get('limit', 50) with module-level _LIST_SESSIONS_PAGE_SIZE
constant. ListSessionsRequest schema has no 'limit' field, so the kwarg
path was dead. Constant is the single source of truth for the page cap.
- Use next_cursor= (field name) instead of nextCursor= (alias). Both work
under the schema's populate_by_name config, but using the declared
Python field name is the consistent style in this file.
- Add docstring explaining cwd pass-through and cursor semantics.
- Add 4 tests: first-page with next_cursor, single-page no next_cursor,
cursor resumes after match, unknown cursor returns empty page.
The original tests replicated the try/except/cancel/raise pattern inline with
a mocked future, which tested Python's try/except semantics rather than the
scheduler's behavior. Rewrite them to invoke _deliver_result and
_send_media_via_adapter end-to-end with a real concurrent.futures.Future
whose .result() raises TimeoutError.
Mutation-verified: both tests fail when the try/except wrappers are removed
from cron/scheduler.py, pass with them in place.
When the live adapter delivery path (_deliver_result) or media send path
(_send_media_via_adapter) times out at future.result(timeout=N), the
underlying coroutine scheduled via asyncio.run_coroutine_threadsafe can
still complete on the event loop, causing a duplicate send after the
standalone fallback runs.
Cancel the future on TimeoutError before re-raising, so the standalone
fallback is the sole delivery path.
Adds TestDeliverResultTimeoutCancelsFuture and
TestSendMediaTimeoutCancelsFuture.
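The pattern under test, sketched (function and argument names are illustrative; only the run_coroutine_threadsafe / result(timeout) / cancel sequence is the point):

    import asyncio
    import concurrent.futures

    def deliver_with_timeout(coro, loop: asyncio.AbstractEventLoop, timeout: float):
        future = asyncio.run_coroutine_threadsafe(coro, loop)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            # Without this cancel the coroutine can still complete on the loop
            # and produce a duplicate send after the standalone fallback runs.
            future.cancel()
            raise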
Kimi/Moonshot endpoints require explicit parameters that Hermes was not
sending, causing 'Response truncated due to output length limit' errors
and inconsistent reasoning behavior.
Root cause analysis against Kimi CLI source (MoonshotAI/kimi-cli,
packages/kosong/src/kosong/chat_provider/kimi.py):
1. max_tokens: Kimi's API defaults to a very low value when omitted.
Reasoning tokens share the output budget — the model exhausts it on
thinking alone. Send 32000, matching Kimi CLI's generate() default.
2. reasoning_effort: Kimi CLI sends this as a top-level parameter (not
inside extra_body). Hermes was not sending it at all because
_supports_reasoning_extra_body() returns False for non-OpenRouter
endpoints.
3. extra_body.thinking: Kimi CLI uses with_thinking() which sets
extra_body.thinking={"type":"enabled"} alongside reasoning_effort.
This is a separate control from the OpenAI-style reasoning extra_body
that Hermes sends for OpenRouter/GitHub. Without it, the Kimi gateway
may not activate reasoning mode correctly.
Covers api.kimi.com (Kimi Code) and api.moonshot.ai/cn (Moonshot).
Tests: 6 new test cases for max_tokens, reasoning_effort, and
extra_body.thinking under various configs.
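In request-kwargs terms the three parameters look roughly like this (the reasoning_effort value is a placeholder; the real value is config-driven):

    kimi_kwargs = {
        "max_tokens": 32000,                                # reasoning shares the output budget
        "reasoning_effort": "medium",                       # top-level, not inside extra_body
        "extra_body": {"thinking": {"type": "enabled"}},    # Kimi-specific reasoning switch
    }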
Gateway /model <name> --provider opencode-go (or any provider whose /models
endpoint is down, 404s, or doesn't exist) silently failed. validate_requested_model
returned accepted=False whenever fetch_api_models returned None, switch_model
returned success=False, and the gateway never wrote _session_model_overrides —
so the switch appeared to succeed in the error message flow but the next turn
kept calling the old provider.
The validator already had static-catalog fallbacks for MiniMax and Codex
(providers without a /models endpoint). Extended the same pattern as the
terminal fallback: when the live probe fails, consult provider_model_ids()
for the curated catalog. Known models → accepted+recognized. Close typos →
auto-corrected. Unknown models → soft-accepted with a 'Not in curated
catalog' warning. Providers with no catalog at all → soft-accepted with a
generic 'Note:' warning, finally honoring the in-code comment ('Accept and
persist, but warn') that had been lying since it was written.
Tests: 7 new tests in test_opencode_go_validation_fallback.py covering the
catalog lookup, case-insensitive match, auto-correct, unknown-with-suggestion,
unknown-without-suggestion, and no-catalog paths. TestValidateApiFallback in
test_model_validation.py updated — its four 'rejected_when_api_down' tests
were encoding exactly the bug being fixed.
The MCP circuit breaker in tools/mcp_tool.py has no half-open state and
no reset-on-reconnect behavior, so once it trips after 3 consecutive
failures it stays tripped for the process lifetime. These tests lock
in the intended recovery behavior:
1. test_circuit_breaker_half_opens_after_cooldown — after the cooldown
elapses, the next call must actually probe the session; success
closes the breaker.
2. test_circuit_breaker_reopens_on_probe_failure — a failed probe
re-arms the cooldown instead of letting every subsequent call
through.
3. test_circuit_breaker_cleared_on_reconnect — a successful OAuth
recovery resets the breaker even if the post-reconnect retry
fails (a successful reconnect is sufficient evidence the server
is viable again).
All three currently fail, as expected.
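The recovery behavior being locked in corresponds roughly to this state machine (illustrative class, not the breaker in tools/mcp_tool.py):

    import time

    class CircuitBreaker:
        def __init__(self, threshold: int = 3, cooldown: float = 60.0) -> None:
            self.threshold = threshold
            self.cooldown = cooldown
            self.failures = 0
            self.opened_at: float | None = None

        def allow_call(self) -> bool:
            if self.opened_at is None:
                return True
            # half-open: once the cooldown elapses, let the next call probe the session
            return time.monotonic() - self.opened_at >= self.cooldown

        def record_success(self) -> None:
            # a successful probe (or OAuth reconnect) closes the breaker
            self.failures = 0
            self.opened_at = None

        def record_failure(self) -> None:
            self.failures += 1
            if self.opened_at is not None or self.failures >= self.threshold:
                self.opened_at = time.monotonic()   # trip, or re-arm the cooldown after a failed probe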
* refactor(acp): validate method_id against advertised provider in authenticate()
Previously authenticate() accepted any method_id whenever the server had
provider credentials configured. This was not a vulnerability under the
personal-assistant trust model (ACP is stdio-only, local-trust — anything
that can reach the transport is already code-execution-equivalent to the
user), but it was sloppy API hygiene: the advertised auth_methods list
from initialize() was effectively ignored.
Now authenticate() only returns AuthenticateResponse when method_id
matches the currently-advertised provider (case-insensitive). Mismatched
or missing method_id returns None, consistent with the no-credentials
case.
Raised by xeloxa via GHSA-g5pf-8w9m-h72x. Declined as a CVE
(ACP transport is stdio, local-trust model), but the correctness fix is
worth having on its own.
Current main's _message_mentions_bot() uses MessageEntity-only detection
(commit e330112a), so the test for '/status@hermes_bot' needs to include
a MENTION entity. Real Telegram always emits one for /cmd@botname — the
bot menu and CommandHandler rely on this mechanism.
When require_mention is enabled, slash commands no longer bypass
mention checks. Bare /command without @mention is filtered in groups,
while /command@botname (bot menu) and @botname /command still pass.
Commands still pass unconditionally when require_mention is disabled,
preserving backward compatibility.
Closes #6033
Treat whitespace-only FAL_KEY the same as unset so users who export
FAL_KEY=" " (or CI that leaves a blank token) get the expected
'not set' error path instead of a confusing downstream fal_client
failure.
Applied to the two direct FAL_KEY checks in image_generation_tool.py:
image_generate_tool's upfront credential check and check_fal_api_key().
Both keep the existing managed-gateway fallback intact.
Adapted the original whitespace/valid tests to pin the managed gateway
to None so the whitespace assertion exercises the direct-key path
rather than silently relying on gateway absence.
Follow-ups on top of @teyrebaz33's cherry-picked commit:
1. New shared helper format_no_match_hint() in fuzzy_match.py with a
startswith('Could not find') gate so the snippet only appends to
genuine no-match errors — not to 'Found N matches' (ambiguous),
'Escape-drift detected', or 'identical strings' errors, which would
all mislead the model.
2. file_tools.patch_tool suppresses the legacy generic '[Hint: old_string
not found...]' string when the rich 'Did you mean?' snippet is
already attached — no more double-hint.
3. Wire the same helper into patch_parser.py (V4A patch mode, both
_validate_operations and _apply_update) and skill_manager_tool.py so
all three fuzzy callers surface the hint consistently.
Tests: 7 new gating tests in TestFormatNoMatchHint cover every error
class (ambiguous, drift, identical, non-zero match count, None error,
no similar content, happy path). 34/34 test_fuzzy_match, 96/96
test_file_tools + test_patch_parser + test_skill_manager_tool pass.
E2E verified across all four scenarios: no-match-with-similar,
no-match-no-similar, ambiguous, success. V4A mode confirmed
end-to-end with a non-matching hunk.
When patch_replace() cannot find old_string in a file, the error message
now includes the closest matching lines from the file with line numbers
and context. This helps the LLM self-correct without a separate read_file
call.
Implements Phase 1 of #536: enhanced patch error feedback with no
architectural changes.
- tools/fuzzy_match.py: new find_closest_lines() using SequenceMatcher
- tools/file_operations.py: attach closest-lines hint to patch errors
- tests/tools/test_fuzzy_match.py: 5 new tests for find_closest_lines
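A simplified version of the lookup (the shipped find_closest_lines adds surrounding context and formats the hint; the windowing below is the basic idea):

    from difflib import SequenceMatcher

    def find_closest_lines(old_string: str, file_text: str, max_results: int = 1):
        target_lines = old_string.splitlines() or [""]
        file_lines = file_text.splitlines()
        window = len(target_lines)
        scored = []
        for start in range(max(len(file_lines) - window + 1, 1)):
            candidate = "\n".join(file_lines[start:start + window])
            ratio = SequenceMatcher(None, old_string, candidate).ratio()
            scored.append((ratio, start + 1, candidate))    # 1-based line number of the window
        scored.sort(key=lambda item: item[0], reverse=True)
        return scored[:max_results]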
OpenCode Go's published model list (opencode.ai/docs/go) includes kimi-k2.6,
qwen3.5-plus, and qwen3.6-plus, but Hermes' curated lists didn't carry them.
When the live /models probe fails during `hermes model`, users fell back to
the stale curated list and had to type newer models via 'Enter custom model
name'.
Adds kimi-k2.6 (now first in the Go list), qwen3.6-plus, and qwen3.5-plus
to both the model picker (hermes_cli/models.py) and setup defaults
(hermes_cli/setup.py). All routed through the existing opencode-go
chat_completions path — no api_mode changes needed.
Wires the agent/account_usage module from the preceding commit into
/usage so users see provider-side quota/credit info alongside the
existing session token report.
CLI:
- `_show_usage` appends account lines under the token table. Fetch
runs in a 1-worker ThreadPoolExecutor with a 10s timeout so a slow
provider API can never hang the prompt.
Gateway:
- `_handle_usage_command` resolves provider from the live agent when
available, else from the persisted billing_provider/billing_base_url
on the SessionDB row, so /usage still returns account info between
turns when no agent is resident. Fetch runs via asyncio.to_thread.
- Account section is appended to all three return branches: running
agent, no-agent-with-history, and the new no-agent-no-history path
(falls back to account-only output instead of "no data").
Tests:
- 2 new tests in tests/gateway/test_usage_command.py cover the live-
agent account section and the persisted-billing fallback path.
Salvaged from PR #2486 by @kshitijk4poor. The original branch had
drifted ~2615 commits behind main and rewrote _show_usage wholesale,
which would have dropped the rate-limit and cached-agent blocks added
in PRs #6541 and #7038. This commit re-adds only the new behavior on
top of current main.
Ports agent/account_usage.py and its tests from the original PR #2486
branch. Defines AccountUsageSnapshot / AccountUsageWindow dataclasses,
a shared renderer, and provider-specific fetchers for OpenAI Codex
(wham/usage), Anthropic OAuth (oauth/usage), and OpenRouter (/credits
and /key). Wiring into /usage lands in a follow-up salvage commit.
Authored-by: kshitijk4poor <82637225+kshitijk4poor@users.noreply.github.com>
Every credential source Hermes reads from now behaves identically on
`hermes auth remove`: the pool entry stays gone across fresh load_pool()
calls, even when the underlying external state (env var, OAuth file,
auth.json block, config entry) is still present.
Before this, auth_remove_command was a 110-line if/elif with five
special cases, and three more sources (qwen-cli, copilot, custom
config) had no removal handler at all — their pool entries silently
resurrected on the next invocation. Even the handled cases diverged:
codex suppressed, anthropic deleted-without-suppressing, nous cleared
without suppressing. Each new provider added a new gap.
What's new:
agent/credential_sources.py — RemovalStep registry, one entry per
source (env, claude_code, hermes_pkce, nous device_code, codex
device_code, qwen-cli, copilot gh_cli + env vars, custom config).
auth_remove_command dispatches uniformly via find_removal_step().
Changes elsewhere:
agent/credential_pool.py — every upsert in _seed_from_env,
_seed_from_singletons, and _seed_custom_pool now gates on
is_source_suppressed(provider, source) via a shared helper.
hermes_cli/auth_commands.py — auth_remove_command reduced to 25
lines of dispatch; auth_add_command now clears ALL suppressions for
the provider on re-add (was env:* only).
Copilot is special: the same token is seeded twice (gh_cli via
_seed_from_singletons + env:<VAR> via _seed_from_env), so removing one
entry without suppressing the other variants lets the duplicate
resurrect. The copilot RemovalStep suppresses gh_cli + all three env
variants (COPILOT_GITHUB_TOKEN, GH_TOKEN, GITHUB_TOKEN) at once.
Tests: 11 new unit tests + 4059 existing pass. 12 E2E scenarios cover
every source in isolated HERMES_HOME with simulated fresh processes.
Removing an env-seeded credential only cleared ~/.hermes/.env and the
current process's os.environ, leaving shell-exported vars (shell profile,
systemd EnvironmentFile, launchd plist) to resurrect the entry on the
next load_pool() call. This matched the pre-#11485 codex behaviour.
Now we suppress env:<VAR> in auth.json on remove, gate _seed_from_env()
behind is_source_suppressed(), clear env:* suppressions on auth add,
and print a diagnostic pointing at the shell when the var lives there.
Applies to every env:* seeded credential (xai, deepseek, moonshot, zai,
nvidia, openrouter, anthropic, etc.), not just xai.
Reported by @teknium1 from community user 'Artificial Brain' — couldn't
remove their xAI key via hermes auth remove.
Three fixes that close the remaining structural sources of CI flakes
after PR #13363.
## 1. Per-test reset of module-level singletons and ContextVars
Python modules are singletons per process, and pytest-xdist workers are
long-lived. Module-level dicts/sets and ContextVars persist across tests
on the same worker. A test that sets state in `tools.approval._session_approved`
and doesn't explicitly clear it leaks that state to every subsequent test
on the same worker.
New `_reset_module_state` autouse fixture in `tests/conftest.py` clears:
- tools.approval: _session_approved, _session_yolo, _permanent_approved,
_pending, _gateway_queues, _gateway_notify_cbs, _approval_session_key
- tools.interrupt: _interrupted_threads
- gateway.session_context: 10 session/cron ContextVars (reset to _UNSET)
- tools.env_passthrough: _allowed_env_vars_var (reset to empty set)
- tools.credential_files: _registered_files_var (reset to empty dict)
- tools.file_tools: _read_tracker, _file_ops_cache
This was the single biggest remaining class of CI flakes.
`test_command_guards::test_warn_session_approved` and
`test_combined_cli_session_approves_both` were failing 12/15 recent main
runs specifically because `_session_approved` carried approvals from a
prior test's session into these tests' `"default"` session lookup.
## 2. Unset platform allowlist env vars in hermetic fixture
`TELEGRAM_ALLOWED_USERS`, `DISCORD_ALLOWED_USERS`, and 20 other
`*_ALLOWED_USERS` / `*_ALLOW_ALL_USERS` vars are now unset per-test in
the same place credential env vars already are. These aren't credentials
but they change gateway auth behavior; if set from any source (user
shell, leaky test, CI env) they flake button-authorization tests.
Fixes three `test_telegram_approval_buttons` tests that were failing
across recent runs of the full gateway directory.
## 3. Two specific tests with module-level captured state
- `test_signal::TestSignalPhoneRedaction`: `agent.redact._REDACT_ENABLED`
is captured at module import from `HERMES_REDACT_SECRETS`, not read
per-call. `monkeypatch.delenv` at test time is too late. Added
`monkeypatch.setattr("agent.redact._REDACT_ENABLED", True)` per
skill xdist-cross-test-pollution Pattern 5.
- `test_internal_event_bypass_pairing::test_non_internal_event_without_user_triggers_pairing`:
`gateway.pairing.PAIRING_DIR` is captured at module import from
HERMES_HOME, so per-test HERMES_HOME redirection in conftest doesn't
retroactively move it. Test now monkeypatches PAIRING_DIR directly to
its tmp_path, preventing rate-limit state from prior xdist workers
from letting the pairing send-call be suppressed.
## Validation
- tests/tools/: 3494 pass (0 fail) including test_command_guards
- tests/gateway/: 3504 pass (0 fail) across repeat runs
- tests/agent/ + tests/hermes_cli/ + tests/run_agent/ + tests/tools/:
8371 pass, 37 skipped, 0 fail — full suite across directories
No production code changed.
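The fixture boils down to clearing module-level containers before each test; an abbreviated sketch (the real _reset_module_state also resets the ContextVars listed above):

    import pytest

    @pytest.fixture(autouse=True)
    def _reset_module_state():
        import tools.approval as approval
        import tools.interrupt as interrupt
        for container in (
            approval._session_approved,
            approval._session_yolo,
            approval._permanent_approved,
            interrupt._interrupted_threads,
        ):
            container.clear()
        yield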
file_safety now uses profile-aware get_hermes_home(), so the test
fixture must override HERMES_HOME too — otherwise it resolves to the
conftest's isolated tempdir and the hub-cache path doesn't match.
Builds on @AxDSan's PR #2109 to finish the KittenTTS wiring so the
provider behaves like every other TTS backend end to end.
- tools/tts_tool.py: `_check_kittentts_available()` helper and wire
into `check_tts_requirements()`; extend Opus-conversion list to
include kittentts (WAV → Opus for Telegram voice bubbles); point the
missing-package error at `hermes setup tts`.
- hermes_cli/tools_config.py: add KittenTTS entry to the "Text-to-Speech"
toolset picker, with a `kittentts` post_setup hook that auto-installs
the wheel + soundfile via pip.
- hermes_cli/setup.py: `_install_kittentts_deps()`, new choice + install
flow in `_setup_tts_provider()`, provider_labels entry, and status row
in the `hermes setup` summary.
- website/docs/user-guide/features/tts.md: add KittenTTS to the provider
table, config example, ffmpeg note, and the zero-config voice-bubble tip.
- tests/tools/test_tts_kittentts.py: 10 unit tests covering generation,
model caching, config passthrough, ffmpeg conversion, availability
detection, and the missing-package dispatcher branch.
E2E verified against the real `kittentts` wheel:
- WAV direct output (pcm_s16le, 24kHz mono)
- MP3 conversion via ffmpeg (from WAV)
- Telegram flow (provider in Opus-conversion list) produces
`codec_name=opus`, 48kHz mono, `voice_compatible=True`, and the
`[[audio_as_voice]]` marker
- check_tts_requirements() returns True when kittentts is installed
Generalize shared multi-user session handling so non-thread group sessions
(group_sessions_per_user=False) get the same treatment as shared threads:
inbound messages are prefixed with [sender name], and the session prompt
shows a multi-user note instead of pinning a single **User:** line into
the cached system prompt.
Before: build_session_key already treated these as shared sessions, but
_prepare_inbound_message_text and build_session_context_prompt only
recognized shared threads — creating cross-user attribution drift and
prompt-cache contamination in shared groups.
- Add is_shared_multi_user_session() helper alongside build_session_key()
so both the session key and the multi-user branches are driven by the
same rules (DMs never shared, threads shared unless
thread_sessions_per_user, groups shared unless group_sessions_per_user).
- Add shared_multi_user_session field to SessionContext, populated by
build_session_context() from config.
- Use context.shared_multi_user_session in the prompt builder (label is
'Multi-user thread' when a thread is present, 'Multi-user session'
otherwise).
- Use the helper in _prepare_inbound_message_text so non-thread shared
groups also get [sender] prefixes.
Default behavior unchanged: DMs stay single-user, groups with
group_sessions_per_user=True still show the user normally, shared threads
keep their existing multi-user behavior.
Tests (65 passed):
- tests/gateway/test_session.py: new shared non-thread group prompt case.
- tests/gateway/test_shared_group_sender_prefix.py: inbound preprocessing
for shared non-thread groups and default groups.
Sweep ~74 redundant local imports across 21 files where the same module
was already imported at the top level. Also includes type fixes and lint
cleanups on the same branch.
Follow-up on top of opriz's atomic PID file fix. The prior change caught
the race AFTER runner.start(), so the loser still opened Telegram polling
and Discord gateway sockets before detecting the conflict and exiting.
Hoist the PID-claim block to BEFORE runner.start(). Now the loser of the
O_CREAT|O_EXCL race returns from start_gateway() without ever bringing up
any platform adapter — no Telegram conflict, no Discord duplicate session.
Also add regression tests:
- test_write_pid_file_is_atomic_against_concurrent_writers: second
write_pid_file() raises FileExistsError rather than clobbering.
- Two existing replace-path tests updated to stateful mocks since the
real post-kill state (get_running_pid None after remove_pid_file)
is now exercised by the hoisted re-check.
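The claim-before-start pattern, sketched (signatures are illustrative; only the O_CREAT|O_EXCL atomicity and the ordering are the point):

    import os

    def write_pid_file(path: str) -> None:
        # O_CREAT | O_EXCL makes creation atomic: the loser of a concurrent race
        # gets FileExistsError instead of clobbering the winner's PID file.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
        try:
            os.write(fd, str(os.getpid()).encode())
        finally:
            os.close(fd)

    def start_gateway(pid_path: str, runner) -> None:
        try:
            write_pid_file(pid_path)     # claim the PID file BEFORE any adapter starts
        except FileExistsError:
            return                       # another gateway already owns it; bring nothing up
        runner.start()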
* feat(skills): inject absolute skill dir and expand ${HERMES_SKILL_DIR} templates
When a skill loads, the activation message now exposes the absolute
skill directory and substitutes ${HERMES_SKILL_DIR} /
${HERMES_SESSION_ID} tokens in the SKILL.md body, so skills with
bundled scripts can instruct the agent to run them by absolute path
without an extra skill_view round-trip.
Also adds opt-in inline-shell expansion: !`cmd` snippets in SKILL.md
are pre-executed (with the skill directory as CWD) and their stdout is
inlined into the message before the agent reads it. Off by default —
enable via skills.inline_shell in config.yaml — because any snippet
runs on the host without approval.
Changes:
- agent/skill_commands.py: template substitution, inline-shell
expansion, absolute skill-dir header, supporting-files list now
shows both relative and absolute forms.
- hermes_cli/config.py: new skills.template_vars,
skills.inline_shell, skills.inline_shell_timeout knobs.
- tests/agent/test_skill_commands.py: coverage for header, both
template tokens (present and missing session id), template_vars
disable, inline-shell default-off, enabled, CWD, and timeout.
- website/docs/developer-guide/creating-skills.md: documents the
template tokens, the absolute-path header, and the opt-in inline
shell with its security caveat.
Validation: tests/agent/ 1591 passed (includes 9 new tests).
E2E: loaded a real skill in an isolated HERMES_HOME; confirmed
${HERMES_SKILL_DIR} resolves to the absolute path, ${HERMES_SESSION_ID}
resolves to the passed task_id, !`date` runs when opt-in is set, and
stays literal when it isn't.
* feat(terminal): source ~/.bashrc (and user-listed init files) into session snapshot
bash login shells don't source ~/.bashrc, so tools that install themselves
there — nvm, asdf, pyenv, cargo, custom PATH exports — stay invisible to
the environment snapshot Hermes builds once per session. Under systemd
or any context with a minimal parent env, that surfaces as
'node: command not found' in the terminal tool even though the binary
is reachable from every interactive shell on the machine.
Changes:
- tools/environments/local.py: before the login-shell snapshot bootstrap
runs, prepend guarded 'source <file>' lines for each resolved init
file. Missing files are skipped, each source is wrapped with a
'[ -r ... ] && . ... || true' guard so a broken rc can't abort the
bootstrap.
- hermes_cli/config.py: new terminal.shell_init_files (explicit list,
supports ~ and ${VAR}) and terminal.auto_source_bashrc (default on)
knobs. When shell_init_files is set it takes precedence; when it's
empty and auto_source_bashrc is on, ~/.bashrc gets auto-sourced.
- tests/tools/test_local_shell_init.py: 10 tests covering the resolver
(auto-bashrc, missing file, explicit override, ~/${VAR} expansion,
opt-out) and the prelude builder (quoting, guarded sourcing), plus
a real-LocalEnvironment snapshot test that confirms exports in the
init file land in subsequent commands' environment.
- website/docs/reference/faq.md: documents the fix in Troubleshooting,
including the zsh-user pattern of sourcing ~/.zshrc or nvm.sh
directly via shell_init_files.
Validation: 10/10 new tests pass; tests/tools/test_local_*.py 40/40
pass; tests/agent/ 1591/1591 pass; tests/hermes_cli/test_config.py
50/50 pass. E2E in an isolated HERMES_HOME: confirmed that a fake
~/.bashrc setting a marker var and PATH addition shows up in a real
LocalEnvironment().execute() call, that auto_source_bashrc=false
suppresses it, that an explicit shell_init_files entry wins over the
auto default, and that a missing bashrc is silently skipped.
Catalog snapshots, config version literals, and enumeration counts are data
that changes as designed. Tests that assert on those values add no
behavioral coverage — they just break CI on every routine update and cost
engineering time to 'fix.'
Replace with invariants where one exists, delete where none does.
Deleted (pure snapshots):
- TestMinimaxModelCatalog (3 tests): 'MiniMax-M2.7 in models' et al
- TestGeminiModelCatalog: 'gemini-2.5-pro in models', 'gemini-3.x in models'
- test_browser_camofox_state::test_config_version_matches_current_schema
(docstring literally said it would break on unrelated bumps)
Relaxed (keep plumbing check, drop snapshot):
- Xiaomi / Arcee / Kimi moonshot / Kimi coding / HuggingFace static lists:
now assert 'provider exists and has >= 1 entry' instead of specific names
- HuggingFace main/models.py consistency test: drop 'len >= 6' floor
Dynamicized (follow source, not a literal):
- 3x test_config.py migration tests: raw['_config_version'] ==
DEFAULT_CONFIG['_config_version'] instead of hardcoded 21
Fixed stale tests against intentional behavior changes:
- test_insights::test_gateway_format_hides_cost: name matches new behavior
(no dollar figures); remove contradicting '$' in text assertion
- test_config::prefers_api_then_url_then_base_url: flipped per PR #9332;
rename + update to base_url > url > api
- test_anthropic_adapter: relax assert_called_once() (xdist-flaky) to
assert called — contract is 'credential flowed through'
- test_interrupt_propagation: add provider/model/_base_url to bare-agent
fixture so the stale-timeout code path resolves
Fixed stale integration tests against opt-in plugin gate:
- transform_tool_result + transform_terminal_output: write plugins.enabled
allow-list to config.yaml and reset the plugin manager singleton
Source fix (real consistency invariant):
- agent/model_metadata.py: add moonshotai/Kimi-K2.6 context length
(262144, same as K2.5). test_model_metadata_has_context_lengths was
correctly catching the gap.
Policy:
- AGENTS.md Testing section: new subsection 'Don't write change-detector
tests' with do/don't examples. Reviewers should reject catalog-snapshot
assertions in new tests.
Covers every test that failed on the last completed main CI run
(24703345583) except test_modal_sandbox_fixes::test_terminal_tool_present
+ test_terminal_and_file_toolsets_resolve_all_tools, which now pass both
alone and with the full tests/tools/ directory (xdist ordering flake that
resolved itself).
Add agent/transports/types.py with three shared dataclasses:
- NormalizedResponse: content, tool_calls, finish_reason, reasoning, usage, provider_data
- ToolCall: id, name, arguments, provider_data (per-tool-call protocol metadata)
- Usage: prompt_tokens, completion_tokens, total_tokens, cached_tokens
Add normalize_anthropic_response_v2() to anthropic_adapter.py — wraps the
existing v1 function and maps its output to NormalizedResponse. One call site
in run_agent.py (the main normalize branch) uses v2 with a back-compat shim
to SimpleNamespace for downstream code.
No ABC, no registry, no streaming, no client lifecycle. Those land in PR 3
with the first concrete transport (AnthropicTransport).
46 new tests:
- test_types.py: dataclass construction, build_tool_call, map_finish_reason
- test_anthropic_normalize_v2.py: v1-vs-v2 regression tests (text, tools,
thinking, mixed, stop reasons, mcp prefix stripping, edge cases)
Part of the provider transport refactor (PR 2 of 9).
Classic-CLI /steer typed during an active agent run was queued through
self._pending_input alongside ordinary user input. process_loop, which
drains that queue, is blocked inside self.chat() for the entire run,
so the queued command was not pulled until AFTER _agent_running had
flipped back to False — at which point process_command() took the idle
fallback ("No agent running; queued as next turn") and delivered the
steer as an ordinary next-turn user message.
From Utku's bug report on PR #13205: mid-run /steer arrived minutes
later at the end of the turn as a /queue-style message, completely
defeating its purpose.
Fix: add _should_handle_steer_command_inline() gating — when
_agent_running is True and the user typed /steer, dispatch
process_command(text) directly from the prompt_toolkit Enter handler
on the UI thread instead of queueing. This mirrors the existing
_should_handle_model_command_inline() pattern for /model and is
safe because agent.steer() is thread-safe (uses _pending_steer_lock,
no prompt_toolkit state mutation, instant return).
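Sketch of the gate and the Enter-handler dispatch (method names from the commit, bodies assumed):

  def _should_handle_steer_command_inline(self, text: str) -> bool:
      # Inline only while a run is active; idle /steer keeps the queue-and-drain route.
      return self._agent_running and text.strip().startswith("/steer")

  # In the prompt_toolkit Enter handler (sketch):
  if self._should_handle_steer_command_inline(text):
      self.process_command(text)      # agent.steer() is thread-safe and returns instantly
  else:
      self._queue_user_input(text)    # normal path; helper name assumed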
No changes to the idle-path behavior: /steer typed with no active
agent still takes the normal queue-and-drain route so the fallback
"No agent running; queued as next turn" message is preserved.
Validation:
- 7 new unit tests in tests/cli/test_cli_steer_busy_path.py covering
the detector, dispatch path, and idle-path control behavior.
- All 21 existing tests in tests/run_agent/test_steer.py still pass.
- Live PTY end-to-end test with real agent + real openrouter model:
22:36:22 API call #1 (model requested execute_code)
22:36:26 ENTER FIRED: agent_running=True, text='/steer ...'
22:36:26 INLINE STEER DISPATCH fired
22:36:43 agent.log: 'Delivered /steer to agent after tool batch'
22:36:44 API call #2 included the steer; response contained marker
Same test on the tip of main without this fix shows the steer
landing as a new user turn ~20s after the run ended.
Delete the stale literal `_PROVIDER_MODELS["ai-gateway"]` (gpt-5,
gemini-2.5-pro, claude-4.5 — outdated the moment PR #13223 landed with
its curated `AI_GATEWAY_MODELS` snapshot) and derive it from
`AI_GATEWAY_MODELS` instead, so the picker tuples and the bare-id
fallback catalog stay in sync automatically. Also fixes
`get_default_model_for_provider('ai-gateway')` to return kimi-k2.6
(the curated recommendation) instead of claude-opus-4.6.
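Roughly the derivation (entry and tuple shapes assumed):

  # hermes_cli (sketch): picker tuples follow the curated snapshot automatically
  _PROVIDER_MODELS["ai-gateway"] = tuple(
      (entry["id"], entry.get("label", entry["id"])) for entry in AI_GATEWAY_MODELS
  )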
The mid-run steer marker was '[USER STEER (injected mid-run, not tool
output): <text>]'. Replaced with a plain two-newline-prefixed
'User guidance: <text>' suffix.
Rationale: the marker lives inside the tool result's content string
regardless of whether the tool returned JSON, plain text, an MCP
result, or a plugin result. The bracketed tag read like structured
metadata that some tools (terminal, execute_code) could confuse with
their own output formatting. A plain labelled suffix works uniformly
across every content shape we produce.
Behavior unchanged:
- Still injected into the last tool-role message's content.
- Still preserves multimodal (Anthropic) content-block lists by
appending a text block.
- Still drained at both sites added in #12959 and #13205 — per-tool
drain between individual calls, and pre-API-call drain at the top
of each main-loop iteration.
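Sketch of the suffix injection (variable names assumed; the real helper covers more content shapes):

  suffix = "\n\nUser guidance: " + steer_text
  content = last_tool_msg["content"]
  if isinstance(content, list):
      # Anthropic-style content blocks: append a text block rather than mutating strings
      content.append({"type": "text", "text": "User guidance: " + steer_text})
  else:
      last_tool_msg["content"] = (content or "") + suffix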
Checked Codex's equivalent (pending_input / inject_user_message_without_turn
in codex-rs/core): they record mid-turn user input as a real role:user
message via record_user_prompt_and_emit_turn_item(). That's cleaner for
their Responses-API model but not portable to Chat Completions where
role alternation after tool_calls is strict. Embedding the guidance in
the last tool result remains the correct placement for us.
Validation: all 21 tests in tests/run_agent/test_steer.py pass.
Aslaaen's fix in the original PR covered _detect_api_mode_for_url and the
two openai/xai sites in run_agent.py. This finishes the sweep: the same
substring-match false-positive class (e.g. https://api.openai.com.evil/v1,
https://proxy/api.openai.com/v1, https://api.anthropic.com.example/v1)
existed in eight more call sites, and the hostname helper was duplicated
in two modules.
- utils: add shared base_url_hostname() (single source of truth).
- hermes_cli/runtime_provider, run_agent: drop local duplicates, import
from utils. Reuse the cached AIAgent._base_url_hostname attribute
everywhere it's already populated.
- agent/auxiliary_client: switch codex-wrap auto-detect, max_completion_tokens
gate (auxiliary_max_tokens_param), and custom-endpoint max_tokens kwarg
selection to hostname equality.
- run_agent: native-anthropic check in the Claude-style model branch
and in the AIAgent init provider-auto-detect branch.
- agent/model_metadata: Anthropic /v1/models context-length lookup.
- hermes_cli/providers.determine_api_mode: anthropic / openai URL
heuristics for custom/unknown providers (the /anthropic path-suffix
convention for third-party gateways is preserved).
- tools/delegate_tool: anthropic detection for delegated subagent
runtimes.
- hermes_cli/setup, hermes_cli/tools_config: setup-wizard vision-endpoint
native-OpenAI detection (paired with deduping the repeated check into
a single is_native_openai boolean per branch).
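A minimal sketch of the shared helper and the comparison it replaces (the committed normalization may handle more cases):

  from urllib.parse import urlparse

  def base_url_hostname(base_url: str) -> str:
      host = urlparse(base_url).hostname or ""
      return host.rstrip(".").lower()

  # Substring match:   'api.openai.com' in 'https://proxy/api.openai.com/v1'  -> True (false positive)
  # Hostname equality:  base_url_hostname('https://proxy/api.openai.com/v1')
  #                     == 'api.openai.com'                                   -> False (correct)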
Tests:
- tests/test_base_url_hostname.py covers the helper directly
(path-containing-host, host-suffix, trailing dot, port, case).
- tests/hermes_cli/test_determine_api_mode_hostname.py adds the same
regression class for determine_api_mode, plus a test that the
/anthropic third-party gateway convention still wins.
Also: add asslaenn5@gmail.com → Aslaaen to scripts/release.py AUTHOR_MAP.
Load-time sanitizer silently removed non-ASCII codepoints from any
env var ending in _API_KEY / _TOKEN / _SECRET / _KEY, turning
copy-paste artifacts (Unicode lookalikes, ZWSP, NBSP) into opaque
provider-side API_KEY_INVALID errors.
Warn once per key to stderr with the offending codepoints (U+XXXX)
and guidance to re-copy from the provider dashboard.
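Sketch of the warning (variable names assumed; the real code emits it once per key):

  import sys

  def warn_non_ascii_key(name: str, value: str) -> None:
      bad = sorted({c for c in value if ord(c) > 127})
      if bad:
          cps = ", ".join("U+%04X" % ord(c) for c in bad)
          print(f"Warning: {name} contains non-ASCII characters ({cps}); "
                "re-copy the key from the provider dashboard.", file=sys.stderr)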
When the live Vercel AI Gateway catalog exposes a Moonshot model with
zero input AND output pricing, it's promoted to position #1 as the
recommended default — even if the exact ID isn't in the curated
AI_GATEWAY_MODELS list. This enables dynamic discovery of new free
Moonshot variants without requiring a PR to update curation.
Paid Moonshot models are unaffected; falls back to the normal curated
recommended tag when no free Moonshot is live.
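Sketch of the promotion rule (catalog entry shape, pricing field names, and the moonshotai/ prefix check are assumptions):

  def promote_free_moonshot(models: list[dict]) -> list[dict]:
      def is_free_moonshot(m: dict) -> bool:
          pricing = m.get("pricing") or {}
          return (m.get("id", "").startswith("moonshotai/")
                  and pricing.get("input") is not None and float(pricing["input"]) == 0
                  and pricing.get("output") is not None and float(pricing["output"]) == 0)

      free = [m for m in models if is_free_moonshot(m)]
      if not free:
          return models                          # keep the curated recommended default
      promoted = free[0]
      return [promoted] + [m for m in models if m is not promoted]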
- Curated AI_GATEWAY_MODELS list in hermes_cli/models.py (OSS first,
kimi-k2.5 as recommended default).
- fetch_ai_gateway_models() filters the curated list against the live
/v1/models catalog; falls back to the snapshot on network failure.
- fetch_ai_gateway_pricing() translates Vercel's input/output field
names to the prompt/completion shape the shared picker expects;
carries input_cache_read / input_cache_write through unchanged.
- get_pricing_for_provider() now handles ai-gateway.
- _model_flow_ai_gateway() provides a guided URL prompt when no key
is set and a pricing-column picker; routes ai-gateway to it instead
of the generic api-key flow.
Requests through Vercel AI Gateway now carry referrerUrl / appName /
User-Agent attribution so traffic shows up in the gateway's analytics.
Adds _AI_GATEWAY_HEADERS in auxiliary_client and a new
ai-gateway.vercel.sh branch in _apply_client_headers_for_base_url.
Users can declare shell scripts in config.yaml under a hooks: block that
fire on plugin-hook events (pre_tool_call, post_tool_call, pre_llm_call,
subagent_stop, etc). Scripts receive JSON on stdin, can return JSON on
stdout to block tool calls or inject context pre-LLM.
Key design:
- Registers closures on existing PluginManager._hooks dict — zero changes
to invoke_hook() call sites
- subprocess.run(shell=False) via shlex.split — no shell injection
- First-use consent per (event, command) pair, persisted to allowlist JSON
- Bypass via --accept-hooks, HERMES_ACCEPT_HOOKS=1, or hooks_auto_accept
- hermes hooks list/test/revoke/doctor CLI subcommands
- Adds subagent_stop hook event fired after delegate_task children exit
- Claude Code compatible response shapes accepted
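A hypothetical pre_tool_call hook script under those semantics (event and response field names are assumptions; the accepted shapes follow the Claude Code convention noted above):

  #!/usr/bin/env python3
  # Hypothetical hook script wired from the config.yaml hooks: block.
  import json
  import sys

  event = json.load(sys.stdin)                      # hook payload arrives as JSON on stdin
  args = json.dumps(event.get("arguments", {}))
  if event.get("tool_name") == "terminal" and "rm -rf" in args:
      # Writing JSON to stdout lets the hook block the tool call.
      print(json.dumps({"decision": "block", "reason": "destructive command"}))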
Cherry-picked from PR #13143 by @pefontana.
Pass the user's configured api_key through local-server detection and
context-length probes (detect_local_server_type, _query_local_context_length,
query_ollama_num_ctx) and use LM Studio's native /api/v1/models endpoint in
fetch_endpoint_model_metadata when a loaded instance is present — so the
probed context length is the actual runtime value the user loaded the model
at, not just the model's theoretical max.
Helps local-LLM users whose auto-detected context length was wrong, causing
compression failures and context-overrun crashes.
When /steer is sent during an API call (model thinking), the steer text
sits in _pending_steer until after the next tool batch — which may never
come if the model returns a final response. In that case the steer is
only delivered as a post-run follow-up, defeating the purpose.
Add a pre-API-call drain at the top of the main loop: before building
api_messages, check _pending_steer and inject into the last tool result
in the messages list. This ensures steers sent during model thinking are
visible on the very next API call.
If no tool result exists yet (first iteration), the steer is restashed
for the post-tool drain to pick up — injecting into a user message would
break role alternation.
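Sketch of the drain at the top of the loop (helper names are hypothetical):

  steer_text = self._take_pending_steer()            # hypothetical drain helper
  if steer_text:
      for msg in reversed(messages):                 # backward scan past non-tool messages
          if msg.get("role") == "tool":
              _append_user_guidance(msg, steer_text) # same injection as the post-tool drain
              break
      else:
          # First iteration: no tool result yet; restash so the post-tool drain
          # delivers it without breaking role alternation.
          self._restash_pending_steer(steer_text)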
Three new tests cover the pre-API-call drain: injection into last tool
result, restash when no tool message exists, and backward scan past
non-tool messages.
- Fix duplicate 'timezone' import in e2e conftest
- Fix test_text_before_command_not_detected asserting send() is awaited
when no agent is present in mock setup (text messages don't produce
command output)
Cherry-picked from PR #13159 by @cdanis.
Adds native media attachment delivery to Signal via signal-cli JSON-RPC
attachments param. Signal messages with media now follow the same
early-return pattern as Telegram/Discord/Matrix — attachments are sent
only with the last chunk to avoid duplicates.
Follow-up fixes on top of the original PR:
- Moved Signal into its own early-return block above the restriction
check (matches Telegram/Discord/Matrix pattern)
- Fixed media_files being sent on every chunk in the generic loop
- Restored restriction/warning guards to simple form (Signal exits early)
- Fixed non-hermetic test writing to /tmp instead of tmp_path
Kimi's gateway selects the correct temperature server-side based on the
active mode (thinking -> 1.0, non-thinking -> 0.6). Sending any
temperature value — even the previously "correct" one — conflicts with
gateway-managed defaults.
Replaces the old approach of forcing specific temperature values (0.6
for non-thinking, 1.0 for thinking) with an OMIT_TEMPERATURE sentinel
that tells all call sites to strip the temperature key from API kwargs
entirely.
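Sketch of the sentinel and one call-site check (names from the commit, bodies assumed; each call site handles the strip slightly differently):

  OMIT_TEMPERATURE = object()   # sentinel: drop the temperature key from API kwargs entirely

  def _is_kimi_model(model: str) -> bool:
      return model.lower().startswith("kimi-")       # prefix handling simplified

  def _fixed_temperature_for_model(model: str):
      if _is_kimi_model(model):
          return OMIT_TEMPERATURE
      return None                                    # no forced value for other models (sketch)

  # At a call site:
  temperature = _fixed_temperature_for_model(model)
  if temperature is OMIT_TEMPERATURE:
      kwargs.pop("temperature", None)
  elif temperature is not None:
      kwargs["temperature"] = temperature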
Changes:
- agent/auxiliary_client.py: OMIT_TEMPERATURE sentinel, _is_kimi_model()
prefix check (covers all kimi-* models), _fixed_temperature_for_model()
returns sentinel for kimi models. _build_call_kwargs() strips temp.
- run_agent.py: _build_api_kwargs, flush_memories, and summary generation
paths all handle the sentinel by popping/omitting temperature.
- trajectory_compressor.py: _effective_temperature_for_model returns None
for kimi (sentinel mapped), direct client calls use kwargs dict to
conditionally include temperature.
- mini_swe_runner.py: same sentinel handling via wrapper function.
- 6 test files updated: all 'forces temperature X' assertions replaced
with 'temperature not in kwargs' assertions.
Net: -76 lines (171 added, 247 removed).
Inspired by PR #13137 (@kshitijk4poor).
Add dm_policy and group_policy to the WhatsApp adapter, bringing parity
with WeCom/Weixin/QQ. Allows independent control of DM and group access:
disable DMs entirely, allowlist specific senders/groups, or keep open.
- dm_policy: open (default) | allowlist | disabled
- group_policy: open (default) | allowlist | disabled
- Config bridging for YAML → env vars
- 22 tests covering all policy combinations
Backward compatible — defaults preserve existing behavior.
Cherry-picked from PR #11597 by @MassiveMassimo.
Dropped the run.py group auth bypass (would have skipped user auth
for ALL platforms, not just WhatsApp).
Replaces the serial for-loop in tick() with ThreadPoolExecutor so all
jobs due in a single tick run concurrently. A slow job no longer blocks
others from executing, fixing silent job skipping (issue #9086).
Thread safety:
- Session/delivery env vars migrated from os.environ to ContextVars
(gateway/session_context.py) so parallel jobs can't clobber each
other's delivery targets. Each thread gets its own copied context.
- jobs.json read-modify-write cycles (advance_next_run, mark_job_run)
protected by threading.Lock to prevent concurrent save clobber.
- send_message_tool reads delivery vars via get_session_env() for
ContextVar-aware resolution with os.environ fallback.
Configuration:
- cron.max_parallel_jobs in config.yaml (null = unbounded, 1 = serial)
- HERMES_CRON_MAX_PARALLEL env var override
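Sketch of the concurrent tick (function and variable names assumed; max_workers handling simplified):

  import contextvars
  from concurrent.futures import ThreadPoolExecutor

  def tick(due_jobs, max_parallel_jobs):
      with ThreadPoolExecutor(max_workers=max_parallel_jobs) as pool:
          for job in due_jobs:
              # Each job runs inside a copy of the current context, so ContextVar-based
              # delivery targets cannot clobber each other across threads.
              ctx = contextvars.copy_context()
              pool.submit(ctx.run, _run_job, job)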
Based on PR #9169 by @VenomMoth1.
Fixes #9086
* feat(security): URL query param + userinfo + form body redaction
Port from nearai/ironclaw#2529.
Hermes already has broad value-shape coverage in agent/redact.py
(30+ vendor prefixes, JWTs, DB connstrs, etc.) but missed three
key-name-based patterns that catch opaque tokens without recognizable
prefixes:
1. URL query params - OAuth callback codes (?code=...),
access_token, refresh_token, signature, etc. These are opaque and
won't match any prefix regex. Now redacted by parameter NAME.
2. URL userinfo (https://user:pass@host) - for non-DB schemes. DB
schemes were already handled by _DB_CONNSTR_RE.
3. Form-urlencoded body (k=v pairs joined by ampersands) -
conservative, only triggers on clean pure-form inputs with no
other text.
Sensitive key allowlist matches ironclaw's (exact case-insensitive,
NOT substring - so token_count and session_id pass through).
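Sketch of the query-param redaction (parameter set abbreviated, placeholder string assumed):

  from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

  SENSITIVE_PARAMS = {"code", "access_token", "refresh_token", "signature"}   # abbreviated

  def redact_url_query(url: str) -> str:
      parts = urlsplit(url)
      if not parts.query:
          return url
      pairs = [(k, "REDACTED" if k.lower() in SENSITIVE_PARAMS else v)
               for k, v in parse_qsl(parts.query, keep_blank_values=True)]
      return urlunsplit(parts._replace(query=urlencode(pairs)))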
Tests: +20 new test cases across 3 test classes. All 75 redact tests
pass; gateway/test_pii_redaction and tools/test_browser_secret_exfil
also green.
Known pre-existing limitation: _ENV_ASSIGN_RE greedy match swallows
whole all-caps ENV-style names + trailing text when followed by
another assignment. Left untouched here (out of scope); URL query
redaction handles the lowercase case.
* feat: replace kimi-k2.5 with kimi-k2.6 on OpenRouter and Nous Portal
Update model catalogs for OpenRouter (fallback snapshot), Nous Portal,
and NVIDIA NIM to reference moonshotai/kimi-k2.6. Add kimi-k2.6 to
the fixed-temperature frozenset in auxiliary_client.py so the 0.6
contract is enforced on aggregator routings.
Native Moonshot provider lists (kimi-coding, kimi-coding-cn, moonshot,
opencode-zen, opencode-go) are unchanged — those use Moonshot's own
model IDs which are unaffected.
Cherry-picked from PR #2545 by @Mibayy.
The setup wizard could leave stt.model: "whisper-1" in config.yaml.
When using the local faster-whisper provider, this crashed with
"Invalid model size 'whisper-1'". Voice messages were silently ignored.
_normalize_local_model() now detects cloud-only names (whisper-1,
gpt-4o-transcribe, etc.) and maps them to the default local model
with a warning. Valid local sizes (tiny, base, small, medium, large-v3)
pass through unchanged.
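Sketch of the mapping (cloud-only set abbreviated, default size assumed):

  import logging

  logger = logging.getLogger(__name__)

  _CLOUD_ONLY_STT_MODELS = {"whisper-1", "gpt-4o-transcribe"}   # abbreviated
  _DEFAULT_LOCAL_MODEL = "base"                                 # assumed default size

  def _normalize_local_model(name: str) -> str:
      if name in _CLOUD_ONLY_STT_MODELS:
          logger.warning("stt.model %r is cloud-only; using local model %r instead",
                         name, _DEFAULT_LOCAL_MODEL)
          return _DEFAULT_LOCAL_MODEL
      return name   # tiny/base/small/medium/large-v3 pass through unchanged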
- Renamed _normalize_local_command_model -> _normalize_local_model
(backward-compat wrapper preserved)
- 6 new tests including integration test
- Added lowercase AUTHOR_MAP alias for @Mibayy
Closes #2544
Prefer session_store origin over _parse_session_key() for shutdown
notifications. Fixes misrouting when chat identifiers contain colons
(e.g. Matrix room IDs like !room123:example.org).
Falls back to session-key parsing when no persisted origin exists.
Co-authored-by: Ruzzgar <ruzzgarcn@gmail.com>
Ref: #12766
Follow-up for PR #12252 salvage:
- Extract 75-line inline repair block to _repair_tool_call_arguments()
module-level helper for testability and readability
- Remove redundant 'import re as _re' (re already imported at line 33)
- Bound the while-True excess-delimiter removal loop to 50 iterations
- Add 17 tests covering all 6 repair stages
- Add sirEven to AUTHOR_MAP in release.py
Cherry-picked from PR #12481 by @Sanjays2402.
Reasoning models (GLM-5.1, QwQ, DeepSeek R1) inflate completion_tokens
with internal thinking tokens. The compression trigger summed
prompt_tokens + completion_tokens, causing premature compression at ~42%
actual context usage instead of the configured 50% threshold.
Now uses only prompt_tokens — completion tokens don't consume context
window space for the next API call.
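Sketch of the trigger arithmetic (names are placeholders):

  # Only prompt-side tokens occupy the next call's context window.
  used_tokens = usage.prompt_tokens     # was: usage.prompt_tokens + usage.completion_tokens
  if used_tokens >= context_length * compression_threshold:
      trigger_compression()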
- 3 new regression tests
- Added AUTHOR_MAP entry for @Sanjays2402
Closes #12026
The opt-in-by-default change (70111eea) requires plugins to be listed
in plugins.enabled. The cherry-picked test fixtures didn't write this
config, so two tests failed on current main.
- discover plugin commands before building Telegram command menus
- make plugin command and context engine accessors lazy-load plugins
- add regression coverage for Telegram menu and plugin lookup paths
Replaces global id +/- 1 context lookup with CTE-based same-session
neighbor queries. When multiple sessions write concurrently, id adjacency
does not imply session adjacency — the old query missed real neighbors.
Co-authored-by: Junass1 <ysfalweshcan@gmail.com>
Custom Claude proxies fronted by Cloudflare with Browser Integrity Check
enabled (e.g. `packyapi.com`) reject requests with the default
`Python-urllib/*` signature, returning HTTP 403 "error code: 1010".
`probe_api_models` swallowed that in its blanket `except Exception:
continue`, so `validate_requested_model` returned the misleading
"Could not reach the <provider> API to validate `<model>`" error even
though the endpoint is reachable and lists the requested model.
Advertise the probe request as `hermes-cli/<version>` so Cloudflare
treats it as a first-party client. This mirrors the pattern already used
by `agent/gemini_native_adapter.py` and `agent/anthropic_adapter.py`,
which set a descriptive UA for the same reason.
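Sketch of the header addition on the probe request (base_url, api_key, and the version lookup are placeholders):

  import urllib.request

  req = urllib.request.Request(
      f"{base_url}/models",
      headers={
          "Authorization": f"Bearer {api_key}",
          "User-Agent": f"hermes-cli/{version}",   # replaces the default Python-urllib/* signature
      },
  )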
Reproduction (pre-fix):
python3 -c "
import urllib.request
req = urllib.request.Request(
'https://www.packyapi.com/v1/models',
headers={'Authorization': 'Bearer sk-...'})
urllib.request.urlopen(req).read()
"
urllib.error.HTTPError: HTTP Error 403: Forbidden
(body: b'error code: 1010')
Any non-urllib UA (Mozilla, curl, reqwest) returns 200 with the
OpenAI-compatible models listing.
Tested on macOS (Python 3.11). No cross-platform concerns — the change
is a single header addition to an existing `urllib.request.Request`.
Cherry-picked from PR #10005 by @houziershi.
Discarded prompts (has_any_reasoning=False) were skipped by `continue`
before being added to completed_in_batch. On --resume they were retried
forever. Now they are added to completed_in_batch before the continue.
- Added AUTHOR_MAP entry for @houziershi
Closes #9950
Cherry-picked from PR #9359 by @luyao618.
- Accept camelCase aliases (apiKey, baseUrl, apiMode, keyEnv, defaultModel,
contextLength, rateLimitDelay) with auto-mapping to snake_case + warning
- Validate URL field values with urlparse (scheme + netloc check) — reject
non-URL strings like 'openai-reverse-proxy' that were silently accepted
- Warn on unknown keys in provider config entries
- Re-order URL field priority: base_url > url > api (was api > url > base_url)
- 12 new tests covering all scenarios
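Sketch of the URL-shape check (helper name hypothetical):

  from urllib.parse import urlparse

  def _looks_like_url(value: str) -> bool:
      parts = urlparse(value)
      return bool(parts.scheme and parts.netloc)

  _looks_like_url("https://api.example.com/v1")   # True
  _looks_like_url("openai-reverse-proxy")         # False: rejected instead of silently accepted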
Closes #9332
Plugins now require explicit consent to load. Discovery still finds every
plugin — user-installed, bundled, and pip — so they all show up in
`hermes plugins` and `/plugins`, but the loader only instantiates
plugins whose name appears in `plugins.enabled` in config.yaml. This
removes the previous ambient-execution risk where a newly-installed or
bundled plugin could register hooks, tools, and commands on first run
without the user opting in.
The three-state model is now explicit:
enabled — in plugins.enabled, loads on next session
disabled — in plugins.disabled, never loads (wins over enabled)
not enabled — discovered but never opted in (default for new installs)
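Sketch of the resulting load decision (config access simplified):

  def should_load_plugin(name: str, config: dict) -> bool:
      plugins_cfg = config.get("plugins") or {}
      enabled = set(plugins_cfg.get("enabled") or [])
      disabled = set(plugins_cfg.get("disabled") or [])
      if name in disabled:
          return False         # disabled wins over enabled
      return name in enabled   # discovered-but-never-enabled stays off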
`hermes plugins install <repo>` prompts "Enable 'name' now? [y/N]"
(defaults to no). New `--enable` / `--no-enable` flags skip the prompt
for scripted installs. `hermes plugins enable/disable` manage both lists
so a disabled plugin stays explicitly off even if something later adds
it to enabled.
Config migration (schema v20 → v21): existing user plugins already
installed under ~/.hermes/plugins/ (minus anything in plugins.disabled)
are auto-grandfathered into plugins.enabled so upgrades don't silently
break working setups. Bundled plugins are NOT grandfathered — even
existing users have to opt in explicitly.
Also: HERMES_DISABLE_BUNDLED_PLUGINS env var removed (redundant with
opt-in default), cmd_list now shows bundled + user plugins together with
their three-state status, interactive UI tags bundled entries
[bundled], docs updated across plugins.md and built-in-plugins.md.
Validation: 442 plugin/config tests pass. E2E: fresh install discovers
disk-cleanup but does not load it; `hermes plugins enable disk-cleanup`
activates hooks; migration grandfathers existing user plugins correctly
while leaving bundled plugins off.
The original name was cute but non-obvious; disk-cleanup says what it
does. Plugin directory, script, state path, log lines, slash command,
and test module all renamed. No user-visible state exists yet, so no
migration path is needed.
New website page "Built-in Plugins" documents the <repo>/plugins/<name>/
source, how discovery interacts with user/project plugins, the
HERMES_DISABLE_BUNDLED_PLUGINS escape hatch, disk-cleanup's hook
behaviour and deletion rules, and guidance on when a plugin belongs
bundled vs. user-installable. Added to the Features → Core sidebar next
to the main Plugins page, with a cross-reference from plugins.md.
Rewires @LVT382009's disk-guardian (PR #12212) from a skill-plus-script
into a plugin that runs entirely via hooks — no agent compliance needed.
- post_tool_call hook auto-tracks files created by write_file / terminal
/ patch when they match test_/tmp_/*.test.* patterns under HERMES_HOME
- on_session_end hook runs cmd_quick cleanup when test files were
auto-tracked during the turn; stays quiet otherwise
- /disk-guardian slash command keeps status / dry-run / quick / deep /
track / forget for manual use
- Deterministic cleanup rules, path safety, atomic writes, and audit
logging preserved from the original contribution
- Protect well-known top-level state dirs (logs/, memories/, sessions/,
cron/, cache/, etc.) from empty-dir removal so fresh installs don't
get gutted on first session end
The plugin system gains a bundled-plugin discovery path (<repo>/plugins/
<name>/) alongside user/project/entry-point sources. Memory and
context_engine subdirs are skipped — they keep their own discovery
paths. HERMES_DISABLE_BUNDLED_PLUGINS=1 suppresses the scan; the test
conftest sets it by default so existing plugin tests stay clean.
Co-authored-by: LVT382009 <levantam.98.2324@gmail.com>
OpenAI-compatible clients (Open WebUI, LobeChat, etc.) can now send vision
requests to the API server. Both endpoints accept the canonical OpenAI
multimodal shape:
Chat Completions: {type: text|image_url, image_url: {url, detail?}}
Responses: {type: input_text|input_image, image_url: <str>, detail?}
The server validates and converts both into a single internal shape that the
existing agent pipeline already handles (Anthropic adapter converts,
OpenAI-wire providers pass through). Remote http(s) URLs and data:image/*
URLs are supported.
Uploaded files (file, input_file, file_id) and non-image data: URLs are
rejected with 400 unsupported_content_type.
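Illustrative payloads (values are placeholders; the canonical internal layout shown is an assumption about the helper's output):

  # Chat Completions image part:
  {"type": "image_url", "image_url": {"url": "https://example.com/cat.png", "detail": "low"}}

  # Responses image part:
  {"type": "input_image", "image_url": "https://example.com/cat.png", "detail": "low"}

  # Both normalize to the single internal shape the agent pipeline already handles:
  [{"type": "text", "text": "describe this"},
   {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}}]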
Changes:
- gateway/platforms/api_server.py
- _normalize_multimodal_content(): validates + normalizes both Chat and
Responses content shapes. Returns a plain string for text-only content
(preserves prompt-cache behavior on existing callers) or a canonical
[{type:text|image_url,...}] list when images are present.
- _content_has_visible_payload(): replaces the bare truthy check so a
user turn with only an image is no longer rejected as 'No user message'.
- _handle_chat_completions and _handle_responses both call the new helper
for user/assistant content; system messages continue to flatten to text.
- Codex conversation_history, input[], and inline history paths all share
the same validator. No duplicated normalizers.
- run_agent.py
- _summarize_user_message_for_log(): produces a short string summary
('[1 image] describe this') from list content for logging, spinner
previews, and trajectory writes. Fixes the AttributeError raised when a
list-valued user_message hit user_message[:80] + '...' / .replace().
- _chat_content_to_responses_parts(): module-level helper that converts
chat-style multimodal content to Responses 'input_text'/'input_image'
parts. Used in _chat_messages_to_responses_input for Codex routing.
- _preflight_codex_input_items() now validates and passes through list
content parts for user/assistant messages instead of stringifying.
- tests/gateway/test_api_server_multimodal.py (new, 38 tests)
- Unit coverage for _normalize_multimodal_content, including both part
formats, data URL gating, and all reject paths.
- Real aiohttp HTTP integration on /v1/chat/completions and /v1/responses
verifying multimodal payloads reach _run_agent intact.
- 400 coverage for file / input_file / non-image data URL.
- tests/run_agent/test_run_agent_multimodal_prologue.py (new)
- Regression coverage for the prologue no-crash contract.
- _chat_content_to_responses_parts round-trip coverage.
- website/docs/user-guide/features/api-server.md
- Inline image examples for both endpoints.
- Updated Limitations: files still unsupported, images now supported.
Validated live against openrouter/anthropic/claude-opus-4.6:
POST /v1/chat/completions → 200, vision-accurate description
POST /v1/responses → 200, same image, clean output_text
POST /v1/chat/completions [file] → 400 unsupported_content_type
POST /v1/responses [input_file] → 400 unsupported_content_type
POST /v1/responses [non-image data URL] → 400 unsupported_content_type
Closes #5621, #8253, #4046, #6632.
Co-authored-by: Paul Bergeron <paul@gamma.app>
Co-authored-by: zhangxicen <zhangxicen@example.com>
Co-authored-by: Manuel Schipper <manuelschipper@users.noreply.github.com>
Co-authored-by: pradeep7127 <pradeep7127@users.noreply.github.com>
Closes #8933 more fully, extending the per-tool transform_terminal_output
hook from #12929 to a generic seam that fires after every tool dispatch.
Plugins can rewrite any tool's result string (normalize formats, redact
fields, summarize verbose output) without wrapping individual tools.
Changes
- hermes_cli/plugins.py: add "transform_tool_result" to VALID_HOOKS
- model_tools.py: invoke the hook in handle_function_call after
post_tool_call (which remains observational); first valid str return
replaces the result; fail-open
- tests/test_transform_tool_result_hook.py: 9 new tests covering no-op,
None return, non-string return, first-match wins, kwargs, hook
exception fallback, post_tool_call observation invariant, ordering
vs post_tool_call, and an end-to-end real-plugin integration
- tests/hermes_cli/test_plugins.py: assert new hook in VALID_HOOKS
- tests/test_model_tools.py: extend the hook-call-sequence assertion
to include the new hook
Design
- transform_tool_result runs AFTER post_tool_call so observers always
see the original (untransformed) result. This keeps post_tool_call's
observational contract.
- transform_terminal_output (from #12929) still runs earlier, inside
terminal_tool, so plugins can canonicalize BEFORE the 50k truncation
drops middle content. Both hooks coexist; they target different layers.
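A hypothetical plugin using the new seam (registration mechanics and hook signature are assumptions):

  class PathRedactorPlugin:
      """Hypothetical plugin: rewrite tool results after post_tool_call observers ran."""

      def transform_tool_result(self, tool_name, result, **kwargs):
          if isinstance(result, str) and "/home/alice" in result:
              return result.replace("/home/alice", "~")   # first valid str return replaces the result
          return None                                     # None / non-str leaves the result unchanged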
SessionStore.prune_old_entries was calling
self._has_active_processes_fn(entry.session_id) but the callback wired
up in gateway/run.py is process_registry.has_active_for_session, which
compares against session_key, not session_id. Every other caller in
session.py (_is_session_expired, _should_reset) already passes
session_key, so prune was the only outlier — and because session_id and
session_key live in different namespaces, the guard never fired.
Result in production: sessions with live background processes (queued
cron output, detached agents, long-running Bash) were pruned out of
_entries despite the docstring promising they'd be preserved. When the
process finished and tried to deliver output, the session_key to
session_id mapping was gone and the work was effectively orphaned.
Also update the existing test_prune_skips_entries_with_active_processes,
which was checking the wrong interface (its mock callback took session_id
so it agreed with the buggy implementation). The test now uses a
session_key-based mock, matching the production callback's real contract,
and a new regression guard test pins the behaviour.
Swallowed exceptions inside the prune loop now log at debug level instead
of silently disappearing.
After a conversation gets compressed, run_agent's _compress_context ends
the parent session and creates a continuation child with the same logical
conversation. Every list affordance in the codebase (list_sessions_rich
with its default include_children=False, plus the CLI/TUI/gateway/ACP
surfaces on top of it) hid those children, and resume-by-ID on the old
root landed on a dead parent with no messages.
Fix: lineage-aware projection on the read path.
- hermes_state.py::get_compression_tip(session_id) — walk the chain
forward using parent.end_reason='compression' AND
child.started_at >= parent.ended_at. The timing guard separates
compression continuations from delegate subagents (which were created
while the parent was still live) without needing a schema migration.
- hermes_state.py::list_sessions_rich — new project_compression_tips
flag (default True). For each compressed root in the result, replace
surfaced fields (id, ended_at, end_reason, message_count,
tool_call_count, title, last_active, preview, model, system_prompt)
with the tip's values. Preserve the root's started_at so chronological
ordering stays stable. Projected rows carry _lineage_root_id for
downstream consumers. Pass False to get raw roots (admin/debug).
- hermes_cli/main.py::_resolve_session_by_name_or_id — project forward
after ID/title resolution, so users who remember an old root ID (from
notes, or from exit summaries produced before the sibling Bug 1 fix)
land on the live tip.
All downstream callers of list_sessions_rich benefit automatically:
- cli.py _list_recent_sessions (/resume, show_history affordance)
- hermes_cli/main.py sessions list / sessions browse
- tui_gateway session.list picker
- gateway/run.py /resume titled session listing
- tools/session_search_tool.py
- acp_adapter/session.py
Tests: 7 new in TestCompressionChainProjection covering full-chain walks,
delegate-child exclusion, tip surfacing with lineage tracking, raw-root
mode, chronological ordering, and broken-chain graceful fallback.
Verified live: ran a real _compress_context on a live Gemini-backed
session, confirmed the DB split, then verified
- db.list_sessions_rich surfaces tip with _lineage_root_id set
- hermes sessions list shows the tip, not the ended parent
- _resolve_session_by_name_or_id(old_root_id) -> tip_id
- _resolve_last_session -> tip_id
Addresses #10373.
On macOS, Unix domain socket paths are capped at 104 bytes (sun_path).
SSH appends a 16-byte random suffix to the ControlPath when operating
in ControlMaster mode. With an IPv6 host embedded literally in the
filename and a deeply-nested macOS $TMPDIR like
/var/folders/XX/YYYYYYYYYYYY/T/, the full path reliably exceeds the
limit — every terminal/file-op tool call then fails immediately with
``unix_listener: path "…" too long for Unix domain socket``.
Swap the ``user@host:port.sock`` filename for a sha256-derived 16-char
hex digest. The digest is deterministic for a given (user, host, port)
triple, so ControlMaster reuse across reconnects is preserved, and the
full path fits comfortably under the limit even after SSH's random
suffix. Collision space is 2^64 — effectively unreachable for the
handful of concurrent connections any single Hermes process holds.
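Sketch of the filename derivation (digest input format and the .sock suffix are assumptions):

  import hashlib

  def control_path_name(user: str, host: str, port: int) -> str:
      digest = hashlib.sha256(f"{user}@{host}:{port}".encode()).hexdigest()[:16]
      return f"{digest}.sock"   # deterministic per (user, host, port); 16 hex chars = 2^64 space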
Regression tests cover: path length under realistic macOS $TMPDIR with
the IPv6 host from the issue report, determinism for reconnects, and
distinctness across different (user, host, port) triples.
Closes #11840
Four parametrized cases that pin down the running-agent guard behavior:
/yolo and /verbose dispatch mid-run; /fast and /reasoning get the
"can't run mid-turn" catch-all. Prevents the allowlist from silently
drifting in either direction.
Follow-up to #12704. The SignalAdapter can resolve +E164 numbers to
UUIDs via listContacts, but _parse_target_ref() in the send_message
tool rejected '+' as non-digit and fell through to channel-name
resolution — which fails for contacts without a prior session entry.
Adds an E.164 branch in _parse_target_ref for phone-based platforms
(signal, sms, whatsapp) that preserves the leading '+' so downstream
adapters keep the format they expect. Non-phone platforms are
unaffected.
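Sketch of the added branch (return shape simplified; the rest of _parse_target_ref is unchanged):

  _PHONE_PLATFORMS = {"signal", "sms", "whatsapp"}

  def _parse_target_ref(platform: str, ref: str):
      if platform in _PHONE_PLATFORMS and ref.startswith("+") and ref[1:].isdigit():
          return ref            # keep the leading '+' so adapters see the E.164 form
      ...                       # existing digit / channel-name resolution continues here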
Reported by @qdrop17 on Discord after pulling #12704.