Splits the standalone molecule-mcp wrapper into three single-concern
modules per the OSS-shape refactor program:
* mcp_heartbeat.py — register POST + heartbeat loop + auth-failure
escalation + inbound-secret persistence
* mcp_workspace_resolver.py — single + multi-workspace env validation
+ on-disk token-file read + operator-help printer
* mcp_inbox_pollers.py — activate inbox singleton + spawn one daemon
poller per workspace
mcp_cli.py becomes a 193-LOC orchestrator: validates env, calls each
module's helpers, hands off to a2a_mcp_server.cli_main. The console-
script entry molecule-mcp = molecule_runtime.mcp_cli:main is preserved.
Back-compat aliases (mcp_cli._build_agent_card, _heartbeat_loop,
_resolve_workspaces, etc.) re-export the new modules' authoritative
functions so existing tests + wheel_smoke.py + any downstream caller
keep working unchanged. A new test file pins each alias as the
exact same callable (drift gate via `is`).
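The alias + drift-gate shape can be sketched as below. This is a minimal model, not the real modules: the module objects are built with `types.ModuleType` purely so the `is` check is demonstrable standalone.

```python
import types

# Authoritative function lives in the new single-concern module.
mcp_heartbeat = types.ModuleType("mcp_heartbeat")
def _heartbeat_loop():
    """Loop body (illustrative stub)."""
mcp_heartbeat._heartbeat_loop = _heartbeat_loop

# Back-compat alias: mcp_cli re-exports the SAME callable object, so callers
# that import from the old location keep working unchanged.
mcp_cli = types.ModuleType("mcp_cli")
mcp_cli._heartbeat_loop = mcp_heartbeat._heartbeat_loop
```

The drift gate then asserts identity (`is`), not mere equality, so a future refactor that rebinds the alias to a wrapper or a copy fails the test.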
Tests:
* 62 existing test_mcp_cli.py + test_mcp_cli_multi_workspace.py
pass against the split.
* Two heartbeat-loop persist tests + the auth-escalation caplog
setup updated to target mcp_heartbeat (the module where the loop
body now lives) instead of mcp_cli (still works through aliases
for direct calls, but Python's name resolution inside the loop
body uses the new module's namespace).
* test_mcp_cli_split.py adds 11 new tests: alias drift gate +
inbox-poller single + multi-workspace branches + degraded
inbox-import logging path (none of those existed before).
Refs RFC #2873.
Two call sites — workspace_provision.go:537 and org_import.go:54 —
duplicated the same `if runtime == "claude-code"` branch deciding
the default model when the operator/agent didn't supply one. They
were copy-pasted; nothing prevented them from drifting silently.
Extract to `models.DefaultModel(runtime string) string`. Both call
sites now route through the helper. New runtimes need one entry
in DefaultModel + one assertion in TestDefaultModel — pre-fix it
required two source edits + an audit.
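The helper is Go (`models.DefaultModel`); a minimal Python sketch of the same contract, with the universal default name as a placeholder:

```python
UNIVERSAL_DEFAULT = "default"  # placeholder for the real universal model name

_RUNTIME_DEFAULTS = {
    "claude-code": "sonnet",
}

def default_model(runtime: str) -> str:
    # Empty, unknown, or differently-cased runtimes fall through to the
    # universal default, matching the pre-refactor fallthrough behavior.
    # The lookup never returns "" (the invariant the tests pin).
    return _RUNTIME_DEFAULTS.get(runtime, UNIVERSAL_DEFAULT)
```

Adding a runtime is one dict entry plus one test assertion, instead of two source edits and an audit.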
Foundation for the future `RuntimeConfig` interface (RFC #2873 +
task #231): once we add `ProvisioningTimeout()`, `CapabilitiesSupported()`
etc., the helper expands to per-runtime structs and `DefaultModel`
becomes one method on the interface.
## Coverage
15 unit tests pinning the exact contract:
- claude-code → "sonnet"
- 9 other known runtimes → universal default
- empty + unknown → universal default (matches pre-refactor fallthrough)
- case-sensitivity preserved (CLAUDE-CODE → universal default)
Plus invariant test: `DefaultModel` never returns "" — protects
against a future "return early on unknown" regression that would
silently break workspace creation.
## Verification
- go build ./... clean
- 15 model unit tests pass
- existing handler tests untouched (no behavior change at call sites)
- identical output to pre-refactor for every input
First iteration of the OSS-shape refactor program. Each PR meets all
7 bars (plugin/abstract/modular/SSOT/coverage/cleanup/file-split).
Refs RFC #2873.
Every staging push run for the last 4 SHAs was cancelled by the
matching pull_request run because both fired into the same
concurrency group:
group: ${{ github.workflow }}-${{ ...sha }}
Same SHA → same group → cancel-in-progress=true means the second
arrival cancels the first. Empirically the push run lost the race;
staging branch-protection then saw a CANCELLED required check and
the auto-promote chain stalled.
Fix: include github.event_name in the group key. push and
pull_request runs for the same SHA now hash to different groups,
both complete, both report SUCCESS to branch protection.
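The fixed key shape, as a sketch (surrounding workflow fields assumed):

```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.event_name }}-${{ github.sha }}
  cancel-in-progress: true
```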
Pattern of the bug:
10:46 sha=1e8d7ae1 ev=pull_request conclusion=success
10:46 sha=1e8d7ae1 ev=push conclusion=cancelled
10:45 sha=ecf5f6fb ev=pull_request conclusion=success
10:45 sha=ecf5f6fb ev=push conclusion=cancelled
10:28 sha=471dff25 ev=pull_request conclusion=success
10:28 sha=471dff25 ev=push conclusion=cancelled
10:12 sha=9e678ccd ev=pull_request conclusion=success
10:12 sha=9e678ccd ev=push conclusion=cancelled
Same drift class as the 2026-04-28 auto-promote-staging incident
(memory: feedback_concurrency_group_per_sha.md) — globally-scoped
groups silently cancel runs in matched-SHA scenarios.
This is the only workflow in .github/workflows/ that uses the
narrow per-sha shape without event_name. Others either don't use
concurrency at all, or use ${{ github.ref }} which is event-
neutral.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Previous workflow applied only 049_delegations.up.sql — fragile to
future migrations that touch the delegations table or any other
handlers/-tested table. Operator would have to remember to update
the workflow's psql -f line per migration.
New behavior: loop every .up.sql in lexicographic order, apply each
with ON_ERROR_STOP=1 + per-migration result captured. Failed migrations
are SKIPPED rather than blocking the suite — handles the historical
migrations (017_memories_fts_namespace, 042_a2a_queue, etc.) that
depend on tables since renamed/dropped and can't replay from scratch.
Migrations that DO succeed land their tables, which is sufficient for
the integration tests in handlers/.
Sanity gate at the end: if the delegations table is missing after the
replay, hard-fail with a loud error. That catches a real regression
where 049 itself becomes broken (e.g., schema rename), separate from
the historical-broken-migration noise above.
Per-migration log line ("✓" or "⊘ skipped") makes it easy to spot
when a migration that SHOULD have replayed didn't.
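The replay policy reduces to a small loop; a sketch with the SQL application injected as a callable (the real step shells out to `psql -v ON_ERROR_STOP=1 -f`):

```python
def replay_migrations(files, apply):
    """Apply every migration in lexicographic order. A failed migration is
    logged and skipped rather than aborting the suite; the caller's sanity
    gate (e.g. 'delegations table exists') catches real regressions."""
    results = {}
    for f in sorted(files):  # lexicographic order, matching on-disk naming
        ok = apply(f)
        print(("✓ " if ok else "⊘ skipped ") + f)
        results[f] = ok
    return results
```

A historical migration that cannot replay simply shows up as `⊘ skipped`, while the migrations that do succeed land their tables.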
Verified locally: full migration chain runs, 049 lands, all 7
integration tests pass against the chained-migration DB.
Closes #320.
Three small follow-ups from #2866 self-review:
1. TestIntegration_Sweeper_StaleHeartbeatIsMarkedStuck — assert
strings.Contains(errDet, "no heartbeat for") instead of != "".
The original "non-empty" check passes for any error_detail value;
if a future regression swaps the message format, the test wouldn't
catch it. Pin the production format string explicitly.
2. TestIntegration_Sweeper_DeadlineExceededIsMarkedFailed — drop the
redundant `last_heartbeat = now()` write. The sweeper checks
deadline FIRST (the stronger statement) and short-circuits before
evaluating heartbeat staleness, so the heartbeat field is irrelevant
for that test path.
3. integrationDB doc comment now warns explicitly that the helper is
NOT t.Parallel()-safe — it hot-swaps the package-level mdb.DB and
restores via t.Cleanup. If a future contributor adds t.Parallel()
to one of these tests they race on the global. Comment makes the
constraint discoverable instead of a debugging surprise.
All 7 integration tests still pass against real Postgres locally.
OrgHandler.Import was non-idempotent — every call INSERTed a fresh row
for every workspace in the tree, regardless of whether matching
workspaces already existed. Calling /org/import twice with the same
template duplicated the entire tree.
This was a bigger leak source than TeamHandler.Expand (deleted in
PR #2856). tenant-hongming accumulated 72 distinct child workspaces
in 4 days entirely from repeated org-template spawns of the same
template — the (tier × runtime) matrix in the audit data was the
template's static shape, multiplied by spawn count.
Fix: route through a new lookupExistingChild helper before INSERT.
Skip-if-exists semantics by default:
- Match on (parent_id, name) using `IS NOT DISTINCT FROM` so NULL
parents (root workspaces) are included.
- Ignore status='removed' rows so collapsed teams or deleted
workspaces don't block re-import.
- Recursion still runs on the existing id so partial-match templates
(parent exists, some children missing) backfill correctly instead
of either no-op'ing the whole subtree or duplicating the existing
children.
- Result entries for skipped nodes carry skipped:true so callers
(canvas Import preflight modal) can surface "5 of 7 already
existed, 2 created."
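The match semantics can be modeled in a few lines. This is an in-memory sketch of the SQL predicate, not the handler itself; the key point is that `IS NOT DISTINCT FROM` treats two NULLs as equal, which Python's `None == None` mirrors:

```python
def lookup_existing_child(rows, parent_id, name):
    # Mirrors `parent_id IS NOT DISTINCT FROM $1 AND name = $2`:
    # None == None is True, so NULL parents (root workspaces) match too.
    # status='removed' rows are ignored so deleted workspaces don't block
    # re-import.
    for r in rows:
        if (r["parent_id"] == parent_id and r["name"] == name
                and r["status"] != "removed"):
            return r
    return None
```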
The recursion that walked ws.Children is extracted into
recurseChildrenForImport so both the create-path and the skip-path
share one implementation — no duplicated grid math, no two paths to
keep in sync.
Note: replace_if_exists semantics (re-roll: stop+delete old, create
new) are deferred. Skip-if-exists alone closes the leak; re-roll is
a later UX decision for the canvas Import preflight modal.
Tests:
- 4 sqlmock cases on lookupExistingChild: not-found, found,
nil-parent (the IS NOT DISTINCT FROM NULL trick), DB-error
propagates (must fail fast — silent fallback to INSERT is the
failure mode the helper exists to prevent).
- 1 source-level AST gate (per memory feedback_behavior_based_ast_gates.md):
pins that h.lookupExistingChild( appears BEFORE INSERT INTO workspaces
in org_import.go. If a future refactor reintroduces the un-checked
INSERT, the gate fails. Verified load-bearing by removing the call —
build fails (helper symbol gone).
go vet ./... clean. go test ./internal/handlers/ -count 1 — all green
(4.2s, no regression on existing OrgImport / Provision / Team tests).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Real-Postgres tests for the RFC #2829 PR-3 sweeper. Validates:
- Deadline-exceeded rows are marked failed with the expected
error_detail
- Stale-heartbeat in-flight rows are marked stuck (uses
DELEGATION_STUCK_THRESHOLD_S env override for deterministic
timing)
- Healthy rows (fresh heartbeat + future deadline) are not touched
— no false-positive against well-behaved delegations
These extend the gate added in the previous commit so the workflow
catches sweeper regressions, not just ledger-write ones. All 7
integration tests now pass; CI workflow runs them all.
Multi-model review of #2862 caught a non-load-bearing assertion: the
test used `expect(labels).not.toContain(expect.stringMatching(...))`
to claim the "Expand to Team" right-click item is gone. But vitest's
toContain uses Object.is/===, so asymmetric matchers like
expect.stringMatching are plain objects that never === any string —
the assertion silently passed for ANY string array, including arrays
that DID contain "Expand to Team". The test would have green-lit the
unfixed code.
Switch to the literal substring shape the rest of this file already
uses (see lines 175/183/254 — labels.some((l) => l.includes(...))).
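The gotcha is language-agnostic: membership checks use equality, and a matcher object never equals any string. A Python analogy of the same broken vs. load-bearing shapes (the `StringMatching` class is a stand-in, not a real vitest API):

```python
class StringMatching:
    """Stand-in for an asymmetric matcher object (illustrative only)."""
    def __init__(self, pattern):
        self.pattern = pattern

labels = ["Expand to Team", "Delete", "Restart"]

# Broken shape: a matcher OBJECT never equals a string, so `not in`
# passes even though the label IS present in the list.
broken_passes = StringMatching("Expand") not in labels

# Load-bearing shape: check the literal substring against each label.
present = any("Expand to Team" in label for label in labels)
```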
Verified the new assertion is load-bearing:
1. Reintroduced `{ label: "Expand to Team", ... }` into the
childless-workspace branch of ContextMenu.tsx
2. Ran the test — failed at the new assertion line as expected
3. Reverted the regression — test passes again
Net diff: replaces one broken expect with one correct expect + a
WHY-comment noting the toContain/asymmetric-matcher gotcha so the
next reader (or test writer) doesn't reintroduce the same shape.
Per memory feedback_assert_exact_not_substring.md: pin assertions
that fail on the old code path; this assertion never fired even on
the bug it was written to catch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pairs with PR #2856 which removed the backend POST /workspaces/:id/expand
route. With the backend gone, the canvas right-click "Expand to Team"
button calls a 404. Remove the button and its callback.
ContextMenu.tsx:
- Delete handleExpand callback (8 lines)
- Drop the "Expand to Team" item from the childless-workspace menu
array; childless workspaces now only show the regular actions
(Extract from Team / Export Bundle / Duplicate / Pause / Restart /
Delete).
Toolbar.tsx:
- Drop "expand," from the right-click help-text shortcut.
ContextMenu.keyboard.test.tsx — two new pinning cases:
- "'Expand to Team' menu item is gone (childless workspace)" —
asserts the label literal is absent + the regular actions
(Delete, Restart) are still present.
- "'Collapse Team' is still present when the workspace HAS children" —
sanity that the parent-with-children menu (Arrange Children /
Collapse Team / Zoom to Team) didn't regress.
How users create children now: the existing + New Workspace dialog
(CreateWorkspaceDialog.tsx) already has a parent picker. No new UI
needed — every workspace can be a parent via the regular Create
flow with parent_id set.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two-part PR:
## Fix: result_preview was lost on completion
Self-review of #2854 caught a real bug. SetStatus has a same-status
replay no-op; the order of calls in `executeDelegation` completion
+ `UpdateStatus` completed branch clobbered the preview field:
1. updateDelegationStatus(completed, "") fires
2. inner recordLedgerStatus(completed, "", "")
→ SetStatus transitions dispatched → completed with preview=""
3. outer recordLedgerStatus(completed, "", responseText)
→ SetStatus reads current=completed, status=completed
→ SAME-STATUS NO-OP, never writes responseText → preview lost
Confirmed against real Postgres (see integration test). Strict-sqlmock
unit tests passed because they pin SQL shape, not row state.
Fix: call the WITH-PREVIEW recordLedgerStatus FIRST, then
updateDelegationStatus. The inner call becomes the no-op (correctly
preserves the row written by the outer call).
Same gap fixed in UpdateStatus handler — body.ResponsePreview was
never landing in the ledger because updateDelegationStatus's nested
SetStatus(completed, "", "") fired first.
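The clobbering sequence is easy to model. A toy version of the same-status no-op semantics, showing how call order decides whether the preview survives (the class is illustrative, not the real Go helper):

```python
class Ledger:
    """Toy model of SetStatus with a same-status replay no-op."""
    def __init__(self):
        self.status, self.preview = "queued", ""

    def set_status(self, status, preview=""):
        if status == self.status:
            return  # same-status replay no-op: never overwrites preview
        self.status, self.preview = status, preview

# Buggy order: the no-preview call lands first, so the WITH-PREVIEW
# call hits the same-status no-op and the preview is lost.
buggy = Ledger()
buggy.set_status("completed")
buggy.set_status("completed", "result text")

# Fixed order: WITH-PREVIEW call first; the later call is the
# harmless no-op that preserves the row.
fixed = Ledger()
fixed.set_status("completed", "result text")
fixed.set_status("completed")
```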
## Gate: real-Postgres integration tests + CI workflow
The unit-test-only workflow that shipped #2854 was the root cause.
Adding two layers of defense:
1. workspace-server/internal/handlers/delegation_ledger_integration_test.go
— `//go:build integration` tag, requires INTEGRATION_DB_URL env var.
4 tests:
* ResultPreviewPreservedThroughCompletion (regression gate for the
bug above — fires the production call sequence in fixed order
and asserts row.result_preview matches)
* ResultPreviewBuggyOrderIsLost (DIAGNOSTIC: confirms the
same-status no-op contract works as designed; if SetStatus's
semantics ever change, this test fires)
* FailedTransitionCapturesErrorDetail (failure-path symmetry)
* FullLifecycle_QueuedToDispatchedToCompleted (forward-only +
happy path)
2. .github/workflows/handlers-postgres-integration.yml
— required check on staging branch protection. Spins postgres:15
service container, applies the delegations migration, runs
`go test -tags=integration` against the live DB. Always-runs +
per-step gating on path filter (handlers/wsauth/migrations) so
the required-check name is satisfied on PRs that don't touch
relevant code.
Local dev workflow (file header documents this):
docker run --rm -d --name pg -e POSTGRES_PASSWORD=test -p 55432:5432 postgres:15-alpine
psql ... < workspace-server/migrations/049_delegations.up.sql
INTEGRATION_DB_URL="postgres://postgres:test@localhost:55432/molecule?sslmode=disable" \
go test -tags=integration ./internal/handlers/ -run "^TestIntegration_"
## Why this matters
Per memory `feedback_mandatory_local_e2e_before_ship`: backend PRs
MUST verify against real Postgres before claiming done. sqlmock pins
SQL shape; only a real DB can verify row state. The workflow makes
this gate mandatory rather than optional.
Every workspace can have children via the regular CreateWorkspace flow
with parent_id set, so a separate handler that bulk-creates from
config.yaml's sub_workspaces (and was non-idempotent — calling it twice
duplicated the team) earned its way out. "Team" is just the state of
having children; expanding/collapsing is purely a canvas-side visual
action that toggles the `collapsed` column via PATCH.
The non-idempotency directly caused tenant-hongming's vCPU starvation:
72 distinct child workspaces accumulated in 4 days, ~14 leaked EC2s
(50 of 64 vCPU consumed by stale teams), every Canvas tabs E2E retry
flaking on RunInstances VcpuLimitExceeded.
What stays:
- TeamHandler.Collapse — still useful; stops + removes children via
StopWorkspaceAuto. Reachable from the canvas Collapse Team button.
(Note: that button currently calls PATCH /workspaces/:id, not the
Collapse endpoint — that's a separate reachability question for
later.)
- findTemplateDirByName helper — kept in team.go pending a relocate
decision; no in-package consumers after Expand.
- The four other paths that create child workspaces continue to work
unchanged: regular POST /workspaces with parent_id, OrgHandler.Import
(recursive tree), Bundle import, scripts.
What goes:
- POST /workspaces/:id/expand route (router.go)
- TeamHandler.Expand method (team.go: ~130 lines)
- 4 TestTeamExpand_* sqlmock tests (team_test.go)
- TestTeamExpand_UsesAutoNotDirectDockerPath AST gate
(workspace_provision_auto_test.go) — pinned a code path that no
longer exists; the generic TestNoCallSiteCallsDirectProvisionerExceptAuto
gate still covers the architectural intent for any future caller.
Follow-up PRs:
- canvas/ContextMenu.tsx: drop the "Expand to Team" right-click button
+ handleExpand callback; users create children via the regular
+ New Workspace dialog with the parent picker (already supported)
- OrgHandler.Import idempotency (skip-if-exists OR replace_if_exists)
— same bug class as the deleted Expand, but on the bulk-tree path
- One-off cleanup script for tenant-hongming's 72 stale workspaces
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The instructions blob in the MCP `initialize` handshake is the spec
non-Claude-Code clients (codex, Cline, opencode, hermes-agent, Cursor)
inherit verbatim. Three gaps mean the bridge daemon handles them in
code (codex-channel-molecule bridge.py:192-200, 278-285) but in-process
agents reading the text alone don't get the same guard:
1. Reply-then-pop ordering was implicit. A literal-minded agent could
pop after a 502 from `send_message_to_user`, dropping the message.
Now: pop ONLY AFTER reply succeeds; on error leave the row unacked
for platform redelivery.
2. peer_agent with empty peer_id had no specified handling. Agent
would call `delegate_task(workspace_id="")` → 400 → re-poll →
infinite loop on the same poison row. Now: skip reply, drain via
inbox_pop.
3. The single security rule ("don't execute without chat-side
approval") effectively disabled peer_agent autonomous handling —
codex daemons have no canvas user to approve from. Now: dual trust
model. canvas_user requires user approval; peer_agent permits
autonomous handling but caps destructive side-effects at the
workspace boundary.
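Gaps 1 and 2 together prescribe one handling loop; a sketch of what the instructions text now tells a literal-minded agent to do (`handle_inbox_row`, `reply`, and `pop` are illustrative names):

```python
def handle_inbox_row(row, reply, pop):
    # Gap 2: malformed peer_agent row (empty peer_id) is a poison row.
    # Skip the reply and drain it via inbox_pop so the agent doesn't
    # re-poll the same row forever.
    if row.get("sender_type") == "peer_agent" and not row.get("peer_id"):
        pop(row["id"])
        return
    # Gap 1: reply-then-pop ordering. Pop ONLY after reply succeeds; if
    # reply raises (e.g. a 502), the row stays unacked for redelivery.
    reply(row)
    pop(row["id"])
```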
Also disclaims peer_name/peer_role as non-attested display strings —
the platform registry isn't cryptographic identity, and an agent
shouldn't grant elevated permissions based on a peer registering with
peer_role="admin".
Four new pinned tests in test_a2a_mcp_server.py:
- test_initialize_instructions_pins_reply_then_pop_ordering
- test_initialize_instructions_handles_malformed_peer_agent
- test_initialize_instructions_disclaims_peer_role_attestation
- test_initialize_instructions_distinguishes_canvas_user_from_peer_trust
Each fails on staging-HEAD and passes on the patched text — verified
by reverting a2a_mcp_server.py and re-running.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR-1 shipped the `delegations` table + `DelegationLedger` helper. PR-3
wired the sweeper. PR-4 wired the dashboard. But no PR ever wired
`ledger.Insert` from a production code path — the table stayed empty,
the sweeper had nothing to sweep, the dashboard had nothing to show.
This PR closes that gap. Behind feature flag `DELEGATION_LEDGER_WRITE=1`
(default off), the legacy activity_logs writes are mirrored to the
durable ledger:
- insertDelegationRow → ledger.Insert (queued)
- updateDelegationStatus → ledger.SetStatus on every status transition
- executeDelegation completion path → ledger.SetStatus(completed,
result_preview) for the result preview that activity_logs already
stores in response_body
- Record handler → ledger.Insert + ledger.SetStatus(dispatched) so
agent-initiated delegations land in the same table
## Why a flag
The legacy flow has ~30 strict-sqlmock tests pinning exactly which SQL
statements fire per handler. Adding ledger writes always-on would
force adding ExpectExec stanzas to each. Flag-off keeps all 30 green
without churn; flag-on lets operators populate the table in staging
to feed the sweeper + dashboard once the agent-side cutover (RFC #2829
PR-5) has proven the round-trip end-to-end.
Default off → byte-identical to pre-#318 behavior.
## Status vocabulary mapping
activity_logs uses a freer status vocabulary than the ledger's CHECK
constraint allows. updateDelegationStatus is called with values like
"received" that the ledger doesn't accept; the wiring filters via a
switch to only forward known-good values, skipping anything else.
Record's first activity_logs row is `dispatched` but the ledger's
Insert path requires `queued` as initial state. Insert as queued first;
the very next SetStatus(..., dispatched) promotes it on the same row.
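The filter reduces to an allowlist switch; a sketch with an assumed status set (the real CHECK constraint's exact vocabulary is not reproduced here):

```python
# Assumed ledger vocabulary for illustration; the real set is defined by
# the table's CHECK constraint.
_LEDGER_STATUSES = {"queued", "dispatched", "completed", "failed", "stuck"}

def forward_to_ledger(ledger, status, preview=""):
    # Legacy activity_logs values like "received" are skipped, not
    # forwarded, so they never trip the ledger's CHECK constraint.
    if status not in _LEDGER_STATUSES:
        return False
    ledger.set_status(status, preview)
    return True
```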
## Coverage
8 wiring tests (delegation_ledger_writes_test.go):
- flag off → no SQL fired (rollout safety contract)
- flag on → INSERT + UPDATE fire as expected
- flag rejects loose truthy values (true/yes/0/on/TRUE) — only "1"
is the on signal, matching PR-2 + PR-5 conventions
- terminal-state replay swallows ErrInvalidTransition (legacy is
authoritative; ledger replay error is not a delegation failure)
All 30 existing delegation_test.go tests still pass — flag default off
keeps the strict-sqlmock surface unchanged.
Refs RFC #2829.
workspace.go was 950 lines after the dispatcher work in PRs #2811 +
#2824 + #2843 + #2846 + #2847 + #2848 + #2850. This extracts the 6
SoT dispatcher helpers into a new workspace_dispatchers.go so the
file is the architectural unit it deserves to be (one place for
"how do we route a workspace lifecycle verb to a backend?").
Moved (no body changes — pure cut + paste with imports):
- HasProvisioner (gate accessor)
- provisionWorkspaceAuto (async provision)
- provisionWorkspaceAutoSync (sync provision, runRestartCycle's path)
- StopWorkspaceAuto (stop dispatcher)
- RestartWorkspaceAuto (restart wrapper)
- RestartWorkspaceAutoOpts (restart with resetClaudeSession)
workspace.go shrinks from 950 → 735 lines and now holds:
- WorkspaceHandler struct + constructor
- SetCPProvisioner / SetEnvMutators
- Create / List / Get / scanWorkspaceRow
- HTTP handler glue
workspace_dispatchers.go is 255 lines and holds the dispatcher trio +
sync variant + gate accessor + a header docblock summarizing the
history (PRs that added each helper) and the source-level pin tests
that gate against drift.
Source-level pin tests updated:
- TestNoCallSiteCallsDirectProvisionerExceptAuto: workspace_dispatchers.go
added to allowlist (the dispatcher IS the place that calls per-backend
bodies directly).
- TestNoCallSiteCallsBareStop: same.
- TestNoBareBothNilCheck / TestOrgImportGate_UsesHasProvisionerNotBareField:
no change — they were source-pinning specific files, not all callers.
Build clean, vet clean, full test suite passes (1742 / 0 in workspace,
all Go test packages green).
Out of scope (#2800 has more):
- workspace_provision.go (869 lines) split into Docker + CP halves —
files would still be 400+ each, marginal value. Defer until a
third backend lands and the symmetry breaks.
- Splitting Create / List / Get into per-handler files — they're
short and tightly coupled to the struct; keep co-located.
Closes #2800 (partial). Filing a follow-up issue if/when workspace.go
or workspace_provision.go grows past 800 lines again.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Self-review of #2852: the inline comment on the IssueToken-failed branch
still referenced POST /workspaces/:id/tokens, which never shipped. The
recovery path that did ship in #2852 is POST /workspaces/:id/external/rotate.
Update the hint so the next operator who hits this failure mode finds
the right endpoint.
External workspaces (runtime=external) lose their workspace_auth_token
the moment the create modal closes — the token is unrecoverable from
any later DB read. Operators who lost their copy or want to respond to
a suspected leak had no recovery path short of recreating the workspace
(which also breaks cross-workspace delegation links + memory namespace).
This PR adds two endpoints + a Config-tab section that surfaces them:
POST /workspaces/:id/external/rotate
Revokes any prior live tokens, mints a fresh one, returns the same
ExternalConnectionInfo payload Create returns. Old credentials stop
working immediately — the previously-paired agent will fail auth on
its next heartbeat (~20s).
GET /workspaces/:id/external/connection
Returns the connect block with auth_token="". For the operator who
just needs to re-find PLATFORM_URL / WORKSPACE_ID / one of the
snippets without invalidating the live agent.
Both reject runtime ≠ external with 400 + a hint pointing at /restart
for non-external runtimes (which mints AND injects into the container).
## Why a flag isn't needed
The endpoints are purely additive — Create's behavior is unchanged.
Existing external workspaces don't see anything different until an
operator clicks the new buttons.
## DRY refactor
Extracted BuildExternalConnectionPayload() in external_connection.go
as the single source of truth for the connect payload shape. Create,
Rotate, and GetExternalConnection all call it. Adds a snippet once →
all three endpoints emit it. Trims trailing slash on platform_url so
no double-slash sneaks into registry_endpoint.
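A sketch of the single-source-of-truth builder (the Go original is not shown; field names beyond platform_url / workspace_id / auth_token are illustrative):

```python
def build_external_connection_payload(platform_url, workspace_id, auth_token=""):
    # Trim the trailing slash once, here, so no derived endpoint can grow
    # a double-slash. auth_token is "" for the read-only GET endpoint.
    base = platform_url.rstrip("/")
    return {
        "platform_url": base,
        "workspace_id": workspace_id,
        "auth_token": auth_token,
        "registry_endpoint": f"{base}/registry",  # illustrative field
    }
```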
## Canvas
ExternalConnectionSection mounts in ConfigTab when runtime=external.
Two buttons:
- "Show connection info" (cosmetic) — fetches GET /external/connection
- "Rotate credentials" (destructive) — confirm dialog explains the
impact, then POST /external/rotate
Both reuse the existing ExternalConnectModal so operators don't learn
a second snippet UX.
## Coverage
10 Go tests:
- Rotate happy path (revoke + mint order, payload shape, broadcast event)
- Rotate refuses non-external runtimes (400 with restart hint)
- Rotate 404 on unknown workspace + 400 on empty id
- GetExternalConnection happy path (auth_token="", same payload shape)
- GetExternalConnection refuses non-external + 404 on unknown
- BuildExternalConnectionPayload — placeholder substitution + trailing
slash trimming + blank-token contract
6 canvas tests:
- both action buttons render
- "Show" calls GET /external/connection and opens modal
- "Rotate" opens confirm dialog before firing POST
- Cancel dismisses without rotating
- Confirm POSTs and opens modal with returned token
- API failures surface as visible error chips
Migration: existing external workspaces gain new abilities; no data
migration. The DRY refactor preserves byte-identical Create response
shape (8 ConfigTab tests + all existing handler tests still pass).
Closes #319.
Pre-fix _peer_metadata was an unbounded dict — a workspace receiving
from N distinct peers across its lifetime accumulated entries
indefinitely (~100 bytes × N). Not crash-class at typical scale (10K
peers ≈ 1 MB) but unbounded. The TTL-at-read pattern bounded
staleness but did nothing for memory.
Fix: hand-rolled LRU on top of OrderedDict. No new dependency.
- _PEER_METADATA_MAXSIZE = 1024 (issue's recommended bound)
- _peer_metadata_get(canon) — read + LRU touch (move to MRU)
- _peer_metadata_set(canon, value) — write + evict-if-over-maxsize
- All production reads/writes route through the helpers
- _peer_metadata_lock guards the OrderedDict ops so concurrent
background-enrichment workers (#2484) don't race the LRU
invariant
Why hand-rolled vs cachetools:
- No new dep. workspace/ has 0 cache libraries today; adding one
for ~30 lines is negative leverage.
- The TTL is enforced at the call site (existing pattern); only
the size cap + LRU is new. cachetools.TTLCache fuses the two,
which would force a refactor of every caller's TTL check.
- The size + lock are simple enough that a future swap-in of
cachetools is mechanical if needs evolve.
Why maxsize matters more than ttl (issue's framing):
A runaway poller that touches new peer_ids every push would still
grow within a single TTL window — TTL eviction only fires at
read time. The size cap fires immediately on insert, regardless
of read pattern.
Three new tests:
- test_peer_metadata_set_evicts_lru_when_at_maxsize
- test_peer_metadata_get_promotes_to_lru_head
- test_peer_metadata_set_replaces_existing_entry_in_place
1742 passed / 0 failed locally (78 new + 1664 existing).
Closes #2482.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The inbox poller's notification callback called the synchronous
enrich_peer_metadata on every push, blocking the poller for up to
2s × N uncached peers per poll batch. Push delivery latency was
gated on registry RTT — exactly what PR #2471's negative-cache patch
was trying to avoid amplifying.
Fix: cache-first nonblocking path with a tiny background worker pool.
enrich_peer_metadata_nonblocking(peer_id):
- Cache hit (fresh, within TTL): return cached record immediately
- Cache miss / stale: return None, schedule background
fetch via ThreadPoolExecutor
The first push from a new peer arrives metadata-light (bare peer_id);
the next push within the 5-min TTL hits the warm cache and gets full
name/role. Acceptable trade-off because the channel-envelope
enrichment is a UX nicety, not a correctness invariant — and the
cold-cache window per peer is bounded to one push.
Defenses:
- In-flight gate (_enrich_in_flight) — N concurrent pushes for the
same uncached peer schedule exactly ONE worker, not N. Without
this, a chatty peer's first burst of pushes would amplify into
parallel registry GETs — the exact DoS-on-self pattern the
negative cache was meant to rate-limit.
- Lazy executor init — most test fixtures + short-lived CLI
invocations never need it; only the long-running molecule-mcp
path actually fires background work.
- Daemon-style threads via thread_name_prefix; executor never
blocks process exit.
Tests:
- test_enrich_peer_metadata_nonblocking_cache_hit_returns_immediately
- test_enrich_peer_metadata_nonblocking_cache_miss_schedules_fetch
- test_enrich_peer_metadata_nonblocking_coalesces_duplicate_pushes
- test_enrich_peer_metadata_nonblocking_invalid_peer_id_returns_none
Plus updates to the existing test_envelope_enrichment_* suite that
asserted synchronous behavior — they now drain the in-flight set via
_wait_for_enrichment_inflight_for_testing before checking cache state.
Existing synchronous enrich_peer_metadata is unchanged — Phase B (#2790)
schema↔dispatcher drift gate + the negative-cache contract from PR
#2471 still apply. The nonblocking variant is purely additive.
1739 passed, 0 failed locally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Last open #2799 site. Pause's per-workspace stop call now routes
through StopWorkspaceAuto, removing the final inline if-cpProv-else
(actually if-h.provisioner) dispatch from workspace_restart.go's
restart/pause/resume code paths.
Pre-2026-05-05 the Pause loop was:
if h.provisioner != nil {
h.provisioner.Stop(ctx, ws.id)
}
Same drift class as #2813 (team-collapse leak) + #2814 (workspace
delete leak) — Docker-only stop silently no-ops on SaaS, leaving
the EC2 running while the workspace row gets marked paused. Orphan
sweeper would catch it eventually but the leak window is real.
Pause-specific bookkeeping (mark paused, clear workspace keys,
broadcast WORKSPACE_PAUSED) stays inline in the handler; only the
"stop the running workload" step delegates. StopWorkspaceAuto's
no-backend → no-op semantics match the pre-fix behavior on
misconfigured deployments (the bookkeeping still runs).
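The dispatcher's contract is small; a Python sketch of the Go helper's routing (backends are plain callables here for illustration):

```python
def stop_workspace_auto(ws_id, cp_provisioner=None, docker_provisioner=None):
    # CP backend first, Docker second. No backend configured -> no-op,
    # which matches pre-fix behavior on misconfigured deployments: the
    # handler's Pause bookkeeping still runs either way.
    if cp_provisioner is not None:
        return cp_provisioner(ws_id)
    if docker_provisioner is not None:
        return docker_provisioner(ws_id)
    return None
```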
One new source-level pin:
TestPauseHandler_UsesStopWorkspaceAuto — gates regression to the
inline dispatch shape.
This closes #2799 Phase 3. After this PR + #2847 (Phase 2 PR-B) land,
workspace_restart.go has no remaining inline if-cpProv-else dispatch
in any user-facing code path. The remaining direct backend calls
inside the file are in stopForRestart and cpStopWithRetry — both
internal helpers that ARE the dispatcher's underlying primitives,
not new bypasses.
Note: scope was originally tagged "Phase 3 needs PauseWorkspaceAuto
verb" in the audit on PR #2843. On closer reading Pause's stop step
is identical to Stop — only the bookkeeping is Pause-specific. Reusing
StopWorkspaceAuto avoids unnecessary surface and keeps the dispatcher
trio (provision/stop/restart) tight.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
runRestartCycle's auto-restart cycle (Site 4 from PR #2843's audit)
needs synchronous provision dispatch — the outer pending-flag loop
in RestartByID relies on the call returning only once the new container
is up, so the next restart cycle doesn't race the in-flight provision
goroutine on its Stop call.
Phase 1's provisionWorkspaceAuto wraps each per-backend body in
`go func() {...}()` — wrong shape for runRestartCycle's needs. This
PR introduces provisionWorkspaceAutoSync as a behavioral mirror that
runs in the current goroutine instead.
Two helpers, kept identical except for the wrapper:
provisionWorkspaceAuto: spawns goroutine, returns immediately
provisionWorkspaceAutoSync: blocks until per-backend body returns
Same backend-selection (CP first, Docker second) + no-backend
mark-failed fallback. When one grows a new arm (third backend, retry
semantics), the other should too — pinned in the docstring.
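The async/sync pair described above can be sketched like this — a hypothetical shape with a counter standing in as a test hook, not the real helpers' bodies:

```go
package main

import "sync/atomic"

// provisioner sketches the pair described above. provisionBody stands
// in for the shared per-backend logic (CP first, Docker second,
// no-backend -> mark failed).
type provisioner struct {
	completed atomic.Int64 // test hook: counts finished bodies
}

func (p *provisioner) provisionBody(id string) {
	// backend selection + mark-failed fallback would live here
	p.completed.Add(1)
}

// provisionWorkspaceAuto: fire-and-return, for HTTP handlers that must
// not block the request goroutine.
func (p *provisioner) provisionWorkspaceAuto(id string) {
	go p.provisionBody(id)
}

// provisionWorkspaceAutoSync: runs the same body in the current
// goroutine, so runRestartCycle's pending-flag loop only resumes once
// provisioning has finished.
func (p *provisioner) provisionWorkspaceAutoSync(id string) {
	p.provisionBody(id)
}
```

The only difference between the two is the `go` keyword, which is what the docstring pin guards: a new arm added to one body must be added to the other.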
Site 4 (runRestartCycle) is the only call site that needs sync today.
Migrating it removes the last bare if-cpProv-else dispatch in the
restart code path's provision half.
Three new tests:
- TestProvisionWorkspaceAutoSync_RoutesToCPWhenSet
- TestProvisionWorkspaceAutoSync_NoBackendMarksFailed
- TestRunRestartCycle_UsesProvisionWorkspaceAutoSync (source-level pin)
Out of scope (last open #2799 site):
Phase 3 — Site 5 (Pause loop). PAUSE doesn't reprovision; needs a
new PauseWorkspaceAuto verb. After this PR lands, Pause is the only
inline if-cpProv-else dispatch left in workspace_restart.go.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Sites 1+2 (Restart HTTP handler goroutine) and Site 3 (Resume HTTP
handler goroutine) now route through RestartWorkspaceAutoOpts /
provisionWorkspaceAuto instead of inlining the if-cpProv-else dispatch.
Three changes:
1. **RestartWorkspaceAutoOpts** — new variant of RestartWorkspaceAuto
that carries the resetClaudeSession Docker-only flag (issue #12).
The bare RestartWorkspaceAuto still exists as a wrapper that calls
Opts with false. CP path silently ignores the flag (each EC2 boots
fresh — no session state to clear). Mirrors the Provision pair
(provisionWorkspace / provisionWorkspaceOpts).
2. **Restart handler (Site 1+2)** — the inline goroutine
`if h.provisioner != nil { Stop } else if h.cpProv != nil { ... }`
collapses to `RestartWorkspaceAutoOpts(...)`. Pre-fix the dispatch
was Docker-FIRST ordering (a different drift class from the
silent-drop bugs PRs #2811/#2824 closed); the dispatcher enforces
CP-FIRST.
3. **Resume handler (Site 3)** — Resume is provision-only (workspace
is paused, no live container), so it routes through
provisionWorkspaceAuto, not RestartWorkspaceAuto. Inline
if-cpProv-else dispatch removed.
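The Opts-wrapper pattern in change 1 can be sketched as follows — illustrative names and return strings only, not the real handler code:

```go
package main

// restartOpts carries Docker-only knobs (the resetClaudeSession flag
// from issue #12); the CP path ignores them since each EC2 boots fresh.
type restartOpts struct {
	resetClaudeSession bool
}

type restarter struct {
	hasCP bool // stands in for h.cpProv != nil
}

// restartWorkspaceAutoOpts is the authoritative entry point; the
// dispatcher enforces CP-FIRST ordering.
func (r *restarter) restartWorkspaceAutoOpts(id string, opts restartOpts) string {
	if r.hasCP {
		return "cp-restart" // Docker-only flag silently ignored here
	}
	if opts.resetClaudeSession {
		return "docker-restart+session-reset"
	}
	return "docker-restart"
}

// restartWorkspaceAuto keeps the bare call shape for existing callers,
// delegating with zero-value opts — mirroring the
// provisionWorkspace / provisionWorkspaceOpts pair.
func (r *restarter) restartWorkspaceAuto(id string) string {
	return r.restartWorkspaceAutoOpts(id, restartOpts{})
}
```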
Two new source-level pins:
- TestRestartHandler_UsesRestartWorkspaceAuto
- TestResumeHandler_UsesProvisionWorkspaceAuto
These prevent regression to the inline dispatch pattern.
Out of scope (tracked under #2799):
- Site 4 (runRestartCycle) — synchronous coordination model needs
a different shape than the fire-and-return dispatchers. PR-B.
- Site 5 (Pause loop) — PAUSE doesn't reprovision, needs a new
PauseWorkspaceAuto verb. Phase 3.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Activates the server-side foundation that PRs #2832, #2836, #2837
shipped without wiring (each PR landed dead code on purpose so the
review surface stayed tight).
## What this PR wires up
1. router.go — registers the RFC #2829 PR-4 admin endpoints behind
AdminAuth:
GET /admin/delegations[?status=...&limit=N]
GET /admin/delegations/stats
2. cmd/server/main.go — starts the RFC #2829 PR-3 stuck-task
sweeper as a supervised goroutine alongside the existing
scheduler + hibernation-monitor + image-auto-refresh:
    go supervised.RunWithRecover(ctx, "delegation-sweeper",
        delegSweeper.Start)
## What this PR does NOT do
- PR-2's DELEGATION_RESULT_INBOX_PUSH flag stays default off — flip
happens via env config in a follow-up after staging burn-in.
- PR-5's DELEGATION_SYNC_VIA_INBOX flag stays default off — same
reason. The two flags are independent; either can be flipped in
isolation.
- Canvas operator panel UI: this PR exposes the JSON contract; the
canvas panel consumes it in a separate canvas PR.
## Coverage
2 new router gate tests in admin_delegations_route_test.go:
- List endpoint requires AdminAuth (unauthenticated → 401)
- Stats endpoint requires AdminAuth (unauthenticated → 401)
Pattern mirrors admin_test_token_route_test.go (the IDOR-fix gate
for PR #112). Catches a future router refactor that silently drops
AdminAuth — operator dashboard data exposes caller_id, callee_id, and
task_preview, none of which should reach unauthenticated callers.
Sweeper boots as a no-op until at least one delegation row exists,
so this PR is safe to land before PR-5's agent-side cutover sees
production traffic.
Refs RFC #2829.
Behind feature flag DELEGATION_SYNC_VIA_INBOX (default off). When set,
tool_delegate_task no longer holds an HTTP message/send connection
through the platform proxy waiting for the callee's reply. Instead:
1. POST /workspaces/<src>/delegate (returns 202 + delegation_id)
— platform's executeDelegation goroutine handles A2A dispatch
in the background. No client-side timeout dependency on the
platform holding a connection open.
2. Poll GET /workspaces/<src>/delegations every 3s for a row with
matching delegation_id reaching terminal status (completed/failed).
3. Return the response_preview text on completed; surface the
wrapped _A2A_ERROR_PREFIX error on failed (so caller error
detection stays unchanged).
This closes the bug class that broke Hongming's home hermes on
2026-05-05 ("message/send queued but result not available after 600s
timeout" while the callee was actively heartbeating "iteration 14/90").
## Compatibility
Default-off feature flag — flag-off path is byte-identical to the
legacy send_a2a_message behavior, pinned by
TestFlagOffLegacyPath::test_flag_off_uses_send_a2a_message_not_polling.
Idempotency-key derivation matches tool_delegate_task_async (SHA-256
of source:target:task) so a restart-mid-delegation gets the same key
and the platform returns the existing delegation_id.
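The key derivation can be sketched like this — the colon separator and hex encoding are assumptions for illustration; what matters is that the same inputs always yield the same key:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// idempotencyKey mirrors the derivation described above: SHA-256 over
// source:target:task. A restarted delegation derives the same key, so
// the platform returns the existing delegation_id instead of
// dispatching a duplicate.
func idempotencyKey(source, target, task string) string {
	sum := sha256.Sum256([]byte(source + ":" + target + ":" + task))
	return hex.EncodeToString(sum[:])
}
```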
## Recovery on timeout
If the polling budget (DELEGATION_TIMEOUT, default 300s) elapses
without a terminal status, the error message includes the
delegation_id + a "call check_task_status('<id>') to retrieve later"
hint. The platform's durable row is still live — work is NOT lost,
just the synchronous wait is over. Caller can poll for the result
later via the existing check_task_status tool.
## Stack with PR-2
PR-2 added the SERVER-SIDE result-push to the caller's a2a_receive
inbox row. PR-5 (this PR) adds the AGENT-SIDE cutover. Together they
remove the proxy-blocked sync path entirely. PR-2 default-off keeps
existing behavior; PR-5 default-off keeps existing behavior. Operators
flip both for full effect after staging burn-in.
## Coverage
9 unit tests:
- flag off → byte-identical to legacy (send_a2a_message called,
_delegate_sync_via_polling NOT called)
- dispatch HTTP exception → wrapped error
- dispatch non-2xx → wrapped error mentioning HTTP code
- dispatch missing delegation_id → wrapped error
- completed first poll → response_preview returned
- failed status → wrapped error with error_detail
- transient poll error → keeps polling, eventually succeeds
- deadline exceeded → wrapped timeout error mentions delegation_id +
check_task_status hint for recovery
- filters by delegation_id (other delegations' rows ignored)
All passing locally. CI will run the same suite on a clean env.
Refs RFC #2829.