Two docs covering load-bearing patterns from today's work that
weren't previously discoverable:
1. workspace/platform_tools/README.md — explains the ToolSpec
single-source-of-truth pattern (#2240), the CLI-block alignment
gap that hand-maintained generation can't close (#2258), the
snapshot golden files + LF-pinning (#2260), and the
add/rename/remove playbook. The next reader who lands in
workspace/platform_tools/ now has the design rationale + the
safe-edit procedure colocated with the code.
2. scripts/README.md — disambiguates the three
measure-coordinator-task-bounds.sh files that now exist across two repos:
- scripts/measure-coordinator-task-bounds.sh (canonical OSS, this repo)
- scripts/measure-coordinator-task-bounds-runner.sh (Hermes/MiniMax variant, this repo)
- scripts/measure-coordinator-task-bounds.sh (production-shape, in molecule-controlplane)
Cross-references reference_harness_pair_pattern (auto-memory) for
the cross-repo design rationale. Documents the common safety
pattern (cleanup trap, DRY_RUN, non-target guard,
cleanup_*_failed events) and the heartbeat-trace caveat.
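The shared safety pattern those scripts document can be sketched roughly as
follows. This is an illustrative stand-in, not the harness's actual code:
the function name, TARGET_PREFIX, and the event string are assumptions;
only the pattern (cleanup trap, DRY_RUN default, non-target guard,
cleanup_*_failed-style event) mirrors the README.

```shell
#!/usr/bin/env bash
# Illustrative sketch of the safety pattern; identifiers are stand-ins.
set -eu   # the real scripts typically also set pipefail under bash

DRY_RUN="${DRY_RUN:-1}"       # default to no-op; operator opts into mutation
TARGET_PREFIX="harness-"      # non-target guard: only touch harness-owned names

cleanup() {
  rc=$?
  if [ "$rc" -ne 0 ]; then
    # emit a structured cleanup_*_failed-style event so failures are greppable
    echo "cleanup_workspace_failed rc=$rc" >&2
  fi
}
trap cleanup EXIT

delete_workspace() {
  name="$1"
  case "$name" in
    "$TARGET_PREFIX"*) ;;     # harness-created: safe to delete
    *) echo "refusing non-target: $name" >&2; return 1 ;;
  esac
  if [ "$DRY_RUN" = "1" ]; then
    echo "DRY_RUN: would delete $name"
  else
    echo "deleting $name"     # the real teardown call would go here
  fi
}
```

The guard-before-mutate ordering is the point: the non-target check runs even
in DRY_RUN mode, so a bad name list fails loudly before anyone flips the flag.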
Refs: #2240, #2254, #2257, #2258, #2259, #2260; molecule-controlplane#321.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two follow-ups from the #2240 code review:
1. Snapshot tests for the rendered tool-instruction blocks. The
structural tests added in #2240 guarantee tool NAMES are present;
these new tests pin the SHAPE — bullet ordering, heading style,
footer placement — so a future contributor who reorders fields in
`_render_section` or rewrites a `when_to_use` paragraph sees the
diff in CI rather than shipping a silently-different system prompt.
Golden files live under workspace/tests/snapshots/.
2. CLI-block alignment test + corrected source-of-truth comment.
`_A2A_INSTRUCTIONS_CLI` is a separate hand-maintained surface for
ollama and other non-MCP runtimes — the registry can't auto-generate
it because the CLI subprocess interface uses different command
shapes (`peers` vs `list_peers`, etc.). A new
`_CLI_A2A_COMMAND_KEYWORDS` mapping declares the registry-tool →
CLI-keyword correspondence (or explicit `None` for tools not
exposed via subprocess). Two tests enforce coverage:
- every a2a tool in the registry is keyed in the mapping
- every non-None subcommand keyword literally appears in
`_A2A_INSTRUCTIONS_CLI`
Caught one real gap: `send_message_to_user` is in the registry but
has no CLI subcommand. Mapped to `None` with an explanatory comment.
The "no other source of truth" claim in registry.py's docstring
was wrong post-#2240 (the CLI block survived) — corrected to
describe the two surfaces explicitly and point at the alignment
tests as the gate.
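The two coverage tests can be sketched against stand-in data. The registry
contents, keyword values, and instruction text below are illustrative, not
the real surfaces; only the mapping-plus-two-assertions structure follows
the description above.

```python
# Hedged sketch of the CLI-alignment coverage tests; all data is a stand-in.
A2A_TOOL_NAMES = {"delegate_task", "list_peers", "send_message_to_user"}

_CLI_A2A_COMMAND_KEYWORDS = {
    "delegate_task": "delegate",
    "list_peers": "peers",
    "send_message_to_user": None,  # no CLI subcommand exists for this tool
}

_A2A_INSTRUCTIONS_CLI = """\
Use `python3 -m molecule_runtime.a2a_cli delegate <peer> <task>` to delegate,
or `python3 -m molecule_runtime.a2a_cli peers` to list reachable peers.
"""

def test_every_registry_tool_is_mapped():
    # every a2a tool in the registry is keyed (possibly to None)
    assert set(_CLI_A2A_COMMAND_KEYWORDS) == A2A_TOOL_NAMES

def test_every_cli_keyword_appears_in_instructions():
    # every non-None keyword literally appears in the CLI block
    for tool, keyword in _CLI_A2A_COMMAND_KEYWORDS.items():
        if keyword is not None:
            assert keyword in _A2A_INSTRUCTIONS_CLI, tool
```

The explicit `None` is what makes the mapping exhaustive: a tool missing
from the dict fails the first test, and a keyword that drifts out of the
hand-maintained text fails the second.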
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds structured `rfc2251_phase=...` log lines at the deterministic phase
boundaries inside route_task_to_team and check_task_status, so an
operator running scripts/measure-coordinator-task-bounds.sh against
staging can correlate the harness's external timing trace with what
phase the coordinator was in at any given second.
The harness already exists in staging and measures end-to-end response
time + heartbeat trace. What it CAN'T do without this PR is answer
"the coordinator response took 7 minutes — was it stuck delegating, or
stuck polling children, or stuck synthesizing after all children
returned?" The phase logs answer that question.
Phases instrumented (deterministic Python boundaries, no agent prompt
involvement):
route_start → enter route_task_to_team
children_fetched → after get_children() returns
routing_decided → after build_team_routing_payload
delegate_invoked → just before delegate_task_async.ainvoke
delegate_returned → after delegate_task_async returns
check_status → every check_task_status poll (per-poll)
route_returning_decision_only → fall-through path
Each line includes elapsed_ms from route_start, so per-phase durations
are extractable with a pipeline like
    grep rfc2251_phase= <container.log> | awk '{...}'
to compute deltas between consecutive phases.
The synthesis phase (after all children return, before agent emits
final A2A response) is NOT instrumented here because it's
agent-driven (no deterministic Python boundary). The harness operator
infers synthesis_secs = total_response_secs − max(check_status_ts).
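One way to turn the phase lines into per-phase durations, sketched under
an assumed line format (each log line carries `rfc2251_phase=<name>` and
`elapsed_ms=<int>` as key=value pairs; the parsing below is illustrative):

```python
# Hedged sketch: compute per-phase deltas from rfc2251_phase log lines.
import re

def phase_deltas(log_lines):
    """Return (phase, ms_spent_in_that_phase) for consecutive boundaries."""
    marks = []
    for line in log_lines:
        m = re.search(r"rfc2251_phase=(\S+).*?elapsed_ms=(\d+)", line)
        if m:
            marks.append((m.group(1), int(m.group(2))))
    # The time "in" a phase is the gap to the next boundary's elapsed_ms.
    return [
        (phase, nxt_ms - ms)
        for (phase, ms), (_, nxt_ms) in zip(marks, marks[1:])
    ]
```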
This is reproduction-harness scaffolding; it adds zero behavior. Strip
the rfc2251_phase log lines when V1.0 ships and the phase data lands
in the structured heartbeat payload instead.
Refs:
- RFC: molecule-core#2251
- Harness: scripts/measure-coordinator-task-bounds.sh (shipped earlier)
- V1.0 gate: this is deliverable #2 of the four pre-V1.0 gates
The PR-built wheel + import smoke gate refused the platform_tools
package because it's a new subdirectory under workspace/ that wasn't
in scripts/build_runtime_package.py:SUBPACKAGES. The drift gate (which
exists for exactly this reason) caught it cleanly:
error: SUBPACKAGES drifted from workspace/ subdirectories:
in workspace/ but NOT in SUBPACKAGES (will ship un-rewritten or
be excluded): ['platform_tools']
Adding platform_tools to SUBPACKAGES wires the package into the
runtime wheel + applies the canonical
from platform_tools.<x> -> from molecule_runtime.platform_tools.<x>
import-rewrite step that every other subpackage uses.
Verified locally: scripts/build_runtime_package.py succeeds, the
rewritten a2a_mcp_server.py reads
from molecule_runtime.platform_tools.registry import TOOLS
which matches the package layout in the wheel.
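The drift gate's check amounts to a set difference between the directories
on disk and the declared list. A sketch, with SUBPACKAGES contents and the
workspace layout as illustrative stand-ins for the build script's actual
data:

```python
# Hedged sketch of the SUBPACKAGES drift check; contents are stand-ins.
from pathlib import Path

SUBPACKAGES = ["builtin_tools", "platform_tools"]  # illustrative list

def check_subpackage_drift(workspace_dir: Path) -> list:
    """Return workspace/ subpackages that SUBPACKAGES doesn't declare."""
    on_disk = {
        p.name
        for p in workspace_dir.iterdir()
        if p.is_dir() and (p / "__init__.py").exists()
    }
    # Anything on disk but undeclared would ship un-rewritten or be excluded.
    return sorted(on_disk - set(SUBPACKAGES))
```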
Establishes workspace/platform_tools/registry.py as THE place tool
naming and docs live. Every consumer reads from it; nothing duplicates
the source. Closes the architectural gap behind the doc/tool drift
discussion 2026-04-28 — adding hundreds of future runtime SDK adapters
should not require touching tool names anywhere except the registry.
What the registry owns
ToolSpec dataclass with: name, short (one-line description), when_to_use
(multi-paragraph agent-facing usage guidance), input_schema (JSON Schema),
impl (the actual coroutine in a2a_tools.py), section ('a2a' | 'memory').
TOOLS list with 8 entries — delegate_task, delegate_task_async,
check_task_status, list_peers, get_workspace_info, send_message_to_user,
commit_memory, recall_memory.
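The ToolSpec shape described above might look roughly like this. The field
names follow the text; the types, the example entry, and the BY_NAME index
are illustrative assumptions:

```python
# Minimal sketch of the ToolSpec single-source-of-truth shape.
from dataclasses import dataclass
from typing import Any, Callable, Literal

@dataclass(frozen=True)
class ToolSpec:
    name: str                      # canonical tool name, e.g. "delegate_task"
    short: str                     # one-line description (becomes MCP description)
    when_to_use: str               # multi-paragraph agent-facing guidance
    input_schema: dict             # JSON Schema for the tool's arguments
    impl: Callable[..., Any]       # the actual coroutine in a2a_tools.py
    section: Literal["a2a", "memory"]

TOOLS = [
    ToolSpec(
        name="list_peers",
        short="List reachable peer workspaces.",
        when_to_use="Call before delegating to see which peers exist.",
        input_schema={"type": "object", "properties": {}},
        impl=lambda: None,         # placeholder for the real coroutine
        section="a2a",
    ),
]

BY_NAME = {spec.name: spec for spec in TOOLS}
```

Every consumer then derives its surface from TOOLS, so adding a tool is one
ToolSpec entry and zero duplicated strings.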
What now reads from the registry
- workspace/a2a_mcp_server.py
The hardcoded TOOLS list (167 lines of hand-maintained dicts) is
gone. Replaced with a 6-line list comprehension over the registry.
MCP description = spec.short. inputSchema = spec.input_schema.
- workspace/executor_helpers.py
get_a2a_instructions(mcp=True) and get_hma_instructions() now
GENERATE the agent-facing system-prompt text from the registry.
Heading + per-tool bullet (spec.short) + per-tool when_to_use +
a section-specific footer. No more hand-maintained instruction
blocks that drift from reality.
- workspace/builtin_tools/delegation.py
Renamed delegate_to_workspace -> delegate_task_async to match
registry. check_delegation_status -> check_task_status. Added
sync delegate_task @tool wrapping a2a_tools.tool_delegate_task
(was missing for LangChain runtimes — CP review Issue 3).
- workspace/builtin_tools/memory.py
Renamed search_memory -> recall_memory to match registry.
- workspace/adapter_base.py, workspace/main.py
Bundle all 7 core tools (was 6) into all_tools / base_tools.
- workspace/coordinator.py, shared_runtime.py, policies/routing.py
Updated system-prompt-text references to use the registry names.
Structural alignment tests
workspace/tests/test_platform_tools.py — 9 tests pin every
registry-to-adapter mapping:
- registry names are unique
- a2a + memory partition is complete (no orphans)
- by_name lookup works
- MCP server registers exactly the registry's tool set
- MCP description equals registry.short for every tool
- MCP inputSchema equals registry.input_schema for every tool
- get_a2a_instructions text contains every a2a tool name
- get_hma_instructions text contains every memory tool name
- pre-rename names (delegate_to_workspace, search_memory,
check_delegation_status) cannot leak back
Adding a future tool means adding one ToolSpec; the test failure
list tells the author exactly which adapter to update.
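A couple of the alignment checks can be sketched against a toy registry.
Everything below is a stand-in (the real tests read the actual MCP server's
registration table); only the equality-per-tool structure mirrors the list
above:

```python
# Hedged sketch of the MCP-alignment tests; data is an in-memory stand-in.
registry = {
    "delegate_task": {"short": "Delegate a task to a peer.",
                      "schema": {"type": "object"}},
    "list_peers": {"short": "List reachable peers.",
                   "schema": {"type": "object"}},
}
# Simulates what the MCP server registered; drift would break the asserts.
mcp_registered = {
    name: {"description": meta["short"], "inputSchema": meta["schema"]}
    for name, meta in registry.items()
}

def test_mcp_matches_registry():
    # exactly the registry's tool set, no extras, no omissions
    assert set(mcp_registered) == set(registry)
    for name, meta in registry.items():
        assert mcp_registered[name]["description"] == meta["short"]
        assert mcp_registered[name]["inputSchema"] == meta["schema"]
```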
Adapter pattern for future SDK support
When (e.g.) AutoGen or Pydantic AI gets adapters, the only work
needed for tool surfacing is "wrap registry.TOOLS in your SDK's
tool format." Names, descriptions, schemas, impl come from the
registry — adapter author writes zero strings.
Why this needed to ship now
PR #2237 (already in staging) injected MCP-world docs as the
default system-prompt content. Without the registry, those docs
said "delegate_task" while LangChain runtimes only had
"delegate_to_workspace" — workers see docs for tools that don't
exist (CP review Issue 1+3). PR #2239 was a tactical rename;
this PR is the structural fix that prevents the same class of
drift from recurring as new adapters ship.
PR #2239 was closed in favor of this — same renames, plus the
registry, plus structural tests. Single coherent change.
Tests: 1232 pass, 2 xfailed (pre-existing). 9 new in
test_platform_tools.py; 4 alignment tests in test_prompt.py from
#2237 still pass; original test_executor_helpers tests adapted to
the registry-driven world.
Refs: CP review Issues 1, 2, 3, 5; project memory
project_runtime_native_pluggable.md (platform owns A2A);
project memory feedback_doc_tool_alignment.md (this is the structural
fix for the tactical lesson).
Workers were registering platform tools (delegate_task, delegate_task_async,
list_peers, check_task_status, send_message_to_user, commit_memory,
recall_memory) but the build_system_prompt assembly never included
documentation for any of them. The instruction-text functions
get_a2a_instructions() and get_hma_instructions() exist in
executor_helpers.py and have unit tests, but were not called from any
production code path — workers received system-prompt.md content only
and saw the tools as bare names with no usage guidance.
Symptom: agents called commit_memory and delegate_task without knowing
they were platform tools. They worked when the agent guessed the API
correctly and silently failed when the agent didn't.
Fix: build_system_prompt() now appends both instruction sets between
the Skills section and the Peers section. The placement is intentional —
A2A docs explain how to call delegate_task; the peer list is the data
that delegate_task operates over, so the docs precede the peer table.
New parameter `a2a_mcp: bool = True` lets adapters opt into the CLI
subprocess variant of the A2A instructions for runtimes without MCP
support (ollama, custom CLI runtimes). Default True covers the
MCP-capable majority (claude-code, hermes, langchain, crewai). Adapter
callers don't need to change unless they specifically need CLI mode.
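The assembly order can be sketched as follows. The section bodies are
placeholders and the function signature is simplified; only the ordering
logic (A2A docs between Skills and Peers) and the a2a_mcp switch mirror the
description above:

```python
# Hedged sketch of the prompt assembly ordering; contents are placeholders.
def build_system_prompt(skills: str, peers: str, a2a_mcp: bool = True) -> str:
    a2a_docs = (
        "## Inter-agent tools (MCP variant)" if a2a_mcp
        else "## Inter-agent tools (CLI variant)"
    )
    hma_docs = "## Memory tools"
    # A2A docs precede the peer table: the docs explain how to call
    # delegate_task, and the peer list is the data it operates over.
    return "\n\n".join([skills, a2a_docs, hma_docs, peers])
```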
Tests: 4 new regression tests in test_prompt.py pin
- A2A MCP variant injection (default)
- A2A CLI variant injection (a2a_mcp=False, with MCP-only fields absent)
- HMA instruction injection
- A2A docs precede peer list ordering
Full suite green: 1223 passed, 2 xfailed.
Consolidates 11 of the 17 open Dependabot PRs (#2215, #2217, #2219-#2225,
#2227, #2229) into one PR. Every entry is a patch / minor / floor bump
where the impact surface is small and CI carries the proof.
Same pattern as the 2026-04-15 batch.
Go (workspace-server/go.mod + go.sum, regenerated via `go mod tidy`):
- golang.org/x/crypto 0.49.0 → 0.50.0 (#2225)
- github.com/golang-jwt/jwt/v5 5.2.2 → 5.3.1 (#2222)
- github.com/gin-contrib/cors 1.7.2 → 1.7.7 (#2220)
- github.com/docker/go-connections 0.6.0 → 0.7.0 (#2223)
- github.com/redis/go-redis/v9 9.7.3 → 9.19.0 (#2217)
Python floor bumps (workspace/requirements.txt; current pip-resolved
versions don't change unless they happen to be below the new floor):
- httpx >=0.27 → >=0.28.1 (#2221)
- uvicorn >=0.30 → >=0.46 (#2229)
- temporalio >=1.7 → >=1.26 (#2227)
- websockets >=12 → >=16 (#2224)
- opentelemetry-sdk >=1.24 → >=1.41.1 (#2219)
GitHub Actions (SHA-pinned per existing convention):
- dorny/paths-filter@d1c1ffe (v3) → @fbd0ab8 (v4.0.1) (#2215)
REMOVED from this batch (lockfile platform mismatch):
- #2231 @types/node ^22 → ^25.6 (npm install on macOS strips the
  Linux-only @emnapi/* entries from package-lock.json, producing a
  lockfile that CI's `npm ci` then refuses; needs a Linux-side
  install to land cleanly)
- #2230 jsdom ^25 → ^29.1 (same)
NOT included in this batch (deferred to per-PR human review):
- #2228 github/codeql-action v3 → v4 (CodeQL CLI alignment risk)
- #2218 actions/setup-node v4 → v6 (default Node version drift)
- #2216 actions/upload-artifact v4 → v7 (3 major versions)
- #2214 actions/setup-python v5 → v6 (action major)
NOT merged (CI failing on dependabot's own PR):
- #2233 next 15 → 16
- #2232 tailwindcss 3 → 4
- #2226 typescript 5 → 6
Verified:
- workspace-server: `go mod tidy && go build ./... && go test ./...` — green
- workspace requirements.txt: floor bumps only
The previous assertion `'Silent Agent' not in result` was pinning
the buggy behavior — peers without an agent_card were silently
dropped from the prompt. With the fallback to DB name+role those
peers are correctly visible. Flip the assertion so the test pins
the new (correct) rendering and would catch a regression to the
silent-drop behavior.
Bug: a Design Director coordinator with 6 freshly-created worker peers
rendered an empty `## Your Peers` section in its system prompt — the
hosting registry endpoint correctly returned all 6 peers, but
`summarize_peer_cards()` silently dropped every entry whose
`agent_card` column was null (the default until A2A discovery has
run end-to-end against the worker). The coordinator then refused to
delegate any task because "no peers exist".
Fix: fall back to the registry row's `name` and `role` columns when
`agent_card` is missing, malformed, or wrong-typed, instead of
skipping the peer. The registry endpoint
(`workspace-server/internal/handlers/discovery.go:queryPeerMaps`) has
always returned both fields — they were just being thrown away on
the consumer side. `build_peer_section()` now renders `Role: …` when
the agent_card-derived skill list is empty so the coordinator's
prompt still has something concrete to delegate against.
Also hoists `import json` out of the per-peer loop body to module
level (was previously imported once per iteration).
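The fallback shape can be sketched as follows. The row dict, rendering
strings, and function name are illustrative, not shared_runtime.py's actual
code; only the never-drop fallback logic mirrors the fix:

```python
# Hedged sketch of the peer-summary fallback; renderer is a stand-in.
import json

def summarize_peer(row):
    """Render one peer line; fall back to DB name + role, never drop."""
    card = row.get("agent_card")
    if isinstance(card, str):
        try:
            card = json.loads(card)   # malformed JSON falls through to fallback
        except ValueError:
            card = None
    if isinstance(card, dict) and card.get("skills"):
        name = card.get("name") or row["name"]
        return f"{name}: skills = {', '.join(card['skills'])}"
    # null / malformed / wrong-typed agent_card: render something concrete
    # so the coordinator still has a target to delegate against.
    return f"{row['name']} | Role: {row.get('role') or 'unknown'}"
```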
Tests: new `test_shared_runtime_peer_summary.py` pins all four
fallback cases (null / malformed string / wrong type / null + no
DB name) plus the agent-card-present happy path and the mixed-list
case the coordinator actually consumes. This is the first peer-summary
test coverage `shared_runtime.py` has had; no prior tests existed.
Refs: 2026-04-27 Design Director discovery report from infra team.
The initial-prompt readiness probe in workspace/main.py hardcoded the
pre-1.x well-known path. After the a2a-sdk 1.x bump the SDK started
mounting the agent card at the new canonical path (the value of
`a2a.utils.constants.AGENT_CARD_WELL_KNOWN_PATH`), so the probe
returned 404 every attempt and silently fell through to "server not
ready after 30s, skipping". Net effect: every workspace silently
dropped its `initial_prompt` from config.yaml — the agent never sent
the kickoff self-message, and users hit a fresh chat with no context.
Reported by an external user as "/.well-known/agent.json 404 — the
a2a-sdk agent card route was not being mounted at the expected path".
The route IS mounted; the probe was looking in the wrong place.
Fix imports `AGENT_CARD_WELL_KNOWN_PATH` from `a2a.utils.constants`
and uses it directly in the probe URL — the SDK constant is now the
single source of truth, so any future rename travels through
automatically.
Adds two static regression tests pinning the invariant:
1. No hardcoded `/.well-known/agent.json` literal anywhere in
main.py.
2. The probe URL fstring interpolates AGENT_CARD_WELL_KNOWN_PATH
(catches a "fix" that imports the constant for show but reverts
to a literal in the actual GET).
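The probe shape and the two static checks can be sketched like this. The
constant value is hardcoded as a stand-in for importing
a2a.utils.constants.AGENT_CARD_WELL_KNOWN_PATH, and the check functions are
illustrative rather than the actual test code:

```python
# Hedged sketch of the probe URL and the two static regression checks.
from urllib.parse import urljoin

AGENT_CARD_WELL_KNOWN_PATH = "/.well-known/agent-card.json"  # stand-in value

def probe_url(base: str) -> str:
    # Interpolate the SDK constant so a future rename travels automatically.
    return urljoin(base, AGENT_CARD_WELL_KNOWN_PATH)

def check_no_hardcoded_legacy_path(main_py_source: str) -> bool:
    """Check 1: no pre-1.x literal anywhere in the source."""
    return "/.well-known/agent.json" not in main_py_source

def check_probe_uses_constant(main_py_source: str) -> bool:
    """Check 2: the probe f-string actually interpolates the constant."""
    return "{AGENT_CARD_WELL_KNOWN_PATH}" in main_py_source
```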
Verified manually inside ghcr.io/molecule-ai/workspace-template-langgraph
that AGENT_CARD_WELL_KNOWN_PATH == '/.well-known/agent-card.json' and
that `create_agent_card_routes(card)` mounts at exactly that path —
constant + mount are aligned in the runtime image, so the probe will
now find the server.
Full workspace test suite: 1209 passed, 2 xfailed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three different intermittent failures observed during a single
manual-test session — RemoteProtocolError, ReadTimeout, ConnectError —
each surfaced as a "Failed to deliver to <peer>" error chip in the
canvas Agent Comms panel even though the next attempt would have
succeeded (verified by direct probes from the same source workspace
to the same peer). The error message even told the user "Usually a
transient network blip — retry once," but it left the retry to a
human reading the error message.
Fix: auto-retry inside send_a2a_message itself, up to 5 attempts (1
initial + 4 retries) with exponential backoff (1s, 2s, 4s, 8s,
16s-capped), each backoff jittered ±25% to break sync across
siblings. Cumulative wall-clock capped at 600s by
_DELEGATE_TOTAL_BUDGET_S so a string of 5×300s ReadTimeouts can't
make the caller wait 25 minutes — once the deadline elapses, retries
stop even if attempts remain.
Retry only on transport-layer transients:
- ConnectError / ConnectTimeout (peer's listening socket not ready)
- RemoteProtocolError (peer closed TCP without writing — observed
when a peer's prior in-flight Claude SDK session aborted)
- ReadError / WriteError (network blip on Docker bridge)
- ReadTimeout (peer wrote no response in 300s)
Application-level errors are NOT retried — they're deterministic and
retrying just wastes wall-clock:
- HTTP 4xx (peer rejected the request format)
- JSON parse failures (peer returned garbage)
- JSON-RPC error in response body (peer's runtime errored cleanly)
- Programmer-bug exceptions (ValueError, etc.)
8 new tests pin the contract:
- retry succeeds after 2 RemoteProtocolErrors
- retry succeeds after 1 ConnectError
- all 5 attempts fail → returns formatted last-error
- capped at exactly _DELEGATE_MAX_ATTEMPTS (regression cover for
"did someone bump the constant accidentally?")
- JSON-RPC error response NOT retried (1 attempt only)
- non-httpx exception NOT retried (programmer bugs stay loud)
- total budget caps the loop even if attempts remain
- backoff schedule grows exponentially with ±25% jitter
Refactor: extracted _format_a2a_error() so the success and exhausted
paths share one error-formatting routine. _delegate_backoff_seconds()
is a pure function so the schedule is unit-testable without monkey-
patching asyncio.sleep.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Manual-test failure surfaced what was hidden behind the MCP-path bug:
once delegate_task could actually fire, every cross-workspace call
came back as JSON-RPC -32600 "Invalid Request" with the underlying
pydantic ValidationError:
params.message.role
Input should be 'agent' or 'user' [type=enum,
input_value='ROLE_USER', input_type=str]
PR #2184's a2a-sdk 1.x migration sweep over-corrected: it changed
every `"role": "user"` literal in JSON-RPC payload construction to
`"role": "ROLE_USER"` to match the protobuf enum names of the 1.x
native types (a2a.types.Role.ROLE_USER / ROLE_AGENT). That was
correct for in-process Message construction (which the SDK
serialises before wire transmission) but WRONG for the 8 sites that
hand-build JSON-RPC payloads. The workspace's own a2a-sdk runs
inbound requests through the v0.3 compat adapter
(/usr/local/lib/python3.11/site-packages/a2a/compat/v0_3/) because
main.py sets enable_v0_3_compat=True for backwards compatibility,
and that adapter validates against the v0.3 Pydantic Role enum
(`agent` | `user` lowercase). The protobuf-style names blow it up.
Reverted the 8 wire-payload sites to lowercase:
- workspace/a2a_client.py:74
- workspace/a2a_cli.py:74, 111
- workspace/heartbeat.py:378
- workspace/main.py:464, 563
- workspace/builtin_tools/a2a_tools.py:60
- workspace/builtin_tools/delegation.py:272
Native-type usage at workspace/a2a_executor.py:471 (`Role.ROLE_AGENT`)
stays — that's an in-process Message construction; the SDK handles
wire serialisation correctly.
Updated the misleading comment at main.py:255-257 (which said
"outbound payloads are now 1.x-shaped (ROLE_USER)") to spell out
the actual rule: outbound JSON-RPC wire payloads MUST use v0.3
shape, native types are only for in-process construction.
New regression test test_jsonrpc_wire_role_format.py greps the 6
wire-payload-emitting files for any "ROLE_USER" / "ROLE_AGENT"
string literal and fails loud — cheapest possible drift detector.
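The detector's logic can be sketched over an in-memory stand-in (the real
test reads the 6 wire-payload files from disk; the file list and scanning
shape here are illustrative):

```python
# Hedged sketch of the ROLE_* drift detector; scans a stand-in mapping.
def find_proto_role_literals(sources):
    """Return the files that contain a protobuf-style role literal."""
    return sorted(
        name for name, text in sources.items()
        if '"ROLE_USER"' in text or '"ROLE_AGENT"' in text
    )
```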
Why E2E missed it: the priority-runtimes harness sends a single
message canvas → workspace, but the canvas already used lowercase
"user" (it never went through the migration sweep). The bug only
surfaces on workspace → workspace delegation, which the harness
doesn't exercise. Same gap as #131 (extend smoke to call main()
against a stub).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-existing test_set_status_exception_prints_to_stderr asserted on the
legacy "molecule-monorepo-status: failed to update" prefix string. The
prior commit renamed it to "molecule_ai_status: failed to update" so
the printed label matches the canonical module-form invocation
(`python3 -m molecule_runtime.molecule_ai_status`) instead of a shell
alias that only ever existed in the dev-only base image. Updating the
expected substring in lockstep.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Comprehensive sweep follow-up to the MCP server path fix. Audited every
/app/ reference in the runtime source against the live claude-code
template image and confirmed the actual /app/ contents post-#87 are
ONLY: __init__.py, adapter.py, claude_sdk_executor.py, requirements.txt
— every other workspace module ships in the wheel under
site-packages/molecule_runtime/. Two more leaks found:
1. executor_helpers.py:_A2A_INSTRUCTIONS_CLI — inter-agent system prompt
for non-MCP runtimes (Ollama, custom) had 5 lines telling the model
`python3 /app/a2a_cli.py X`. Models copy these examples verbatim, so
every CLI-runtime delegation would fail at the shell layer (no such
file). Replaced with `python3 -m molecule_runtime.a2a_cli` form,
which works regardless of where the wheel is installed.
2. molecule_ai_status.py docstring — usage examples invoked
`python3 /app/molecule_ai_status.py` and claimed a
`molecule-monorepo-status` shell alias. Both broken in current
templates: the file's at site-packages, and `which
molecule-monorepo-status` errors (the legacy symlink only existed
in the dev-only workspace/Dockerfile base image, not in the
standalone template Dockerfiles that ship to production).
Updated docstring + the __main__ usage banner + the stderr error
prefix to use the same `python3 -m molecule_runtime.X` form.
Plugins audited and clean: WORKSPACE_PLUGINS_DIR=/configs/plugins,
SHARED_PLUGINS_DIR=$PLUGINS_DIR fallback /plugins. No /app/
assumptions.
Regression test: `test_a2a_cli_instructions_use_module_invocation_not_legacy_app_path`
asserts the legacy /app/a2a_cli.py path can't drift back into the CLI
system prompt and that the canonical module form is present.
The legacy workspace/Dockerfile + workspace/entrypoint.sh + workspace/scripts/
still contain /app/-shaped paths but are dev-only base-image scaffolding
(per workspace/build-all.sh's own header comment) — not shipped to the
standalone template images. Out of scope here; can be cleaned up in a
separate dead-code pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
DEFAULT_MCP_SERVER_PATH was hardcoded to /app/a2a_mcp_server.py, which
was correct under the pre-#87 monolithic-template Docker layout where
the workspace/ tree was COPY'd into /app/. After the universal-runtime
refactor (#87, #117), workspace modules ship inside the
molecule-ai-workspace-runtime wheel under
site-packages/molecule_runtime/, while /app/ now holds only
template-specific files (adapter.py + the runtime-native executor for
that template).
Net effect: in every workspace built since the wheel cutover, Claude
Code SDK's mcp_servers={"a2a": {"command": python, "args":
["/app/a2a_mcp_server.py"]}} pointed at a missing file. The subprocess
launch failed silently, the SDK registered zero MCP tools, and the
agent's list_peers / delegate_task / a2a_send_message / a2a_send_signal
all disappeared. Symptom observed today: Design Director said
"I tried to reach the perf auditor via the inter-agent MCP tools
(list_peers, delegate_task) but those tools didn't resolve in this
environment" and fell back to running the audit itself with WebFetch.
Why this slipped through E2E: the priority-runtimes harness sends a
single message and verifies a reply — it does not exercise inter-agent
delegation, so the missing MCP tools are invisible at that layer.
Fix: resolve the path relative to executor_helpers.py via __file__,
which tracks wherever the wheel is installed (site-packages today,
anywhere else tomorrow). The A2A_MCP_SERVER_PATH env override is
preserved for tests / non-default layouts.
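The shape of the fix can be sketched like this. The helper signature is
illustrative (the real code computes the default at module level from its
own `__file__`); only the resolve-relative-to-the-module-with-env-override
logic mirrors the description:

```python
# Hedged sketch of __file__-relative path resolution with env override.
import os

def default_mcp_server_path(module_file: str, env) -> str:
    """Resolve a2a_mcp_server.py next to the given module, unless overridden."""
    return env.get(
        "A2A_MCP_SERVER_PATH",   # override preserved for tests / odd layouts
        os.path.join(os.path.dirname(os.path.abspath(module_file)),
                     "a2a_mcp_server.py"),
    )
```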
Regression test: assert os.path.exists(DEFAULT_MCP_SERVER_PATH) so
any future move of a2a_mcp_server.py out of the package directory
fails at unit-test time instead of silently disabling delegation in
production.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Audited every a2a-sdk surface in workspace/ against the installed
1.0.2 wheel. Found and fixed:
main.py (the live workspace startup path):
• create_jsonrpc_routes(rpc_url='/', enable_v0_3_compat=True) —
rpc_url required in 1.x; v0.3 compat enables inbound legacy
clients (`"role": "user"` lowercase) without forcing them to
upgrade. Pairs with the outbound rename below.
a2a_executor.py:
• TextPart/FilePart/FileWithUri removed in 1.x. Part is now a
flat proto message: Part(text=…) / Part(url=…, filename=…,
media_type=…). Updated the file-attachment branch (only
reachable when an agent emits files; the harness's PONG path
didn't exercise this, but it's a latent crash).
• Message field names: messageId/taskId/contextId →
message_id/task_id/context_id (proto3 snake_case).
• Role enum: Role.agent → Role.ROLE_AGENT (proto enum).
Outbound JSON-RPC payloads (8 sites across 6 files):
• "role": "user" → "role": "ROLE_USER" — proto3 JSON serialization
is strict about enum values. Sites: a2a_client, a2a_cli, main
(initial+idle prompts), heartbeat, builtin_tools/a2a_tools,
builtin_tools/delegation. Wire JSON keys stay camelCase
(proto3 default), only the role enum value changed.
google-adk/adapter.py:
• new_agent_text_message → new_text_message (4 sites). This
adapter's directory has a hyphen, so it can't be imported as a
Python module — effectively dead code, but the wheel ships the
file and a future fix should keep it correct against 1.x.
Why one PR instead of seven: every previous a2a-sdk migration find
landed as its own publish → cascade → harness → next-bug cycle.
Today's audit ran every a2a-sdk symbol/type/method in workspace/
against the installed 1.0.2 wheel in a single sweep + tested the
critical paths (Message construction, Part construction, Role enum
parsing) against the actual SDK. Should be the last migration PR.
Verified locally:
python3 scripts/build_runtime_package.py --version 0.1.99 \
--out /tmp/build-final
pip install /tmp/build-final
python -c "import molecule_runtime.main; \
from molecule_runtime.a2a_executor import LangGraphA2AExecutor"
→ ✓ all imports clean against a2a-sdk 1.0.2
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
7th a2a-sdk migration find from the v0 → v1 transition.
create_jsonrpc_routes() now requires rpc_url as a positional arg
(was implicit at root in 0.x). Pass '/' to match
a2a.utils.constants.DEFAULT_RPC_URL — that's also what
workspace-server's a2a_proxy.go forwards to (POSTs to workspace URL
without appending a path).
Symptom before fix: every workspace startup crashed with
TypeError: create_jsonrpc_routes() missing 1 required positional
argument: 'rpc_url'
Caught by harness 9 phase 4 (claude-code + langgraph both on
0.1.24). The user's "use langgraph for fast iteration" call cut
the diagnose cycle from 15min to ~30s — without that, this would
have taken another hermes round-trip to surface.
Updated reference_a2a_sdk_v0_to_v1_migration.md memory with this
entry alongside the previous 6 finds.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
a2a-sdk 1.x added agent_card as a required argument to
DefaultRequestHandler.__init__. main.py constructed it with only
agent_executor + task_store, so every workspace startup that reached
the handler init step crashed with:
TypeError: DefaultRequestHandlerV2.__init__() missing 1 required
positional argument: 'agent_card'
This is the 6th a2a-sdk migration find from the v0 → v1 transition
(see reference_a2a_sdk_v0_to_v1_migration memory). Pattern is the
same: SDK exposes a new required arg, our call site needs to pass
the existing object we already construct upstream.
Why the import-only smoke gates didn't catch this: it's a call-time
constructor error inside `async def main()`, not a module load
error. The runtime-pin-compat smoke imports main_sync but doesn't
invoke main() against a real config. Worth filing a follow-up to
extend the smoke to a "construct + dispose" cycle.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CRITICAL: every workspace boot since the a2a-sdk 1.0 migration (#1974)
has been crashing at AgentCard construction with:
ValueError: Protocol message AgentCard has no "supported_protocols" field
The protobuf field is `supported_interfaces` (plural, interfaces — see
a2a-sdk types/a2a_pb2.pyi:189). The 0.3→1.0 migration left the kwarg
as `supported_protocols`, which doesn't exist in the 1.0 schema, so
the constructor raises before any subsequent line of main runs.
Why this hid for so long:
- publish-runtime.yml's smoke step only IMPORTED molecule_runtime.main;
importing the module is fine, only CONSTRUCTING the AgentCard fails
- The user-visible symptom is "Workspace failed: " with empty
last_sample_error, indistinguishable from generic boot timeouts
- The state_transition_history=True bug (fixed in #2179) was a
sibling of this — same migration, same class, just caught first
Fix is symmetric with #2179:
1. workspace/main.py: rename the kwarg + comment explaining why
2. .github/workflows/publish-runtime.yml: extend the smoke block to
instantiate AgentCard with the exact production call shape, so
the next field-rename of this class fails at publish time
instead of breaking every workspace startup
Verification:
- Constructed AgentCard against fresh a2a-sdk 1.0.2 in a clean
venv with the corrected kwarg → succeeds
- Constructed it with the original `supported_protocols` kwarg →
fails immediately with the exact error production sees
- Smoke test pinned to mirror main.py's exact call shape; main.py
+ smoke must stay in lockstep going forward
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a comment block citing a2a-sdk's own
a2a/compat/v0_3/conversions.py, which says verbatim:
state_transition_history=None, # No longer supported in v1.0
So a future reader who notices the missing kwarg won't try to add it
back. The capability is now universal: every v1.x Task carries a
history list and tasks/get supports historyLength via the
apply_history_length helper. No flag because nothing's optional.
Confirmed by reading the SDK source directly:
- a2a/types.py AgentCapabilities exposes only: streaming,
push_notifications, extensions, extended_agent_card.
- a2a/compat/v0_3/conversions.py explicitly maps None when
down-converting v1 → v0.3 (deliberate removal, not rename).
- a2a/server/request_handlers/default_request_handler_v2.py uses
apply_history_length(task, params) — agent doesn't opt in.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
a2a-sdk 1.x's AgentCapabilities only exposes 4 fields:
streaming, push_notifications, extensions, extended_agent_card.
The state_transition_history field was removed in the v1 protobuf
schema. main.py still passed it as a kwarg, so every workspace
that reached the AgentCard construction step (line 188) crashed:
ValueError: Protocol message AgentCapabilities has no
"state_transition_history" field
Symptom: every claude-code + hermes workspace stuck in `provisioning`
forever — caught when the user provisioned a Design Director crew
manually via the canvas while harness 5 was running.
Why every prior smoke gate missed it:
- runtime-pin-compat.yml smokes `from molecule_runtime.main import
main_sync` — only imports the module. AgentCapabilities() runs
inside `async def main()`, not at module load.
- Template image boot smoke does `import every /app/*.py` — same
story. main.py imports fine; the field error only fires at call.
The fix is one line — drop the kwarg. Fields we actually need
(streaming + push_notifications) are still passed.
Follow-up worth filing: smoke step that instantiates Adapter() +
calls a no-op setup() against a stub config. That would have
caught this before publish.
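The gap can be reproduced with a toy stand-in (assumed shapes, not the real SDK): the bad kwarg only fires when the constructor RUNS, so an import-level smoke stays green while the call-shape smoke catches it.

```python
# Toy reproduction of why import-only smoke gates miss call-time kwarg
# errors. ALLOWED mirrors the four fields a2a-sdk 1.x actually exposes;
# make_capabilities is a hypothetical stand-in for the constructor.
ALLOWED = {"streaming", "push_notifications", "extensions",
           "extended_agent_card"}

def make_capabilities(**kwargs):
    unknown = set(kwargs) - ALLOWED
    if unknown:
        # Mirrors the production error shape
        raise ValueError(
            f'Protocol message AgentCapabilities has no "{unknown.pop()}" field'
        )
    return dict(kwargs)

# Importing a module that merely DEFINES this call would succeed; only
# exercising the exact production kwargs surfaces the removed field.
try:
    make_capabilities(streaming=True, push_notifications=True,
                      state_transition_history=True)
    caught = False
except ValueError:
    caught = True
```

This is the shape the proposed follow-up smoke step (instantiate + no-op setup against a stub config) would exercise.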
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The conftest mock only exposed `new_agent_text_message`, the pre-v1
name. After fixing a2a_executor.py to use the v1 name
`new_text_message`, the mock didn't satisfy the import → CI red.
Mock both names (aliased to the same lambda) so any in-flight test
that still references the old name keeps working until the next
sweep removes those references.
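A minimal sketch of the dual-name mock (the exact conftest fixture shape is an assumption; the module path `a2a.helpers` comes from the error message quoted below):

```python
import sys
import types

# Both the pre-v1 and v1 helper names alias the SAME stub object, so any
# in-flight test referencing either name resolves during the transition.
def _stub(text, **kwargs):
    return {"kind": "text", "text": text}

helpers = types.ModuleType("a2a.helpers")
helpers.new_text_message = _stub          # v1 name
helpers.new_agent_text_message = _stub    # pre-v1 alias, same object
pkg = types.ModuleType("a2a")
pkg.helpers = helpers
sys.modules["a2a"] = pkg
sys.modules["a2a.helpers"] = helpers

from a2a.helpers import new_agent_text_message, new_text_message
assert new_agent_text_message is new_text_message
```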
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
a2a-sdk v1 renamed `new_agent_text_message` → `new_text_message`
(role=Role.agent is now the default). Same fix landed in the hermes
template earlier today; this is the runtime-side equivalent.
NOT dead code: a2a_executor.py is the LangGraph A2A executor, used by
the langgraph + deepagents templates. Both templates currently import
it via bare `from a2a_executor import LangGraphA2AExecutor` — which is
a separate bug in those templates, filed/fixed separately.
Symptom in a2a_executor.py form: any langgraph or deepagents workspace
that calls create_executor crashes with `ImportError: cannot import
name 'new_agent_text_message' from 'a2a.helpers'`. Doesn't surface for
claude-code or hermes (their templates use their own executors and
don't load a2a_executor).
Five call sites updated, one import line, one comment. Test suite
already passes against the new symbol — `python -c "from
molecule_runtime.a2a_executor import LangGraphA2AExecutor"` resolves
cleanly after this change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The wheel's pyproject.toml has declared
`molecule-runtime = "molecule_runtime.main:main_sync"` since the
publish pipeline was created on 2026-04-26, but the function
itself was never present in workspace/main.py — it lived in the
pre-monorepo molecule-ai-workspace-runtime repo and was lost
during the consolidation that made workspace/ the source of truth.
The 0.1.15 wheel still had main_sync from a leftover snapshot,
so the regression went unnoticed until 0.1.16 (the first wheel
built from the new source-of-truth) shipped. Symptom: every
workspace container restart loops with
ImportError: cannot import name 'main_sync' from 'molecule_runtime.main'
— the molecule-runtime CLI script's first line tries to import
the missing symbol. Workspaces stay in `provisioning` until the
10-min sweep marks them failed.
Caught by .github/workflows/runtime-pin-compat.yml, which already
imports the symbol by name as its smoke test. (That check kept
failing red on every recent merge_group run; this PR fixes the
underlying symbol-not-found instead of the smoke step.)
Also strengthens publish-runtime.yml's wheel smoke from
`import molecule_runtime.main` (loads the module — passes even
when entry-point target is missing) to `from molecule_runtime.main
import main_sync` (the actual contract the CLI script needs).
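The difference between the two smoke shapes can be demonstrated with a stub module (module names from this commit; contents assumed):

```python
import sys
import types

# Stub standing in for a wheel whose entry-point target went missing:
# the module exists, but main_sync is not defined on it.
pkg = types.ModuleType("molecule_runtime")
mod = types.ModuleType("molecule_runtime.main")   # note: no main_sync
pkg.main = mod
sys.modules["molecule_runtime"] = pkg
sys.modules["molecule_runtime.main"] = mod

import molecule_runtime.main                      # weak smoke: green

try:
    from molecule_runtime.main import main_sync   # the CLI's real contract
    missing = False
except ImportError:
    missing = True
assert missing   # strict smoke catches the broken entry point
```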
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The runtime-compat change in this branch added a `current_runtime`
kwarg to load_skills(); the watcher passes it through. Test mocks
that pre-date the kwarg signature broke with TypeError, which the
watcher's reload-error try/except swallowed — the symptom was empty
callback lists, not a clear failure.
Switching the fakes to accept **kwargs keeps them forward-compat for
future load_skills additions without another test churn.
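The before/after fake shapes, sketched (function names hypothetical):

```python
# Forward-compatible fake: **kwargs absorbs keyword arguments that
# load_skills grows later (current_runtime today, more tomorrow).
def fake_load_skills(skills_dir, **kwargs):
    return []

# The pre-dated rigid fake breaks with TypeError on the new kwarg —
# which the watcher's reload-error try/except then swallowed.
def rigid_fake(skills_dir):
    return []

fake_load_skills("/tmp/skills", current_runtime="claude-code")  # fine
try:
    rigid_fake("/tmp/skills", current_runtime="claude-code")
    raised = False
except TypeError:
    raised = True
```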
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
SKILL.md frontmatter can now declare `runtime: [claude-code]` or
`runtime: [hermes, claude-code]` to opt out of incompatible adapters
instead of failing at first invocation. Default `["*"]` means universal —
existing skill libraries need zero migration.
Borrowed from hermes' declarative skill-compat pattern surfaced in the
hermes architecture survey. The remaining two patterns (event-log
layer, observability config block) stay open under #119.
Wiring:
- SkillMetadata.runtime: list[str] = ["*"]
- _normalize_runtime_field accepts list, string-sugar, missing -> ["*"];
malformed warns and falls back to universal so a typo never silently
drops a skill.
- load_skills(..., current_runtime=...) filters out skills whose runtime
list lacks "*" or current_runtime, with an INFO log line.
- BaseAdapter.start passes type(self).name() so the live adapter drives
the filter; SkillsWatcher takes the same kwarg so hot-reload honors it.
8 new tests cover default universal, no-field universal, explicit
match/mismatch, string sugar, wildcard short-circuit, current_runtime=None
(preserves old behavior), and malformed-warns-not-drops.
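The normalization + filter semantics above can be sketched as follows (function names from this commit; bodies are assumptions):

```python
import logging

def _normalize_runtime_field(value):
    if value is None:
        return ["*"]                      # field missing -> universal
    if isinstance(value, str):
        return [value]                    # string sugar: runtime: claude-code
    if isinstance(value, list) and all(isinstance(v, str) for v in value):
        return value
    logging.warning("malformed runtime field %r; treating as universal", value)
    return ["*"]                          # a typo never silently drops a skill

def skill_loads(runtime_field, current_runtime):
    allowed = _normalize_runtime_field(runtime_field)
    if current_runtime is None:           # preserves pre-kwarg behavior
        return True
    return "*" in allowed or current_runtime in allowed

assert skill_loads(None, "hermes")                 # universal default
assert skill_loads(["hermes", "claude-code"], "hermes")
assert not skill_loads(["claude-code"], "hermes")  # filtered out
assert skill_loads(123, "hermes")                  # malformed -> universal
```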
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
DRAFT — do NOT merge until gemini-cli template image rebuilds with
its local cli_executor.py copy (template PR #9 just merged at
07:59 UTC; image build kicks off now).
Final adapter-specific deletion from molecule-runtime, completing #87
for the priority adapters (claude-code via PR #2156, plus gemini-cli
via this PR + template #9).
Deletes:
- workspace/cli_executor.py (461 LOC) — CLIAgentExecutor + the
RUNTIME_PRESETS dict for codex / ollama / gemini-cli. The file
moved to molecule-ai-workspace-template-gemini-cli (PR #9, merged).
- workspace/tests/test_agent_base_urls.py — only consumer of
CLIAgentExecutor in the test suite. Tests for the executor
behavior live in the template repo now.
Updates:
- workspace/tests/test_executor_helpers.py — docstring refresh:
executor_helpers.py is the runtime-agnostic shared helpers; the
executor classes themselves live in template repos post-#87.
Codex / ollama presets disappear naturally with the file. They never
had template repos, so no production path could invoke them anyway —
this is dead-code removal as a side effect of the move.
Verified-safe-to-delete:
- heartbeat.py: doesn't import cli_executor
- claude_sdk_executor.py: deleted by PR #2156 (in flight)
- preflight.py: only references runtime names by string; no import
- main.py: doesn't import cli_executor (uses adapter discovery via
ADAPTER_MODULE; the template's adapter constructs the executor)
- Only test_agent_base_urls.py + test_executor_helpers.py docstring
referenced cli_executor
Verification:
- 1249/1249 workspace pytest pass (was 1251; -2 = test_agent_base_urls.py
cases — exact match)
- No live import of cli_executor anywhere in molecule-core after deletion
(grep verified)
Sequencing:
1. ✅ Template PR #9 (gemini-cli local copy) — MERGED
2. ⏳ Template image rebuild — running
3. THIS PR — wait until image is published, then mark ready-for-review
Closes #87 for the priority adapters: workspace/ is now adapter-

agnostic except for adapter discovery (ADAPTER_MODULE) + the
runtime_wedge primitive.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root-cause fix for #118 (chat attachments rendering as plain text links
instead of download chips). User flagged with screenshot 2026-04-26
showing the Design Director agent pasting https://files.catbox.moe/…
in the message body — chat rendered the URL as plain markdown text,
unclickable in the canvas's bubble layout, and unreachable in any SaaS
deployment where the user's browser can't egress to catbox.
The structured `attachments` field already exists, the canvas's
AttachmentChip already renders well, the WebSocket broadcast already
carries attachments verbatim — the missing piece was the LLM choosing
the body over the structured field. Tighten the tool description so it
trains the right behavior.
Three targeted strengthenings:
1. Top-level tool description: enumerated use case (4) now reads
"via the `attachments` field (NEVER paste file URLs in `message`)".
The all-caps NEVER + the explicit field name move the LLM toward
the structured path on first read.
2. `message` param: adds an explicit DO NOT rule with rationale.
Includes the SaaS-reachability reason so operators can grep for
"SaaS" and find this design constraint instead of re-discovering it
after a tenant complaint. Calls out catbox.moe + file:// by name as
concrete examples of forbidden hosts (those are the two we've seen
in production).
3. `attachments` param: leads with REQUIRED, lists the bad
alternatives explicitly (pasting URLs, base64-encoding, telling
user to look at a path). LLMs handle "use X, NOT Y" framings
better than "use X" alone — observed during prompt-engineering
iteration on hermes' tool descriptions.
Tests pin all three load-bearing phrases (4 new in test_a2a_mcp_server.py)
so a future doc edit that softens or drops them fails CI. Brittle by
design — these are prompt-engineering invariants, not implementation
details.
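The shape of those phrase-pin tests, sketched (the test bodies and the full description text are assumptions; the pinned phrases come from this commit):

```python
# A deliberately literal test: grep the rendered tool description for the
# load-bearing prompt-engineering phrases, so a future doc edit that
# softens or drops one fails in CI.
TOOL_DESCRIPTION = (
    "Deliver files to the user via the `attachments` field "
    "(NEVER paste file URLs in `message`)."
)

LOAD_BEARING_PHRASES = [
    "`attachments` field",
    "NEVER paste file URLs",
]

for phrase in LOAD_BEARING_PHRASES:
    assert phrase in TOOL_DESCRIPTION, phrase
```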
This is the root-cause fix. A defensive canvas-side backstop (auto-
detect download-shaped URLs in body and convert to chips) is a
follow-up that could land separately if the steering proves
insufficient in practice.
Verification:
- 1190/1190 workspace pytest pass
- 4 new test_a2a_mcp_server.py cases all green
Closes the steering half of #118. The structured-attachments-only
contract was already enforced server-side (PR #2130 added per-attachment
validation); this PR closes the prompt-side gap.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2 of the universal-runtime refactor (task #87). Now that the
claude-code template repo ships its own claude_sdk_executor.py
(template PR #13 merged + image rebuilt at 07:36 UTC) the
molecule-runtime no longer needs to ship the file.
Deletes:
- workspace/claude_sdk_executor.py (704 LOC)
- workspace/tests/test_claude_sdk_executor.py (~1.6K LOC)
Updates:
- workspace/runtime_wedge.py — drops the "Compatibility shim" docstring
section. The shim was time-bounded ("removed once #87 Phase 2 lands");
this is that PR.
- workspace/tests/test_runtime_wedge.py — drops the
TestClaudeSdkExecutorReExportShim test class (the shim doesn't
exist anymore so the identity assertions would fail at import).
- workspace/tests/conftest.py — drops the claude_agent_sdk stub.
Its only consumer was test_claude_sdk_executor.py which is gone;
no other test imports the SDK.
- workspace/cli_executor.py — comment refresh: claude-code template
repo (not workspace/) is now the home for ClaudeSDKExecutor.
Verified-safe-to-delete:
- heartbeat.py: migrated to runtime_wedge in PR #2154 (no longer
imports from claude_sdk_executor)
- cli_executor.py: only comments referenced claude_sdk_executor;
its line-117 ValueError defends against accidental routing
- tests: only test_claude_sdk_executor.py + test_runtime_wedge.py's
shim class consumed the deleted module; both removed in this PR
Verification:
- 1182/1182 workspace pytest pass (was 1251; -69 = exactly the
deleted test cases — zero unexpected regressions)
- No live import of claude_sdk_executor anywhere in molecule-core
after deletion (grep verified)
Closes #87 for the claude-code adapter. Hermes is already template-only.
The remaining adapter-specific code in workspace/ is cli_executor.py
(codex/ollama/gemini-cli) tracked by task #122. preflight.py's
SUPPORTED_RUNTIMES static list is tracked by task #123 (PR #2155 in
flight).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes task #123 — last piece of #87 cleanup.
Pre-fix: workspace/preflight.py:11 hardcoded a tuple of "supported"
runtime names (claude-code, codex, ollama, langgraph, etc.). Every
new template repo required a code change in molecule-runtime to be
recognized — direct violation of the universal-runtime principle
(#87) where adapters declare themselves and the runtime stays generic.
Post-fix: discovery-based validation via the same ADAPTER_MODULE env
var that production load paths already consult
(workspace/adapters/__init__.py:get_adapter). Distinguished failure
modes so operator messages are concrete:
- ADAPTER_MODULE unset → "no adapter installed; set the env var"
- ADAPTER_MODULE set but module won't import → import error type +
message
- module imports but no Adapter class → "convention violation, add
`Adapter = YourClass`"
- Adapter.name() raises → caught with operator message
- Adapter.name() returns non-string → contract violation message
- Adapter.name() doesn't match config.runtime → drift WARNING (not
fatal; the adapter wins in production, config.yaml is just
documentation)
The drift case is the one behavioral change worth calling out: the
prior static-list path would have hard-failed config.runtime values
not in the allowlist. With discovery, an unknown runtime in
config.yaml is just a documentation drift — the adapter that's
actually installed runs regardless. Operator gets a warning naming
both the configured and installed names so they can fix whichever
is stale.
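A hedged sketch of the discovery-based check with the distinguished failure modes (the `ADAPTER_MODULE` env var and `Adapter` convention come from this commit; the function body and exact message wording are assumptions):

```python
import importlib
import logging
import os

def preflight_adapter(config_runtime):
    """Return an operator-facing error string, or None when healthy."""
    module_name = os.environ.get("ADAPTER_MODULE")
    if not module_name:
        return "no adapter installed; set ADAPTER_MODULE"
    try:
        mod = importlib.import_module(module_name)
    except ImportError as exc:
        return f"adapter module failed to import: {type(exc).__name__}: {exc}"
    adapter_cls = getattr(mod, "Adapter", None)
    if adapter_cls is None:
        return "convention violation: add `Adapter = YourClass`"
    try:
        name = adapter_cls.name()
    except Exception as exc:
        return f"Adapter.name() raised: {exc}"
    if not isinstance(name, str):
        return "contract violation: Adapter.name() must return a string"
    if name != config_runtime:
        # Drift is a WARNING, not fatal: the installed adapter wins.
        logging.warning("config.runtime=%r but installed adapter is %r",
                        config_runtime, name)
    return None
```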
Tests:
- Replaces the obsolete "static list pass/fail" tests with 6 new
cases covering each distinguished failure mode, plus a positive
test for the adapter-matches-config happy path
- Adds an autouse `_default_langgraph_adapter` fixture that
pre-installs a fake adapter via sys.modules monkey-patching, so
existing tests building default WorkspaceConfig (runtime="langgraph")
inherit a valid adapter without each test setting ADAPTER_MODULE
- Failure-mode tests opt out of the default fixture via
@pytest.mark.no_default_adapter (registered in pytest.ini)
- Sentinel pattern (`_UNSET = object()`) for `name_returns` so None
is a passable test value (otherwise `is not None` would skip the
None branch — exact bug the sentinel avoids)
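The sentinel pattern in isolation (the fake-adapter helper shape is an assumption):

```python
# _UNSET distinguishes "argument omitted" from "caller deliberately
# passed None", so None can exercise the non-string branch in tests.
_UNSET = object()

def fake_adapter(name_returns=_UNSET):
    if name_returns is _UNSET:
        name_returns = "langgraph"        # happy-path default
    return {"name": name_returns}

assert fake_adapter()["name"] == "langgraph"
# An `if name_returns is not None` guard would have silently replaced
# the deliberate None below with the default — the bug the sentinel avoids.
assert fake_adapter(name_returns=None)["name"] is None
```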
Verification:
- 22/22 preflight tests pass (was 16; +6 new failure-path tests)
- 1256/1256 workspace pytest pass (was 1251; +5 net)
- No production code path other than preflight changed
Source: 2026-04-27 #87 cleanup audit after PR #2154 (wedge extraction).
This change is independent of the cli_executor.py template moves
(task #122) — completes one of the two remaining cleanup items.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Addresses github-code-quality unused-import flag on the runtime_wedge
re-export shim. Adds __all__ listing the names that exist purely for
backwards-compat (is_wedged / wedge_reason / _reset_sdk_wedge_for_test)
so static analysis recognizes the imports as deliberate exports.
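Self-contained illustration of the fix (module names from this commit; contents stubbed):

```python
import sys
import types

# Stub runtime_wedge so the shim's re-export can resolve.
wedge = types.ModuleType("runtime_wedge")
wedge.is_wedged = lambda: False
wedge.wedge_reason = lambda: None
sys.modules["runtime_wedge"] = wedge

# The shim re-exports the names and pins them in __all__, so a linter
# sees deliberate exports rather than unused imports.
shim_src = (
    "from runtime_wedge import is_wedged, wedge_reason\n"
    "__all__ = ['is_wedged', 'wedge_reason']\n"
)
shim = types.ModuleType("claude_sdk_executor")
exec(shim_src, shim.__dict__)

assert shim.is_wedged is wedge.is_wedged   # same function object
```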
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three changes from /code-review-and-quality on PR #2154:
1. Optional (architecture): wrap state in a private _WedgeState class
instead of bare module-level globals. Public API (mark_wedged /
clear_wedge / is_wedged / wedge_reason / reset_for_test) is
unchanged — adapters never see the class. The class provides forward
cover for any future per-scope variant (multiple executors per process,
a keyed registry, etc.) without churning the call sites. Today there's
exactly one instance (_DEFAULT) so behavior is identical.
2. Optional (readability): clarify the import path in the integration
recipe — in a TEMPLATE repo it's `from molecule_runtime.runtime_wedge`
(PyPI package); in molecule-core itself it's `from runtime_wedge`
(top-level module). Removes the trap where a contributor reading the
docstring while editing in-repo copies the template-style import and
gets ImportError.
3. Nit (readability): dedupe the shim rationale. claude_sdk_executor's
re-export comment now points to runtime_wedge's "Compatibility shim"
section as the source of truth instead of restating the same content.
Avoids docs-in-two-places drift risk.
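A hedged sketch of the _WedgeState wrap (public names from this commit; bodies assumed, including the sticky first-write-wins semantics from the original extraction):

```python
class _WedgeState:
    def __init__(self):
        self.wedged = False
        self.reason = None

    def mark(self, reason):
        if not self.wedged:               # first-write-wins; later marks no-op
            self.wedged, self.reason = True, reason

    def clear(self):                      # clear-when-not-wedged is a no-op
        self.wedged, self.reason = False, None

# Exactly one instance today; module-level helpers delegate to it, so
# the public API is unchanged and adapters never see the class.
_DEFAULT = _WedgeState()

def mark_wedged(reason): _DEFAULT.mark(reason)
def clear_wedge(): _DEFAULT.clear()
def is_wedged(): return _DEFAULT.wedged
def wedge_reason(): return _DEFAULT.reason

mark_wedged("sdk crashed")
mark_wedged("later reason ignored")       # first-write-wins
assert wedge_reason() == "sdk crashed"
clear_wedge()
assert not is_wedged()                    # clear restores healthy
```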
Verification:
- 1251/1251 workspace pytest pass (no behavior change — class wrap
is pure plumbing; module-level helpers delegate to the singleton)
- All shim re-export identity tests still pass (the shim's
`is_wedged is runtime_wedge.is_wedged` assertion holds because we
re-export the SAME function object that delegates to _DEFAULT)
No new tests needed — the existing test suite covers the public API
contract; the class is an implementation detail behind that contract.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Doc-only follow-up to the wedge-state extraction. Adds proactive
guidance so the next adapter (hermes / codex / langgraph / a future
template) discovers the runtime_wedge primitive and integrates the
~6 LOC pattern uniformly instead of inventing its own wedge state.
Two additions:
- workspace/runtime_wedge.py — new "How to use from a NEW adapter"
section in the module docstring with the minimum viable
integration recipe, what-you-get-for-free list, and explicit
DON'TS (don't store local wedge state, don't mark for transient
errors, don't write your own clear logic). Plus a "when wedge is
the WRONG primitive" note to keep adopters from over-using it.
- workspace/adapter_base.py — adds runtime_wedge to the
"Cross-cutting capabilities your adapter can opt into" list in
BaseAdapter's docstring (alongside capabilities() and
idle_timeout_override()). Discoverability path: adapter author
reads BaseAdapter docstring → sees runtime_wedge mention → reads
runtime_wedge module docstring → has the recipe.
Also tightens the "to add a new agent infra" steps in BaseAdapter to
match the actual current model (standalone template repo + ADAPTER_MODULE
env var) rather than the obsolete workspace/adapters/<infra>/ layout
that hasn't been the path since the universal-runtime extraction
started.
Zero code change. Tests untouched (1251/1251 still pass).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Prerequisite for the universal-runtime refactor (task #87) to move
claude_sdk_executor.py out of molecule-runtime into the claude-code
template repo. heartbeat.py had a hard import:
from claude_sdk_executor import is_wedged, wedge_reason
which would break the moment the executor moves out of the runtime
package — the heartbeat would lose access to the wedge state used to
flip workspace status to degraded.
Extract the wedge state to a runtime-side module that the heartbeat
can keep importing regardless of which adapter executor is wedged:
- workspace/runtime_wedge.py — single-flag state + mark_wedged /
clear_wedge / is_wedged / wedge_reason / reset_for_test. Same
semantics as the original claude_sdk_executor implementation
(sticky first-write-wins, auto-clear on observed success). ~100
LOC of small helpers around a single flag; lock-free is acceptable
because there's one executor per workspace process today.
- workspace/claude_sdk_executor.py — drops the in-file definitions;
re-exports the same names from runtime_wedge as a backwards-compat
shim. Any third-party adapter that imported is_wedged / wedge_reason
/ _mark_sdk_wedged from claude_sdk_executor keeps working for one
release cycle while they migrate to runtime_wedge.
- workspace/heartbeat.py — _runtime_state_payload() now imports
from runtime_wedge instead of claude_sdk_executor. Lazy-import
pattern preserved; the docstring updated to explain the new
cross-cutting source-of-truth.
Tests (10 new in test_runtime_wedge.py):
- Default state (unwedged), mark sets flag, first-write-wins,
clear restores healthy, clear-when-not-wedged is no-op,
re-marking after clear is allowed
- Re-export shim: each old name in claude_sdk_executor IS the
runtime_wedge function (identity check), state is shared
(marking via the executor shim is observable via runtime_wedge
and vice versa)
Verification:
- 1251/1251 workspace pytest pass (was 1241 after orphan deletion;
+10 = exactly the new test_runtime_wedge.py cases)
- All existing test_claude_sdk_executor.py cases (which call
_mark_sdk_wedged via the shim) still pass
After this lands + the claude-code template image rebuilds with the
local claude_sdk_executor.py copy (template PR #13), the molecule-core
deletion of workspace/claude_sdk_executor.py becomes safe (the
shim deletion comes alongside the file deletion, since runtime_wedge
is the new public API).
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Removes:
- workspace/hermes_executor.py (545 LOC) — HermesA2AExecutor, an
OpenAI-compat direct-call executor that was the original hermes
integration before the template was rewritten to bridge to
hermes-agent's sidecar API server.
- workspace/tests/test_hermes_executor.py (1307 LOC) — its test file.
Verified-dead-code analysis:
- Zero `from hermes_executor` / `import hermes_executor` imports
anywhere in workspace/, workspace-server/, or
workspace-configs-templates/ (excluding the file itself + its test).
- The hermes template (workspace-configs-templates/hermes/executor.py)
uses HermesAgentProxyExecutor, NOT HermesA2AExecutor — they're
independent implementations. The executor.py file imports from
`executor` (local), not from molecule_runtime.
- Last touched in PR #1974 (2026 a2a-sdk migration to 1.0.0) for SDK
compatibility — kept compiling but never wired into any code path.
- Older than that, only the 2026 open-source restructure rename.
Why now: starting task #87 (universal-runtime violation, move adapter-
specific code out of workspace/). Dead-code deletion is the safest
first step and motivates the broader refactor by clearing the
landscape — no risk of someone defending HermesA2AExecutor as
"actually used somewhere."
Verification:
- 1241/1241 workspace pytest pass (was 1312; the 71 dropped tests
are exactly test_hermes_executor.py's coverage)
- No new failures, no broken imports anywhere
The remaining adapter-specific executors in workspace/ that #87 will
eventually relocate (per the user's scope: claude-code + hermes priority,
others later):
- workspace/claude_sdk_executor.py (757 LOC) → claude-code template repo
- workspace/cli_executor.py (461 LOC) → defer (codex/ollama/etc still
use the runtime presets here; comes back later when those bump versions)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three small wins from the hermes-agent design survey, bundled because
each is too small for its own PR but they all improve the priority
adapters (claude-code + hermes) immediately.
1. Hermes-style cap on telemetry fields, applied INSIDE report_activity
so every caller benefits without remembering. error_detail capped at
4096 (hermes' value); summary capped at 256 (one-liner ceiling). The
existing call site in tool_delegate_task already truncated error_detail
at 4096, but moving the cap into the helper closes the door on a
future caller pasting a giant traceback. response_text is NOT capped
(it's the agent's user-visible reply; truncating would silently drop
content). Pinned by 4 new tests including a negative-pin that
response_text MUST stay untruncated.
2. Sharper MCP tool descriptions for commit_memory + recall_memory —
hermes' delegate_task description literally says "WAIT for the response"
and delegate_task_async says "Returns immediately." LLMs pick the
right tool variant from descriptions; ambiguity costs accuracy.
- commit_memory now states it APPENDS (each call creates a row, no
overwrite) and that GLOBAL requires tier 0.
- recall_memory now states it's case-insensitive substring search
with no pagination, returns all matches, and that empty-query is
cheap and safer than a narrow keyword.
3. (no code change) Filed task #120 for the bigger user-flow win — a
per-workspace tool enable/disable menu in Canvas Config — and task
#121 for model-string passthrough (depends on #87 universal-runtime
refactor).
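The cap-inside-the-helper pattern from item 1, sketched (limits from this commit; the helper body is an assumption):

```python
ERROR_DETAIL_MAX = 4096   # hermes' value
SUMMARY_MAX = 256         # one-liner ceiling

def report_activity(summary=None, error_detail=None, response_text=None):
    # Caps applied INSIDE the helper so every caller benefits without
    # remembering to truncate.
    if summary is not None:
        summary = summary[:SUMMARY_MAX]
    if error_detail is not None:
        error_detail = error_detail[:ERROR_DETAIL_MAX]
    # response_text deliberately NOT capped: it's the agent's
    # user-visible reply; truncating would silently drop content.
    return {"summary": summary,
            "error_detail": error_detail,
            "response_text": response_text}
```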
Verification:
- 1312/1312 Python pytest pass (was 1308, +4 new)
See task #119 for the architectural follow-ups (event-log layer,
declarative skill compat, observability config block) and project
memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reviewer bot flagged: import was leftover from earlier scaffolding —
all test fixtures use sys.modules monkey-patching with SimpleNamespace
instead. Drop to unblock merge. Tests still 5/5 pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Capability primitive #2 (task #117). The first cross-cutting capability
where the adapter actually displaces platform behavior — claude-code's
streaming session can legitimately go silent for 8+ minutes during
synthesis + slow tool calls; the platform's hardcoded 5min idle timer
in a2a_proxy.go cancels it mid-flight (the bug PR #2128 patched at
the env-var layer). This PR fixes it at the right layer: the adapter
declares "I need 600s" and the platform's dispatch path honors it.
Wire shape (Python → Go):
POST /registry/heartbeat
{
"workspace_id": "...",
...
"runtime_metadata": {
"capabilities": {"heartbeat": false, "scheduler": false, ...},
"idle_timeout_seconds": 600 // optional, omitted = use default
}
}
Default behavior preserved: any adapter that doesn't override
BaseAdapter.idle_timeout_override() (returns None by default) sends
no idle_timeout_seconds field; the Go side falls through to
idleTimeoutDuration (env A2A_IDLE_TIMEOUT_SECONDS, default 5min).
Existing langgraph / crewai / deepagents workspaces are unaffected.
Components:
Python:
- adapter_base.py: idle_timeout_override() method on BaseAdapter
returning None (the platform-default sentinel).
- heartbeat.py: _runtime_metadata_payload() lazy-imports the active
adapter and assembles the capability + override block. Try/except
swallows ANY error so heartbeat never breaks because of capability
discovery — observability outranks capability accuracy.
Go:
- models.HeartbeatPayload.RuntimeMetadata (pointer so absent =
"old runtime, didn't say"; explicit zero-cap = "new runtime,
declared no native ownership").
- handlers.runtimeOverrides: in-memory sync.Map cache keyed by
workspaceID. Populated by the heartbeat handler, consulted on
every dispatchA2A. Reset on platform restart (worst-case 30s of
platform-default behavior — acceptable; nothing about overrides
is correctness-critical).
- a2a_proxy.dispatchA2A: looks up the override before
applyIdleTimeout; falls through to global default when absent.
Tests:
Python (17, all new):
- RuntimeCapabilities dataclass shape (frozen, defaults, wire keys)
- BaseAdapter.capabilities() default + override + sibling isolation
- idle_timeout_override default, positive override, dropped-override
- Heartbeat metadata producer: default adapter emits all-False,
native adapter emits flag + override, missing ADAPTER_MODULE
returns {} (graceful), zero/negative override is omitted from
wire, exception inside adapter swallowed
Go (6, all new):
- SetIdleTimeout + IdleTimeout round-trip
- Zero/negative duration clears the override
- Empty workspace_id ignored
- Replacement (heartbeat overwrites prior value)
- Reset clears entire cache
- Concurrent reads + writes (sync.Map invariant)
Verification:
- 1308 / 1308 workspace pytest pass (was 1300, +8)
- All Go handlers tests pass (6 new + existing)
- go vet clean
See project memory `project_runtime_native_pluggable.md` for the
architecture principle this implements.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Foundation primitive for the native+pluggable runtime principle (task
#117, blocks #87). Lets each adapter declare which cross-cutting
capabilities it owns natively (heartbeat, scheduler, durable session,
status mgmt, retry, activity decoration, channel dispatch) versus
delegates to the platform's fallback implementation.
Pure additive: every existing adapter inherits BaseAdapter.capabilities()
which returns RuntimeCapabilities() — every flag False — so today's
"platform owns everything" behavior is preserved exactly. Subsequent
PRs land platform-side consumers (idle-timeout override, scheduler
skip, status-transition hook, etc.) one capability at a time.
Why a frozen dataclass instead of class attributes: capabilities are
declared at class-load time and read by the platform on every heartbeat.
A mutable value would let a runtime change capabilities mid-flight,
creating impossible-to-debug state where the platform's idea of who-
owns-heartbeat drifts from the adapter's actual code.
Why a `to_dict()` with explicit short keys: the Go side will read these
from the heartbeat payload by string key. The dict's wire names are
pinned independently of Python field names so a Python-side rename
doesn't silently break the Go consumer (test pins this).
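A hedged sketch of the dataclass (the seven flags come from the list above; the exact field and wire names are assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # frozen: capabilities can't drift mid-flight
class RuntimeCapabilities:
    heartbeat: bool = False
    scheduler: bool = False
    durable_session: bool = False
    status_mgmt: bool = False
    retry: bool = False
    activity_decoration: bool = False
    channel_dispatch: bool = False

    def to_dict(self):
        # Wire names pinned by hand, independent of Python field names,
        # so a Python-side rename can't silently break the Go consumer.
        return {
            "heartbeat": self.heartbeat,
            "scheduler": self.scheduler,
            "durable_session": self.durable_session,
            "status_mgmt": self.status_mgmt,
            "retry": self.retry,
            "activity_decoration": self.activity_decoration,
            "channel_dispatch": self.channel_dispatch,
        }
```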
Tests (9 new):
- is a frozen dataclass (mutation rejected)
- all 7 default flags are False (load-bearing — flipping any default
silently moves ownership for langgraph/crewai/deepagents)
- to_dict() keys are stable wire names (Go contract)
- BaseAdapter.capabilities() default returns all-False
- subclass override mechanism works
- sibling adapters' defaults aren't affected by an override
Verification:
- 1300/1300 workspace pytest pass (was 1291, +9)
- Zero behavior change for any existing code path
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two Critical bugs caught in code review of the agent→user attachments PR:
1. **Empty-URI attachments slipped past validation.** Gin's
go-playground/validator does NOT iterate slice elements without
`dive` — verified zero `dive` usage anywhere in workspace-server —
so the inner `binding:"required"` tags on NotifyAttachment.URI/Name
were never enforced. `attachments: [{"uri":"","name":""}]` would
pass validation, broadcast empty-URI chips that render blank in
canvas, AND persist them in activity_logs for every page reload to
re-render. Added explicit per-element validation in Notify (returns
400 with `attachment[i]: uri and name are required`) plus
defence-in-depth in the canvas filter (rejects empty strings, not
just non-strings).
3-case regression test pins the rejection.
2. **Hardcoded application/octet-stream stripped real mime types.**
`_upload_chat_files` always passed octet-stream as the multipart
Content-Type. chat_files.go:Upload reads `fh.Header.Get("Content-Type")`
FIRST and only falls back to extension-sniffing when the header is
empty, so every agent-attached file lost its real type forever —
broke the canvas's MIME-based icon/preview logic. Now sniff via
`mimetypes.guess_type(path)` and only fall back to octet-stream
when sniffing returns None.
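The sniff-with-fallback from item 2, using stdlib mimetypes (the exact helper shape in _upload_chat_files is an assumption):

```python
import mimetypes

def content_type_for(path):
    # Guess from the extension; only fall back to octet-stream when the
    # sniff returns None, so real types survive the multipart upload.
    guessed, _encoding = mimetypes.guess_type(path)
    return guessed or "application/octet-stream"
```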
Plus three Required nits:
- `sqlmockArgMatcher` was misleading — the closure always returned
true after capture, identical to `sqlmock.AnyArg()` semantics, but
named like a custom matcher. Renamed to `sqlmockCaptureArg(*string)`
so the intent (capture for post-call inspection, not validate via
driver-callback) is unambiguous.
- Test asserted notify call by `await_args_list[1]` index — fragile
to any future _upload_chat_files refactor that adds a pre-flight
POST. Now filter call list by URL suffix `/notify` and assert
exactly one match.
- Added `TestNotify_RejectsAttachmentWithEmptyURIOrName` (3 cases)
covering empty-uri, empty-name, both-empty so the Critical fix
stays defended.
Deferred to follow-up:
- ORDER BY tiebreaker for same-millisecond notifies — pre-existing
risk, not regression.
- Streaming multipart upload — bounded by the platform's 50MB total
cap so RAM ceiling is fixed; switch to streaming if cap rises.
- Symlink rejection — agent UID can already read whatever its
filesystem perms allow via the shell tool; rejecting symlinks
doesn't materially shrink the attack surface.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the gap where the Director would say "ZIP is ready at /tmp/foo.zip"
in plain text instead of attaching a download chip — the runtime literally
had no API for outbound file attachments. The canvas + platform's
chat-uploads infrastructure already supported the inbound (user → agent)
direction (commit 94d9331c); this PR wires the outbound side.
End-to-end shape:
agent: send_message_to_user("Done!", attachments=["/tmp/build.zip"])
↓ runtime
POST /workspaces/<self>/chat/uploads (multipart)
↓ platform
/workspace/.molecule/chat-uploads/<uuid>-build.zip
→ returns {uri: workspace:/...build.zip, name, mimeType, size}
↓ runtime
POST /workspaces/<self>/notify
{message: "Done!", attachments: [{uri, name, mimeType, size}]}
↓ platform
Broadcasts AGENT_MESSAGE with attachments + persists to activity_logs
with response_body = {result: "Done!", parts: [{kind:file, file:{...}}]}
↓ canvas
WS push: canvas-events.ts adds attachments to agentMessages queue
Reload: ChatTab.loadMessagesFromDB → extractFilesFromTask sees parts[]
Either path → ChatTab renders download chip via existing path
Files changed:
workspace-server/internal/handlers/activity.go
- NotifyAttachment struct {URI, Name, MimeType, Size}
- Notify body accepts attachments[], broadcasts in payload,
persists as response_body.parts[].kind="file"
canvas/src/store/canvas-events.ts
- AGENT_MESSAGE handler reads payload.attachments, type-validates
each entry, attaches to agentMessages queue
- Skips empty events (was: skipped only when content empty)
workspace/a2a_tools.py
- tool_send_message_to_user(message, attachments=[paths])
- New _upload_chat_files helper: opens each path, multipart POSTs
to /chat/uploads, returns the platform's metadata
- Fail-fast on missing file / upload error — never sends a notify
with a half-rendered attachment chip
workspace/a2a_mcp_server.py
- inputSchema declares attachments param so claude-code SDK
surfaces it to the model
- Defensive filter on the dispatch path (drops non-string entries
if the model sends a malformed payload)
Tests:
- 4 new Python: success path, missing file, upload 5xx, no-attach
backwards compat
- 1 new Go: Notify-with-attachments persists parts[] in
response_body so chat reload reconstructs the chip
Why /tmp paths work even though they're outside the canvas's allowed
roots: the runtime tool reads the bytes locally and re-uploads through
/chat/uploads, which lands the file under /workspace (an allowed root).
The agent can specify any readable path.
Does NOT include: agent → agent file transfer. Different design problem
(cross-workspace download auth: peer would need a credential to call
sender's /chat/download). Tracked as a follow-up under task #114.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Critical follow-up to PR #2126's review. Two real bugs:
1. **Runtime QUEUED never resolved.** Platform's drain stitch updates
the platform's delegate_result row when a queued delegation finally
completes, but never pushes back to the runtime. The LLM polling
check_delegation_status saw status="queued" forever — combined with
the new docstring guidance ("queued → wait, peer will reply"), the
model would wait indefinitely on a state that never resolves.
Strictly worse than pre-PR behavior where it would have at least
bypassed.
2. **Live updates dead code.** delegation.go writes activity rows by
direct INSERT INTO activity_logs, bypassing the LogActivity helper
that fires ACTIVITY_LOGGED. Adding "delegation" to the canvas's
ACTIVITY_LOGGED filter (PR #2126 first cut) was inert — initial
GET worked, live updates did not.
Fix:
(1) Runtime side, workspace/builtin_tools/delegation.py:
- New `_refresh_queued_from_platform(task_id)` async helper that
pulls /workspaces/<self>/delegations and finds the platform-side
delegate_result row for our task_id.
- check_delegation_status calls _refresh when local status is
QUEUED, so the LLM's poll itself drives state convergence.
- Best-effort: GET failure leaves local state untouched, next
poll retries.
- Docstring updated to reflect the actual behavior ("polls
transparently — keep polling and you'll see the flip").
- 4 new tests cover: QUEUED → completed via refresh; QUEUED →
failed via refresh; refresh keeps QUEUED when platform hasn't
resolved; refresh swallows network errors safely.
(2) Canvas side, AgentCommsPanel.tsx WS push handler:
- Listens for DELEGATION_SENT / DELEGATION_STATUS / DELEGATION_COMPLETE
/ DELEGATION_FAILED in addition to ACTIVITY_LOGGED.
- Each event's payload synthesized into an ActivityEntry shape
so toCommMessage's existing delegation branch maps it. Status
derived: STATUS uses payload.status, COMPLETE → "completed",
FAILED → "failed", SENT → "pending".
- The ACTIVITY_LOGGED branch keeps the "delegation" type accepted
as a no-op-today / future-proof path: if delegation handlers
are ever refactored to call LogActivity, this lights up
automatically without another canvas change.
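The status derivation is a small mapping; sketched in Python for brevity (the canvas code is TypeScript, and the function name here is hypothetical):

```python
def delegation_event_status(event_type: str, payload: dict):
    """Map a delegation WS event to the status the ActivityEntry mapper expects."""
    if event_type == "DELEGATION_STATUS":
        return payload.get("status")
    return {
        "DELEGATION_SENT": "pending",
        "DELEGATION_COMPLETE": "completed",
        "DELEGATION_FAILED": "failed",
    }.get(event_type)
```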
Doesn't change: the docstring guidance ("queued → wait, don't bypass")
is now actually load-bearing because the refresh path will deliver
the eventual outcome. Without the refresh, the guidance was a trap.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two bugs that compounded into the "Director does the work itself" UX:
1. workspace/builtin_tools/delegation.py: _execute_delegation only
handled HTTP 200 in the response branch. When the peer's a2a-proxy
returned HTTP 202 + {queued: true} (single-SDK-session bottleneck
on the peer), the loop fell through. Two iterations later the
`if "error" in result` check tried to access an unbound `result`,
the coroutine ended quietly, and the delegation stayed at FAILED
with error="None". The LLM checking status saw "failed" + the
platform's "Delegation queued — target at capacity" log line in
chat context, concluded the peer was permanently unavailable, and
bypassed delegation to do the work itself.
Fix: explicit 202+queued branch. Adds DelegationStatus.QUEUED,
marks the local delegation as QUEUED, mirrors to the platform,
and returns cleanly without retrying. The retry loop is for
transient transport errors — queueing is a real ack, not a failure
to retry against (retrying would just re-queue the same task).
check_delegation_status docstring extended with explicit per-status
guidance: pending/in_progress → wait, queued → wait (peer busy on
prior task, reply WILL arrive), completed → use result, failed →
real error in error field; only fall back on failed, never queued.
2. canvas/src/components/tabs/chat/AgentCommsPanel.tsx: filter dropped
every delegation row because it whitelisted only a2a_send /
a2a_receive. activity_type='delegation' rows (written by the
platform's /delegate handler with method='delegate' or
'delegate_result') never reached toCommMessage. User saw "No
agent-to-agent communications yet" while 6+ delegations existed
in the DB.
Fix: include "delegation" in both the initial filter and the
WS push filter, plus a delegation branch in toCommMessage that
maps the row as outbound (always, since the platform proxies on
our behalf) and uses summary as the primary text source.
Tests:
- 3 new Python tests cover the 202+queued path: status becomes
QUEUED not FAILED; no retry on queued (counted by URL match
against the A2A target since the mock is shared across all
AsyncClient calls); bare 202 without {queued:true} still
falls through to the existing retry-then-FAILED path.
- 3 new TS tests cover the delegation mapper: 'delegate' row
maps as outbound to target with summary text; queued
'delegate_result' preserves status='queued' (load-bearing for
the LLM's wait-vs-bypass decision); missing target_id returns
null instead of rendering a ghost.
Does NOT solve: the underlying single-SDK-session bottleneck that
causes peers to queue in the first place. Tracked as task #102
(parallel SDK sessions per workspace) — real architectural work.
This PR makes the runtime handle the queueing correctly so the LLM
doesn't bail out, and makes the delegations visible in Agent Comms
so operators can see what's happening.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Copilot Auto-fix in 5a8f42b4 addressed the duplicate-import lint by
removing 'import claude_sdk_executor as _executor_mod' entirely, but the
async wedge tests (test_execute_marks_wedge_*, test_execute_clears_wedge_*)
still call _executor_mod._reset_sdk_wedge_for_test() etc. — so they failed
with NameError once that line was removed.
Restore the alias, but at the top of the file (alongside the other module-
level imports) rather than at line 1248. The late-file binding was the
proximate cause of the original CI failure: with --cov enabled (#1817),
sys.settrace + the @pytest.mark.asyncio wrapper combination meant the
late module-level binding was not visible from inside the async test
bodies, even though the binding existed by the time the module finished
loading. Hoisting the alias fixes that scope-resolution issue.
Verified locally with the exact CI config (--cov-fail-under=86):
1280 passed, 2 xfailed — total coverage 90.25%
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Three files conflicted with staging changes that landed while this PR
sat open. Resolved each by combining both intents (not picking one side):
- a2a_proxy.go: keep the branch's idle-timeout signature
(workspaceID parameter + comment) AND apply staging's #1483 SSRF
defense-in-depth check at the top of dispatchA2A. Type-assert
h.broadcaster (now an EventEmitter interface per staging) back to
*Broadcaster for applyIdleTimeout's SubscribeSSE call; falls through
to no-op when the assertion fails (test-mock case).
- a2a_proxy_test.go: keep both new test suites — branch's
TestApplyIdleTimeout_* (3 cases for the idle-timeout helper) AND
staging's TestDispatchA2A_RejectsUnsafeURL (#1483 regression). Updated
the staging test's dispatchA2A call to pass the workspaceID arg
introduced by the branch's signature change.
- workspace_crud.go: combine both Delete-cleanup intents:
* Branch's cleanupCtx detachment (WithoutCancel + 30s) so canvas
hang-up doesn't cancel mid-Docker-call (the container-leak fix)
* Branch's stopAndRemove helper that skips RemoveVolume when Stop
fails (orphan sweeper handles)
* Staging's #1843 stopErrs aggregation so Stop failures bubble up
as 500 to the client (the EC2 orphan-instance prevention)
Both concerns satisfied: cleanup runs to completion past canvas
hangup AND failed Stop calls surface to caller.
Build clean, all platform tests pass.
🤖 Generated with [Claude Code](https://claude.com/claude-code)