Commit Graph

3262 Commits

Hongming Wang
8d77de68c4 docs: update ecosystem-watch date to 2026-04-27 2026-04-27 14:39:35 -07:00
44dc3c6943
Merge pull request #2189 from Molecule-AI/fix/delegate-task-retry-transient
fix(a2a): auto-retry transient transport errors in send_a2a_message (up to 5x)
2026-04-27 20:58:47 +00:00
Hongming Wang
e87a9c3858 fix(a2a): auto-retry transient transport errors in send_a2a_message
Three different intermittent failures observed during a single
manual-test session — RemoteProtocolError, ReadTimeout, ConnectError —
each surfaced as a "Failed to deliver to <peer>" error chip in the
canvas Agent Comms panel even though the next attempt would have
succeeded (verified by direct probes from the same source workspace
to the same peer). The error message even told the user "Usually a
transient network blip — retry once," but it left the retry to a human.

Auto-retry inside send_a2a_message itself: up to 5 attempts (1
initial + 4 retries) with exponential backoff (1s, 2s, 4s, 8s,
16s-capped), each backoff jittered ±25% to break sync across
siblings. Cumulative wall-clock capped at 600s by
_DELEGATE_TOTAL_BUDGET_S so a string of 5×300s ReadTimeouts can't
make the caller wait 25 minutes — once the deadline elapses, retries
stop even if attempts remain.

Retry only on transport-layer transients:
  - ConnectError / ConnectTimeout (peer's listening socket not ready)
  - RemoteProtocolError (peer closed TCP without writing — observed
    when a peer's prior in-flight Claude SDK session aborted)
  - ReadError / WriteError (network blip on Docker bridge)
  - ReadTimeout (peer wrote no response in 300s)

Application-level errors are NOT retried — they're deterministic and
retrying just wastes wall-clock:
  - HTTP 4xx (peer rejected the request format)
  - JSON parse failures (peer returned garbage)
  - JSON-RPC error in response body (peer's runtime errored cleanly)
  - Programmer-bug exceptions (ValueError, etc.)

8 new tests pin the contract:
  - retry succeeds after 2 RemoteProtocolErrors
  - retry succeeds after 1 ConnectError
  - all 5 attempts fail → returns formatted last-error
  - capped at exactly _DELEGATE_MAX_ATTEMPTS (regression cover for
    "did someone bump the constant accidentally?")
  - JSON-RPC error response NOT retried (1 attempt only)
  - non-httpx exception NOT retried (programmer bugs stay loud)
  - total budget caps the loop even if attempts remain
  - backoff schedule grows exponentially with ±25% jitter

Refactor: extracted _format_a2a_error() so the success and exhausted
paths share one error-formatting routine. _delegate_backoff_seconds()
is a pure function so the schedule is unit-testable without monkey-
patching asyncio.sleep.
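
For illustration, the shape described above looks roughly like this (the
constant and helper names _DELEGATE_MAX_ATTEMPTS, _DELEGATE_TOTAL_BUDGET_S
and _delegate_backoff_seconds are from this commit; the loop body and the
exception tuple are a sketch, not the actual implementation, which formats
the final failure via _format_a2a_error instead of raising):

  import asyncio, random, time
  import httpx

  _DELEGATE_MAX_ATTEMPTS = 5       # 1 initial attempt + 4 retries
  _DELEGATE_TOTAL_BUDGET_S = 600   # cumulative wall-clock cap

  # Transport-layer transients worth retrying; anything else propagates.
  _TRANSIENT = (httpx.ConnectError, httpx.ConnectTimeout,
                httpx.RemoteProtocolError, httpx.ReadError,
                httpx.WriteError, httpx.ReadTimeout)

  def _delegate_backoff_seconds(attempt: int) -> float:
      """Pure function: exponential backoff capped at 16s, jittered ±25%."""
      return min(2 ** attempt, 16) * random.uniform(0.75, 1.25)

  async def _send_with_retries(do_send):
      deadline = time.monotonic() + _DELEGATE_TOTAL_BUDGET_S
      last_exc = None
      for attempt in range(_DELEGATE_MAX_ATTEMPTS):
          try:
              return await do_send()
          except _TRANSIENT as exc:
              last_exc = exc
              # Stop once the total budget is spent, even if attempts remain.
              if time.monotonic() >= deadline or attempt == _DELEGATE_MAX_ATTEMPTS - 1:
                  break
              await asyncio.sleep(_delegate_backoff_seconds(attempt))
      raise last_exc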

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 13:52:01 -07:00
b5441b8c09
Merge pull request #2188 from Molecule-AI/fix/cascade-stop-removal-in-progress
fix(workspace-server): cascade-delete race + ACTIVITY_LOGGED body fidelity
2026-04-27 20:46:46 +00:00
Hongming Wang
c91c09dc55 fix(activity): include request/response bodies in ACTIVITY_LOGGED broadcast
Canvas Agent Comms bubbles for outbound delegation showed only
"Delegating to <peer>" boilerplate during the live update window —
the actual task text only surfaced after a refresh re-fetched the row
from /workspaces/:id/activity. Symptom flagged today during a fresh
delegation manual test where the bubble said "Delegating to Perf
Auditor" instead of the user's "audit moleculesai.app for
performance" prompt.

Root cause: LogActivity's broadcast payload at activity.go:510-518
deliberately omitted request_body and response_body, so the canvas's
live-update path (AgentCommsPanel.tsx:271-289) saw `p.request_body =
undefined` and toCommMessage fell back to the
`Delegating to ${peerName}` template string. The DB row stored the
real task / reply, which is why GET-on-mount worked.

Fix: include both bodies in the broadcast as json.RawMessage values
(no re-marshal cost — they were already encoded for the DB insert
above). Same pattern as tool_trace, which has been included since #1814.

Both bodies are bounded by the workspace-side caller's own caps: the
runtime's report_activity helper caps error_detail at 4096 chars and
summary at 256, and request/response are constrained by the runtime's
own limits — a typical delegate_task payload is hundreds of chars to a
few KB. If a much larger broadcast becomes a concern later, a soft
cap can be added at this site without breaking the contract.

Three regression tests pin the broadcast shape:
- request_body present → canvas renders the actual task text
- response_body present → canvas renders the actual reply text
- response_body nil → omitted from payload (no empty-bubble flicker)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 13:38:23 -07:00
Hongming Wang
5a7659c54d
Merge pull request #2108 from Molecule-AI/ci/cicd-review-quick-wins
ci: e2e-staging-saas on staging + canary auto-issue thresholded at 3 reds
2026-04-27 20:29:12 +00:00
dccec657d6
Merge branch 'staging' into ci/cicd-review-quick-wins 2026-04-27 13:27:16 -07:00
e0a35a3c77
Merge pull request #2187 from Molecule-AI/fix/mcp-server-path-wheel-relative
fix(runtime): use lowercase wire role for v0.3 JSON-RPC compat layer
2026-04-27 20:27:03 +00:00
Hongming Wang
92d99d96fe fix(provisioner): treat "removal already in progress" as no-op success
Cascade-deleting a 7-workspace org returned 500 with

  "workspace marked removed, but 2 stop call(s) failed — please retry:
   stop eeb99b5d-...: force-remove ws-eeb99b5d-607: Error response
   from daemon: removal of container ws-eeb99b5d-607 is already in
   progress"

even though the DB-side post-condition succeeded (removed_count=7) and
the containers WERE removed shortly after. The fanout fired Stop() on
every workspace concurrently and the orphan sweeper happened to reap
two of them at the same instant, so Docker rejected the second
ContainerRemove with "removal already in progress" — a race-condition
ack, not a real failure. Retrying just races the same in-flight
removal.

The post-condition we care about (the container WILL be gone) is
identical to a successful removal, so Stop() should treat it the
same way it already treats "No such container" — a no-op return nil
that lets the caller proceed with volume cleanup. Real daemon
failures (timeout, EOF, ctx cancel) still surface as errors.

Two pieces:

  - New isRemovalInProgress() predicate using the same string-match
    approach as isContainerNotFound (docker/docker has no typed
    errdef for this; the CLI itself relies on the message).

  - Stop() now treats the predicate as success, with a log line
    distinct from the not-found path so debugging can tell which
    race fired.

Both substrings ("removal of container" + "already in progress") must
match — "already in progress" alone would false-positive on unrelated
operations like image pulls. Truth table pinned in 7 new test cases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 13:25:32 -07:00
Hongming Wang
93e8e5329b
Merge pull request #2173 from Molecule-AI/deps/postcss-8.5.10-ghsa-qx2v-qp2m-jg93
deps(canvas): bump postcss 8.5.9 → 8.5.12 (GHSA-qx2v-qp2m-jg93, medium)
2026-04-27 20:20:13 +00:00
Hongming Wang
18b21d420e
Merge pull request #2185 from Molecule-AI/fix/canvas-send-button-stuck-after-ws-reply
fix(canvas): clear sendInFlightRef on WS-push reply path
2026-04-27 20:16:39 +00:00
Hongming Wang
4028b81e04 refactor(canvas): route panel WS subscriptions through global socket
Both AgentCommsPanel and ChatTab's activity-feed opened raw
`new WebSocket(WS_URL)` instances per mount, with no onclose handler
and no reconnect logic. When the underlying connection dropped — idle
timeout, browser background-tab throttle, network jitter — the per-
panel sockets stayed dead until the panel re-mounted (refresh or
sub-tab unmount/remount). Live agent-comms bubbles and live activity
feed lines silently went missing in the gap, manifesting as "the
delegation didn't show up until I refreshed."

The global ReconnectingSocket in store/socket.ts already owns
reconnect, exponential backoff, health-check, and HTTP fallback poll.
Routing component subscribers through it gives every consumer those
guarantees for free, with one TCP connection per tab instead of N.

Three new pieces:

  - store/socket-events.ts: tiny pub/sub bus. emitSocketEvent fans out
    every decoded WSMessage to the listener Set; subscribeSocketEvents
    returns an unsubscribe function. A throwing listener is logged and
    isolated so it can't break siblings.

  - store/socket.ts: ws.onmessage now calls emitSocketEvent(msg) right
    after applyEvent(msg), so the store's derived state and component
    subscribers stay in lockstep on every event arrival.

  - hooks/useSocketEvent.ts: React hook that registers exactly once
    per mount, capturing the latest handler in a ref so the closure
    sees current state/props without re-subscribing on every render.

Refactored sites:

  - AgentCommsPanel: replaced its WebSocket-in-useEffect block with
    useSocketEvent. Same parsing logic; the panel no longer opens its
    own connection.

  - ChatTab activity feed: split the previous useEffect in two — one
    seeds the activity log when `sending` flips, the other subscribes
    unconditionally and gates work on `sending` inside the handler.
    Hooks can't be conditional, so the gate has to live in the body
    rather than around the effect.

The ws-close graceful-close helper is no longer needed in either
site; the global socket owns its own teardown.

Tests: 6 new tests for the bus contract (single delivery, fan-out
order, unsubscribe, throwing-listener isolation, no-subscriber emit,
duplicate-subscribe Set semantics). All 27 existing socket tests
still pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 13:12:47 -07:00
Hongming Wang
81c4c1321c fix(runtime): use lowercase wire role for v0.3 JSON-RPC compat layer
Manual-test failure surfaced what was hidden behind the MCP-path bug:
once delegate_task could actually fire, every cross-workspace call
came back as JSON-RPC -32600 "Invalid Request" with the underlying
pydantic ValidationError:

    params.message.role
      Input should be 'agent' or 'user' [type=enum,
      input_value='ROLE_USER', input_type=str]

PR #2184's a2a-sdk 1.x migration sweep over-corrected: it changed
every `"role": "user"` literal in JSON-RPC payload construction to
`"role": "ROLE_USER"` to match the protobuf enum names of the 1.x
native types (a2a.types.Role.ROLE_USER / ROLE_AGENT). That was
correct for in-process Message construction (which the SDK
serialises before wire transmission) but WRONG for the 8 sites that
hand-build JSON-RPC payloads. The workspace's own a2a-sdk runs
inbound requests through the v0.3 compat adapter
(/usr/local/lib/python3.11/site-packages/a2a/compat/v0_3/) because
main.py sets enable_v0_3_compat=True for backwards compatibility,
and that adapter validates against the v0.3 Pydantic Role enum
(`agent` | `user` lowercase). The protobuf-style names blow it up.

Reverted the 8 wire-payload sites to lowercase:
  - workspace/a2a_client.py:74
  - workspace/a2a_cli.py:74, 111
  - workspace/heartbeat.py:378
  - workspace/main.py:464, 563
  - workspace/builtin_tools/a2a_tools.py:60
  - workspace/builtin_tools/delegation.py:272

Native-type usage at workspace/a2a_executor.py:471 (`Role.ROLE_AGENT`)
stays — that's an in-process Message construction; the SDK handles
wire serialisation correctly.

Updated the misleading comment at main.py:255-257 (which said
"outbound payloads are now 1.x-shaped (ROLE_USER)") to spell out
the actual rule: outbound JSON-RPC wire payloads MUST use v0.3
shape, native types are only for in-process construction.

New regression test test_jsonrpc_wire_role_format.py greps the 6
wire-payload-emitting files for any "ROLE_USER" / "ROLE_AGENT"
string literal and fails loud — cheapest possible drift detector.
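
A sketch of that detector's shape (file list taken from the sites listed
above; layout and assertion wording are illustrative, not the literal test
file):

  from pathlib import Path

  WIRE_PAYLOAD_FILES = [
      "workspace/a2a_client.py",
      "workspace/a2a_cli.py",
      "workspace/heartbeat.py",
      "workspace/main.py",
      "workspace/builtin_tools/a2a_tools.py",
      "workspace/builtin_tools/delegation.py",
  ]

  def test_wire_payloads_use_lowercase_roles():
      for rel in WIRE_PAYLOAD_FILES:
          text = Path(rel).read_text()
          # Hand-built JSON-RPC payloads must stay v0.3-shaped.
          assert "ROLE_USER" not in text, f"{rel} uses protobuf-style role"
          assert "ROLE_AGENT" not in text, f"{rel} uses protobuf-style role"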

Why E2E missed it: the priority-runtimes harness sends a single
message canvas → workspace, but the canvas already used lowercase
"user" (it never went through the migration sweep). The bug only
surfaces on workspace → workspace delegation, which the harness
doesn't exercise. Same gap as #131 (extend smoke to call main()
against a stub).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 12:40:11 -07:00
Hongming Wang
db3b472bc9
Merge pull request #2186 from Molecule-AI/fix/mcp-server-path-wheel-relative
fix(runtime): legacy /app/ path leaks across MCP server + agent prompts + docstrings
2026-04-27 19:32:33 +00:00
Hongming Wang
49ded74876 docs(cli-runtime): use module-form invocation, drop dead shell-alias claim
Same root cause as the workspace/molecule_ai_status.py docstring fix
in this PR: this doc claimed `molecule-monorepo-status` was a usable
shell alias and `from molecule_ai_status import set_status` was a
usable Python import. Both worked under the pre-#87 monolithic-template
layout (where workspace/Dockerfile created the symlink and COPY'd the
modules into /app/) but neither works in current standalone template
images that install the runtime as a wheel:

- `which molecule-monorepo-status` errors — only `a2a-db` and
  `molecule-runtime` are registered console scripts.
- `from molecule_ai_status` raises ImportError — modules are under the
  `molecule_runtime` package now.

Switched both examples to the canonical `python3 -m
molecule_runtime.molecule_ai_status` form (CLI) and `from
molecule_runtime.molecule_ai_status import set_status` (Python). Same
form the runtime ships in its own usage banner, so anyone discovering
this doc gets a runnable example.
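
Both canonical forms, verbatim from the updated examples:

  # CLI form (works wherever the wheel is installed):
  #   python3 -m molecule_runtime.molecule_ai_status
  # Python form:
  from molecule_runtime.molecule_ai_status import set_status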

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 12:27:50 -07:00
Hongming Wang
f7ad5a82f7 fix(canvas): release sendInFlightRef in the activity-log WS path too
Third-pass review caught a fourth WS path I missed. The original fix +
the stale-callback follow-up patched 3 sites that release the in-flight
guards (pendingAgentMsgs effect, HTTP .then() success, HTTP .catch()
success), but the ACTIVITY_LOGGED handler at lines 410-419 also clears
`sending` + `sendingFromAPIRef` when the platform logs the workspace's
a2a_receive ok/error. It only cleared 2 of the 3 refs — same exact
bug class as the original. If THIS path wins the race (a2a_receive
activity logged before pendingAgentMsgs delivers the reply text),
sendInFlightRef stays stuck true and the next sendMessage() silently
no-ops at line 464.

Fix: route both branches (ok and error) through releaseSendGuards()
so all four sites are now uniform.

Updated the helper's docstring to explicitly list all four sites and
warn that any future "I saw the reply" path that only clears the
natural pair (sending + sendingFromAPIRef) will silently re-introduce
the freeze. The disabled-button logic can't see sendInFlightRef, so
without the helper the visible state would diverge from the
synchronous re-entry guard.

This is exactly the drift `releaseSendGuards()` was supposed to
prevent — the helper landed in the prior commit but the activity-log
site wasn't migrated to use it. Fixing now closes the gap.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 12:27:29 -07:00
Hongming Wang
9c3695df6d test(runtime): update molecule_ai_status test for renamed error prefix
Pre-existing test_set_status_exception_prints_to_stderr asserted on the
legacy "molecule-monorepo-status: failed to update" prefix string. The
prior commit renamed it to "molecule_ai_status: failed to update" so
the printed label matches the canonical module-form invocation
(`python3 -m molecule_runtime.molecule_ai_status`) instead of a shell
alias that only ever existed in the dev-only base image. Updating the
expected substring in lockstep.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 11:48:05 -07:00
Hongming Wang
cacf499354 fix(canvas): close stale-callback race + extract releaseSendGuards helper
Self-review on PR #2185 surfaced a latent race the original fix exposed:
the WS-clears-guards path now releases sendInFlightRef immediately, which
means a user can fire msg #2 between WS-arrival and HTTP-arrival for
msg #1. Without coordination, msg #1's late .then() sees
sendingFromAPIRef=true (set by msg #2's send), enters the main body,
and runs setSending(false) + appendMessageDeduped against msg #1's
response body — clobbering msg #2's in-flight UI state.

This race is realistic for claude-code SDK: the comment at line 294-298
already calls WS the "authoritative reply arrived" signal, and the user
typically reads-then-types before the trailing HTTP completes. Without
the original Send-button freeze "protecting" the race, it surfaces.

Two changes:

1. Token-keyed callbacks. sendTokenRef bumps on every sendMessage
   entry; .then()/.catch() capture the token in closure and bail
   without touching any flags if a newer send has superseded them.
   The newer send owns the in-flight guards.

2. releaseSendGuards() helper. The three guard clears
   (setSending, sendingFromAPIRef, sendInFlightRef) now live in one
   useCallback so the WS handler, .then() success, and .catch()
   success can't drift apart. A future contributor dropping one of
   the three would silently re-introduce either the post-WS Send
   freeze or the stale-callback clobber.

Skipped a unit test for this regression — ChatTab has no __tests__
file and a mount test would need WS + zustand + api mocks. The fix
is 4 logical lines (token capture + 2 guard checks) and the manual
test covers it. Follow-up to add a focused mount test when ChatTab
gets its first __tests__ file.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 11:47:12 -07:00
Hongming Wang
28fc7a8cbd fix(runtime): replace remaining /app/ legacy paths in agent prompts + docstrings
Comprehensive sweep follow-up to the MCP server path fix. Audited every
/app/ reference in the runtime source against the live claude-code
template image and confirmed the actual /app/ contents post-#87 are
ONLY: __init__.py, adapter.py, claude_sdk_executor.py, requirements.txt
— every other workspace module ships in the wheel under
site-packages/molecule_runtime/. Two more leaks found:

1. executor_helpers.py:_A2A_INSTRUCTIONS_CLI — inter-agent system prompt
   for non-MCP runtimes (Ollama, custom) had 5 lines telling the model
   `python3 /app/a2a_cli.py X`. Models copy these examples verbatim, so
   every CLI-runtime delegation would fail at the shell layer (no such
   file). Replaced with `python3 -m molecule_runtime.a2a_cli` form,
   which works regardless of where the wheel is installed.

2. molecule_ai_status.py docstring — usage examples invoked
   `python3 /app/molecule_ai_status.py` and claimed a
   `molecule-monorepo-status` shell alias. Both broken in current
   templates: the file's at site-packages, and `which
   molecule-monorepo-status` errors (the legacy symlink only existed
   in the dev-only workspace/Dockerfile base image, not in the
   standalone template Dockerfiles that ship to production).
   Updated docstring + the __main__ usage banner + the stderr error
   prefix to use the same `python3 -m molecule_runtime.X` form.

Plugins audited and clean: WORKSPACE_PLUGINS_DIR=/configs/plugins,
SHARED_PLUGINS_DIR=$PLUGINS_DIR fallback /plugins. No /app/
assumptions.

Regression test: `test_a2a_cli_instructions_use_module_invocation_not_legacy_app_path`
asserts the legacy /app/a2a_cli.py path can't drift back into the CLI
system prompt and that the canonical module form is present.
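
The test's shape, roughly (import path assumed; the constant name
_A2A_INSTRUCTIONS_CLI is from this commit):

  from molecule_runtime.executor_helpers import _A2A_INSTRUCTIONS_CLI

  def test_a2a_cli_instructions_use_module_invocation_not_legacy_app_path():
      # The legacy path must not drift back into the CLI system prompt...
      assert "/app/a2a_cli.py" not in _A2A_INSTRUCTIONS_CLI
      # ...and the canonical module form must be present.
      assert "python3 -m molecule_runtime.a2a_cli" in _A2A_INSTRUCTIONS_CLI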

The legacy workspace/Dockerfile + workspace/entrypoint.sh + workspace/scripts/
still contain /app/-shaped paths but are dev-only base-image scaffolding
(per workspace/build-all.sh's own header comment) — not shipped to the
standalone template images. Out of scope here; can be cleaned up in a
separate dead-code pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 11:22:00 -07:00
Hongming Wang
203a4f0f91 fix(runtime): resolve a2a_mcp_server.py path from wheel install location
DEFAULT_MCP_SERVER_PATH was hardcoded to /app/a2a_mcp_server.py, which
was correct under the pre-#87 monolithic-template Docker layout where
the workspace/ tree was COPY'd into /app/. After the universal-runtime
refactor (#87, #117), workspace modules ship inside the
molecule-ai-workspace-runtime wheel under
site-packages/molecule_runtime/, while /app/ now holds only
template-specific files (adapter.py + the runtime-native executor for
that template).

Net effect: in every workspace built since the wheel cutover, Claude
Code SDK's mcp_servers={"a2a": {"command": python, "args":
["/app/a2a_mcp_server.py"]}} pointed at a missing file. The subprocess
launch failed silently, the SDK registered zero MCP tools, and the
agent's list_peers / delegate_task / a2a_send_message / a2a_send_signal
all disappeared. Symptom observed today: Design Director said
"I tried to reach the perf auditor via the inter-agent MCP tools
(list_peers, delegate_task) but those tools didn't resolve in this
environment" and fell back to running the audit itself with WebFetch.

Why this slipped through E2E: the priority-runtimes harness sends a
single message and verifies a reply — it does not exercise inter-agent
delegation, so the missing MCP tools are invisible at that layer.

Fix: resolve the path relative to executor_helpers.py via __file__,
which tracks wherever the wheel is installed (site-packages today,
anywhere else tomorrow). The A2A_MCP_SERVER_PATH env override is
preserved for tests / non-default layouts.
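
One way that resolution can look (sketch; the env override name and the
__file__-relative idea are from this commit, the exact expression is
illustrative):

  import os

  DEFAULT_MCP_SERVER_PATH = os.environ.get(
      "A2A_MCP_SERVER_PATH",
      os.path.join(os.path.dirname(os.path.abspath(__file__)),
                   "a2a_mcp_server.py"),
  )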

Regression test: assert os.path.exists(DEFAULT_MCP_SERVER_PATH) so
any future move of a2a_mcp_server.py out of the package directory
fails at unit-test time instead of silently disabling delegation in
production.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 11:15:06 -07:00
Hongming Wang
5faaf58466 fix(canvas): clear sendInFlightRef on WS-push reply path
Send button + Enter both silently no-op'd after the first agent reply
on runtimes that deliver via WebSocket (claude-code SDK does this per
the comment at ChatTab.tsx:294-298). The visible disabled-state checks
(sending, uploading, agentReachable) were all clean — the freeze came
from a third synchronous reentry guard the button can't see:

  if (sendInFlightRef.current) return;     // ChatTab.tsx:438

The ref was set true at the start of sendMessage() and only cleared in
.then() / .catch() of the HTTP fall-through and the upload-failure
branch. The WS-push handler in the pendingAgentMsgs effect cleared
`sending` and `sendingFromAPIRef` but left `sendInFlightRef` stuck
true. The HTTP .then() then early-returned at the dedup check (line
513) without touching the ref — only the .catch() early-return path
did. Net result: refresh fixed it because the ref reset on remount.

Two-line fix:
  - WS handler: also clear sendInFlightRef when the push delivers the
    reply (primary fix; no race window where the ref is stuck while
    the user can already type)
  - .then() early-return: mirror .catch()'s cleanup as defense in
    depth, so neither delivery order leaks the ref

While here: A2AEdge.test.tsx fixture was typed `as never` to dodge
EdgeProps' discriminated-union complaint, which broke spreading at
the call sites with TS2698 ("Spread types may only be created from
object types"). Replaced with `as unknown as ComponentProps<typeof
A2AEdge>` — preserves the original "skip restating every optional
field" intent and keeps a spreadable type.

All 10 A2AEdge tests pass; tsc --noEmit is clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 11:11:58 -07:00
Hongming Wang
9532890f04
Merge pull request #2184 from Molecule-AI/fix/jsonrpc-routes-rpc-url
fix: pass rpc_url='/' to create_jsonrpc_routes (a2a-sdk 1.x)
2026-04-27 16:45:48 +00:00
Hongming Wang
dd57a840b6 fix: comprehensive a2a-sdk 1.x migration sweep across workspace/
Audited every a2a-sdk surface in workspace/ against the installed
1.0.2 wheel. Found and fixed:

main.py (the live workspace startup path):
  • create_jsonrpc_routes(rpc_url='/', enable_v0_3_compat=True) —
    rpc_url required in 1.x; v0.3 compat enables inbound legacy
    clients (`"role": "user"` lowercase) without forcing them to
    upgrade. Pairs with the outbound rename below.

a2a_executor.py:
  • TextPart/FilePart/FileWithUri removed in 1.x. Part is now a
    flat proto message: Part(text=…) / Part(url=…, filename=…,
    media_type=…). Updated the file-attachment branch (only
    reachable when an agent emits files; the harness's PONG path
    didn't exercise this, but it's a latent crash).
  • Message field names: messageId/taskId/contextId →
    message_id/task_id/context_id (proto3 snake_case).
  • Role enum: Role.agent → Role.ROLE_AGENT (proto enum).

Outbound JSON-RPC payloads (8 files):
  • "role": "user" → "role": "ROLE_USER" — proto3 JSON serialization
    is strict about enum values. Sites: a2a_client, a2a_cli, main
    (initial+idle prompts), heartbeat, builtin_tools/a2a_tools,
    builtin_tools/delegation. Wire JSON keys stay camelCase
    (proto3 default), only the role enum value changed.

google-adk/adapter.py:
  • new_agent_text_message → new_text_message (4 sites). This
    adapter's directory has a hyphen, so it can't be imported as a
    Python module — effectively dead code, but the wheel ships the
    file and a future fix should keep it correct against 1.x.

Why one PR instead of seven: every previous a2a-sdk migration find
landed as its own publish → cascade → harness → next-bug cycle.
Today's audit ran every a2a-sdk symbol/type/method in workspace/
against the installed 1.0.2 wheel in a single sweep + tested the
critical paths (Message construction, Part construction, Role enum
parsing) against the actual SDK. Should be the last migration PR.

Verified locally:
  python3 scripts/build_runtime_package.py --version 0.1.99 \
      --out /tmp/build-final
  pip install /tmp/build-final
  python -c "import molecule_runtime.main; \
             from molecule_runtime.a2a_executor import LangGraphA2AExecutor"
  → ✓ all imports clean against a2a-sdk 1.0.2

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 09:42:57 -07:00
Hongming Wang
c80b3ff0eb fix: pass rpc_url='/' to create_jsonrpc_routes (a2a-sdk 1.x requirement)
7th a2a-sdk migration find from the v0 → v1 transition.
create_jsonrpc_routes() now requires rpc_url as a positional arg
(was implicit at root in 0.x). Pass '/' to match
a2a.utils.constants.DEFAULT_RPC_URL — that's also what
workspace-server's a2a_proxy.go forwards to (POSTs to workspace URL
without appending a path).

Symptom before fix: every workspace startup crashed with
  TypeError: create_jsonrpc_routes() missing 1 required positional
  argument: 'rpc_url'

Caught by harness 9 phase 4 (claude-code + langgraph both on
0.1.24). The user's "use langgraph for fast iteration" call cut
the diagnose cycle from 15min to ~30s — without that, this would
have taken another hermes round-trip to surface.

Updated reference_a2a_sdk_v0_to_v1_migration.md memory with this
entry alongside the previous 6 finds.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 09:33:23 -07:00
Hongming Wang
d3d57eb3a7
Merge pull request #2183 from Molecule-AI/fix/default-request-handler-agent-card
fix: pass agent_card to DefaultRequestHandler (a2a-sdk 1.x)
2026-04-27 16:06:36 +00:00
Hongming Wang
6859099a08 fix: pass agent_card to DefaultRequestHandler (a2a-sdk 1.x requirement)
a2a-sdk 1.x added agent_card as a required argument to
DefaultRequestHandler.__init__. main.py constructed it with only
agent_executor + task_store, so every workspace startup that reached
the handler init step crashed with:

  TypeError: DefaultRequestHandlerV2.__init__() missing 1 required
  positional argument: 'agent_card'

This is the 6th a2a-sdk migration find from the v0 → v1 transition
(see reference_a2a_sdk_v0_to_v1_migration memory). Pattern is the
same: SDK exposes a new required arg, our call site needs to pass
the existing object we already construct upstream.

Why the import-only smoke gates didn't catch this: it's a call-time
constructor error inside `async def main()`, not a module load
error. The runtime-pin-compat smoke imports main_sync but doesn't
invoke main() against a real config. Worth filing a follow-up to
extend the smoke to a "construct + dispose" cycle.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 08:53:47 -07:00
Hongming Wang
5920fc856d
Merge pull request #2182 from Molecule-AI/ci/agentcard-smoke-followup-2179
fix(workspace): rename supported_protocols → supported_interfaces (CRITICAL — every boot crashes)
2026-04-27 14:58:28 +00:00
Hongming Wang
851fd21fb1 fix(workspace): rename supported_protocols → supported_interfaces (a2a-sdk 1.0)
CRITICAL: every workspace boot since the a2a-sdk 1.0 migration (#1974)
has been crashing at AgentCard construction with:
  ValueError: Protocol message AgentCard has no "supported_protocols" field

The protobuf field is `supported_interfaces` (plural, interfaces — see
a2a-sdk types/a2a_pb2.pyi:189). The 0.3→1.0 migration left the kwarg
as `supported_protocols`, which doesn't exist in the 1.0 schema, so
the constructor raises before any subsequent line of main runs.

Why this hid for so long:
  - publish-runtime.yml's smoke step only IMPORTED molecule_runtime.main;
    importing the module is fine, only CONSTRUCTING the AgentCard fails
  - The user-visible symptom is "Workspace failed: " with empty
    last_sample_error, indistinguishable from generic boot timeouts
  - The state_transition_history=True bug (fixed in #2179) was a
    sibling of this — same migration, same class, just caught first

Fix is symmetric with #2179:
  1. workspace/main.py: rename the kwarg + comment explaining why
  2. .github/workflows/publish-runtime.yml: extend the smoke block to
     instantiate AgentCard with the exact production call shape, so
     the next field-rename of this class fails at publish time
     instead of breaking every workspace startup

Verification:
  - Constructed AgentCard against fresh a2a-sdk 1.0.2 in a clean
    venv with the corrected kwarg → succeeds
  - Constructed it with the original `supported_protocols` kwarg →
    fails immediately with the exact error production sees
  - Smoke test pinned to mirror main.py's exact call shape; main.py
    + smoke must stay in lockstep going forward

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 07:54:23 -07:00
Hongming Wang
2a39061635
Merge pull request #2181 from Molecule-AI/fix/cascade-pypi-wait-and-paths-filter
fix(publish-runtime): wait for PyPI propagation + expand path filter
2026-04-27 14:48:03 +00:00
Hongming Wang
1a703f5687 fix(publish-runtime): wait for PyPI propagation + expand path filter
Two structural fixes for the cascade race conditions that bit us
five times today:

1. **PyPI propagation wait** (cascade job): poll PyPI for the
   just-published version with a 60s budget BEFORE firing
   repository_dispatch. PyPI accepts the upload but takes a few
   seconds to make it available via the package index. Cascade was
   firing too fast — downstream template builds ran `pip install`
   against a stale index, resolved to the previous version, and
   docker layer cache locked that in for subsequent rebuilds.
   Pairs with the build-arg cache invalidation in the molecule-ci PR
   (separate change): wait without invalidation still lets a rebuild
   reuse the stale cached pip-install layer, and invalidation without
   wait still lets the first cascade build race PyPI propagation.
   Together: no race, no stale cache (a sketch of the wait follows
   this list).

2. **Path filter expansion**: scripts/build_runtime_package.py is
   the build script and changes to it (e.g. import-rewrite fixes,
   manifest emit, lib/ subpackage move) directly affect what ships
   in the wheel. Was missing from the path filter, so PRs touching
   only scripts/ (like #2174's lib/ fix) didn't auto-publish — the
   operator had to remember a manual dispatch. Add it to the closed
   list of files that trigger auto-publish.
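
Item 1's wait is conceptually a bounded poll of the index. A rough
illustration in Python (the real step lives in the workflow; the JSON
endpoint and the package name are assumptions here):

  import sys, time, urllib.error, urllib.request

  def wait_for_pypi(pkg: str, version: str, budget_s: int = 60) -> bool:
      url = f"https://pypi.org/pypi/{pkg}/{version}/json"
      deadline = time.monotonic() + budget_s
      while time.monotonic() < deadline:
          try:
              with urllib.request.urlopen(url, timeout=10):
                  return True              # version visible on the index
          except (urllib.error.HTTPError, urllib.error.URLError):
              time.sleep(5)                # not propagated yet; keep polling
      return False

  if __name__ == "__main__":
      sys.exit(0 if wait_for_pypi("molecule-ai-workspace-runtime", sys.argv[1]) else 1)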

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 07:42:37 -07:00
Hongming Wang
2f5ea7a537
Merge pull request #2180 from Molecule-AI/harness/diagnostic-burst-step2-cp-285
test(e2e): diagnostic burst on step-2 provisioning failure (CP #285)
2026-04-27 14:27:15 +00:00
Hongming Wang
3c345f5674 test(e2e): diagnostic burst on step-2 provisioning failure (CP #285)
Closes the molecule-core-side ask of controlplane #285. CP #289 already
landed migration 022 + the handler change exposing `last_error` in
/cp/admin/orgs responses. This makes the canary harness actually USE
that field — pre-fix the harness exited with just "Tenant provisioning
failed for <slug>" and forced operators to scrape CP server logs to
learn WHY.

The diagnostic burst dumps the matched org row from the LIST_JSON
already in scope (no extra HTTP call), pretty-printed and prefixed,
right before `fail`. Mirrors the TLS-readiness burst pattern from
PR #2107 at step 4. Includes a not-found fallback for DB-drift cases.

No redaction needed — adminOrgSummary is already ops-safe (id, slug,
name, plan, member_count, instance_status, last_error, timestamps;
no tokens, no encrypted fields).

Verification: smoke-tested both branches (org found with last_error +
slug-not-found fallback) with synthetic JSON; bash syntax OK; the only
shellcheck warning is pre-existing on line 93.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 07:22:12 -07:00
Hongming Wang
11e149f05c
Merge pull request #2179 from Molecule-AI/fix/agent-capabilities-state-transition-history
fix: drop state_transition_history (removed in a2a-sdk 1.x)
2026-04-27 14:22:09 +00:00
Hongming Wang
12d446bc8e docs: explain why state_transition_history is gone (research-backed)
Adds a comment block citing a2a-sdk's own
a2a/compat/v0_3/conversions.py, which says verbatim:

  state_transition_history=None,  # No longer supported in v1.0

So a future reader who notices the missing kwarg won't try to add it
back. The capability is now universal: every v1.x Task carries a
history list and tasks/get supports historyLength via the
apply_history_length helper. No flag because nothing's optional.

Confirmed by reading the SDK source directly:
- a2a/types.py AgentCapabilities exposes only: streaming,
  push_notifications, extensions, extended_agent_card.
- a2a/compat/v0_3/conversions.py explicitly maps None when
  down-converting v1 → v0.3 (deliberate removal, not rename).
- a2a/server/request_handlers/default_request_handler_v2.py uses
  apply_history_length(task, params) — agent doesn't opt in.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 07:20:05 -07:00
Hongming Wang
f531fe1367 fix: drop state_transition_history field — removed in a2a-sdk 1.x
a2a-sdk 1.x's AgentCapabilities only exposes 4 fields:
streaming, push_notifications, extensions, extended_agent_card.
The state_transition_history field was removed in the v1 protobuf
schema. main.py still passed it as a kwarg, so every workspace
that reached the AgentCard construction step (line 188) crashed:

  ValueError: Protocol message AgentCapabilities has no
  "state_transition_history" field

Symptom: every claude-code + hermes workspace stuck in `provisioning`
forever — caught when the user provisioned a Design Director crew
manually via the canvas while harness 5 was running.

Why every prior smoke gate missed it:
- runtime-pin-compat.yml smokes `from molecule_runtime.main import
  main_sync` — only imports the module. AgentCapabilities() runs
  inside `async def main()`, not at module load.
- Template image boot smoke does `import every /app/*.py` — same
  story. main.py imports fine; the field error only fires at call.

The fix is one line — drop the kwarg. Fields we actually need
(streaming + push_notifications) are still passed.

Follow-up worth filing: smoke step that instantiates Adapter() +
calls a no-op setup() against a stub config. That would have
caught this before publish.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 07:16:16 -07:00
Hongming Wang
3d617ec421
Merge pull request #2178 from Molecule-AI/deps/go-redis-9.7.3-ghsa-92cp-5422-2mw7
deps(redis): bump go-redis/v9 v9.7.0 → v9.7.3 (GHSA-92cp-5422-2mw7, low)
2026-04-27 14:00:37 +00:00
Hongming Wang
7acdd21c88
Merge pull request #2177 from Molecule-AI/docs/pr-merge-safety-guards
docs: document the two PR auto-merge safety guards
2026-04-27 13:55:26 +00:00
Hongming Wang
fa5e0f5e4c deps(redis): bump go-redis/v9 v9.7.0 → v9.7.3 (GHSA-92cp-5422-2mw7)
Closes the LOW-severity dependabot alert on workspace-server's go-redis
pin. Upstream advisory GHSA-92cp-5422-2mw7: "go-redis allows potential
out-of-order responses when CLIENT SETINFO times out" — fixed in 9.7.3.

Patch bump within the v9.7 line; semver guarantees no API change.
Full workspace-server test suite passes (18/18 packages clean).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 06:54:13 -07:00
Hongming Wang
6589929f87 docs: document the two PR auto-merge safety guards
Adds a section to CONTRIBUTING.md → "Pull Requests" explaining the two
system-level guards that protect against the "I enabled auto-merge then
pushed more commits" race:

1. Repo-wide setting: "Automatically delete head branches" (catches
   pushes to a merged-and-deleted branch — the post-merge orphan case).
2. CI workflow `pr-guards` calling molecule-ci's
   disable-auto-merge-on-push (catches pushes during queue
   processing — disables auto-merge, posts a comment, requires
   explicit re-engage).

Why doc-not-just-memory: my agent-side memory is local. Other
contributors on other machines need this in the repo where they
read it. Cites the 2026-04-27 PR #2174 incident with the
specific commit SHAs that got orphaned.

Companion: molecule-ci README updated separately to document the
reusable workflow under "What each workflow validates" so devs
who land in the molecule-ci repo first can find the contract.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 06:45:55 -07:00
Hongming Wang
b96f99da0f
Merge pull request #2175 from Molecule-AI/deps/docker-v28.5.2-ghsa-x4rx-4gw3-53p4
deps(docker): bump docker/docker v28.2.2 → v28.5.2 (GHSA-x4rx-4gw3-53p4, medium)
2026-04-27 13:42:29 +00:00
Hongming Wang
182de6f2b3
Merge pull request #2176 from Molecule-AI/feat/pr-guards-caller
ci: add pr-guards caller (disable auto-merge on push)
2026-04-27 13:42:17 +00:00
Hongming Wang
82b366fce5 ci: add pr-guards caller that disables auto-merge on push
Thin caller for molecule-ci's reusable disable-auto-merge-on-push
workflow. Forces operator re-engagement when a commit is pushed to
an open PR with auto-merge already enabled.

Pairs with the org-wide "Automatically delete head branches" repo
setting (also enabled today). Defense in depth:

1. Repo setting blocks pushes to a merged-and-deleted branch
   (post-merge orphan case — what bit #2174 today: my second
   commit landed on an already-merged-and-deleted branch).
2. This workflow catches in-queue races (push lands while the
   merge queue is processing) by disabling auto-merge so the
   operator must explicitly re-engage.

Together they cover the full lifecycle of "auto-merge enabled →
new commits arrive" without relying on operator discipline.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 06:39:31 -07:00
Hongming Wang
394dda2a4a deps(docker): bump docker/docker v28.2.2 → v28.5.2 (GHSA-x4rx-4gw3-53p4)
Closes the medium-severity dependabot alert #7 on workspace-server's
docker pin: "Moby firewalld reload makes published container ports
accessible from remote hosts" — fixed in v28.3.3, pulling v28.5.2
(latest in the v28 line).

Patch+minor bump within the v28 train; no client-API breaks
(workspace-server only uses docker.Client for container exec /
inspect, all stable since v20+).

Verification: full workspace-server test suite passes (18/18 packages
clean). Build clean.

Out of scope:
  - Alerts #10 and #11 (the AuthZ bypass + plugin-priv off-by-one)
    require v29.3.1, which is not yet published to the Go module
    proxy (latest published is v28.5.2). They'll close in a follow-up
    PR once v29 lands as a Go module.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 06:26:53 -07:00
Hongming Wang
fc60b4bc5e fix(canvas): regenerate lockfile under Node 20 for npm ci compatibility
The first commit on this branch left the lockfile inconsistent for
Node 20's npm 10:

  npm error `npm ci` can only install packages when your package.json
  and package-lock.json are in sync. Please update your lock file...
  npm error Missing: @emnapi/runtime@1.10.0 from lock file
  npm error Missing: @emnapi/core@1.10.0 from lock file

Root cause: my local install ran on Node 24 / npm 11, which doesn't
write peer-optional transitive entries (@img/sharp-* declares
@emnapi/runtime as peerOptional). The Canvas tabs E2E job uses Node 20
/ npm 10, which DOES expect those entries and rejected the lockfile
with EUSAGE.

Regenerated the lockfile under Node 20.19.4 (matches the lowest CI
node version, lockfile is forward-compatible with 22 and 24). 6 new
@emnapi/* entries added; postcss stays at 8.5.12 (the original goal
of this branch).

Verification:
  - `nvm use 20 && npm ci` clean
  - 1148/1148 vitest pass under Node 20

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 06:24:48 -07:00
Hongming Wang
a354ae2feb
Merge pull request #2174 from Molecule-AI/fix/lib-subpackage-and-drift-gate
fix(build): ship lib/ subpackage + extend drift gate to SUBPACKAGES
2026-04-27 13:07:00 +00:00
Hongming Wang
6e732ab714 fix(build): ship lib/ subpackage + extend drift gate to SUBPACKAGES
Two compounding bugs that bit hermes (and any other workspace that
reaches main.py:142):

1. workspace/lib/ was in EXCLUDE_DIRS so the published wheel didn't
   contain the directory at all. main.py imports `from lib.pre_stop
   import read_snapshot` (and `build_snapshot`, `write_snapshot`) so
   every workspace startup that reaches the snapshot path crashed
   with `ModuleNotFoundError: No module named 'lib'`.

2. Even if lib/ had shipped, `lib` wasn't in SUBPACKAGES so the
   import-rewriter would have left the bare `from lib.pre_stop`
   unqualified — it would still fail because the package would only
   be reachable as `molecule_runtime.lib`.

Fix: move `lib` from EXCLUDE_DIRS to SUBPACKAGES (one entry each).

Drift gate extension: the existing gate I added in #2163 only
asserted TOP_LEVEL_MODULES against workspace/*.py. This change adds
the symmetric assertion for SUBPACKAGES against workspace/<dir>/
(filtered by EXCLUDE_DIRS + presence of __init__.py). Catches:
- Subpackage added to workspace/ but missed in SUBPACKAGES
- Subpackage missing from workspace/ but lingering in SUBPACKAGES
- Subpackage wrongly in EXCLUDE_DIRS while also referenced by
  rewritten imports (the lib case)
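
The symmetric assertion is small; its shape, roughly (constant names are the
build script's as described here, the walk itself is illustrative):

  from pathlib import Path

  def assert_subpackages_in_sync(workspace_dir, SUBPACKAGES, EXCLUDE_DIRS):
      on_disk = {
          d.name for d in Path(workspace_dir).iterdir()
          if d.is_dir() and d.name not in EXCLUDE_DIRS
          and (d / "__init__.py").exists()
      }
      missing = on_disk - set(SUBPACKAGES)   # present in workspace/, not shipped
      stale = set(SUBPACKAGES) - on_disk     # listed, but gone from workspace/
      assert not missing, f"add to SUBPACKAGES: {sorted(missing)}"
      assert not stale, f"remove from SUBPACKAGES: {sorted(stale)}"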

Tested locally: build of 0.1.99 now ships lib/ and main.py contains
`from molecule_runtime.lib.pre_stop import ...` correctly rewritten.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 06:03:46 -07:00
Hongming Wang
1100c50da8
Merge pull request #2172 from Molecule-AI/feat/e2e-cover-all-8-runtimes
feat(e2e): extend priority-runtimes test to cover all 8 templates
2026-04-27 13:00:43 +00:00
Hongming Wang
6365e94213 deps(canvas): bump postcss 8.5.9 → 8.5.12 (GHSA-qx2v-qp2m-jg93)
Closes the medium-severity dependabot alert on canvas/package-lock.json.
Upstream advisory GHSA-qx2v-qp2m-jg93: "PostCSS has XSS via Unescaped
</style> in its CSS Stringify Output" — fixed in 8.5.10. We pull
8.5.12 since it's already published in the ^8.5.10 line.

package.json's caret range bumps from ^8.4.0 to ^8.5.12 — the raised floor
prevents a future install from re-pinning below the safe version. The
8.x major-line constraint is preserved, so no breaking-change risk.

Verification: full canvas vitest suite passes (1148/1148 across
78 files).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 05:59:02 -07:00
Hongming Wang
c7478af99f feat(e2e): extend priority-runtimes test to cover all 8 templates
Tonight's wire-real E2E sweep exposed 12+ root causes across the post-
#87 template extraction. Most would have been caught by an actual
provision-and-online test running on each template — but the test only
covered claude-code + hermes. Extending it to cover all 8 ensures any
future regression in any template fails the test, not production.

What's added:
- run_openai_runtime(runtime, label): generic provisioner for the 5
  OpenAI-backed templates (langgraph, crewai, autogen, deepagents,
  openclaw). Same shape as run_hermes minus the HERMES_* config block
  that hermes-agent needs.
- run_gemini_cli: separate function — gemini-cli wants a Google AI
  key (E2E_GEMINI_API_KEY), not OpenAI.
- Each new runtime registered in the dispatch loop. New `all` keyword
  for E2E_RUNTIMES runs every covered runtime.

claude-code + hermes keep their dedicated functions; both have unique
provisioning quirks (claude-code OAuth + claude-code-specific volume
mounts; hermes 15-min cold-boot) that don't generalize cleanly.

Skip-if-no-key pattern matches the existing one — partially-keyed CI
gets clean skips, not false-fails.

Usage:
  E2E_OPENAI_API_KEY=... E2E_RUNTIMES=langgraph     ./test_priority_runtimes_e2e.sh
  E2E_OPENAI_API_KEY=... E2E_RUNTIMES=all           ./test_priority_runtimes_e2e.sh

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 05:57:59 -07:00
Hongming Wang
1a2ddb4539
Merge pull request #2171 from Molecule-AI/deps/jwt-go-v5.2.2-cve-2025-30204
deps(jwt): bump golang-jwt/jwt/v5 v5.2.1 → v5.2.2 (CVE-2025-30204, HIGH)
2026-04-27 12:44:54 +00:00