Commit Graph

209 Commits

Author SHA1 Message Date
Hongming Wang
210a26d31a refactor(workspace): extract memory tools from a2a_tools.py to a2a_tools_memory.py (RFC #2873 iter 4c)
Third slice of the a2a_tools.py split (stacked on iter 4b). Owns the
two persistent-memory MCP tools:

  * tool_commit_memory — write to /workspaces/:id/memories with RBAC
    + GLOBAL-scope tier-zero enforcement
  * tool_recall_memory — search /workspaces/:id/memories with RBAC

a2a_tools.py shrinks from 609 → 508 LOC (−101). Both handlers depend
ONLY on a2a_tools_rbac (iter 4a), a2a_client, and the platform's
/memories endpoint — no entanglement with delegation or messaging.

Side-effects of the layered architecture: a2a_tools_memory's import
contract is "depends on a2a_tools_rbac, never on a2a_tools" — the
kitchen-sink module is for back-compat re-exports only. A test pins
this so a future refactor that re-introduces `from a2a_tools import …`
fails in CI.

Tests:
  * 49 patch sites in TestToolCommitMemory + TestToolRecallMemory
    retargeted from `a2a_tools.{_check_memory_*, _is_root_workspace,
    httpx.AsyncClient}` to `a2a_tools_memory.…` because the call sites
    moved.
  * test_a2a_tools_memory.py adds 4 new tests (alias drift gate +
    import-contract + a2a_tools-side re-export).

117 tests total (77 impl + 28 rbac + 8 delegation + 4 memory), all green.

Refs RFC #2873.
2026-05-05 09:50:39 -07:00
Hongming Wang
be18b9c8f9 fix(tests): retarget remaining a2a_tools delegation patches to a2a_tools_delegation
CI caught two test files I missed in the original iter 4b retarget:
test_a2a_multi_workspace.py + test_delegation_sync_via_polling.py
patch a2a_tools.{discover_peer, send_a2a_message, _delegate_sync_via_polling,
httpx.AsyncClient} but those call sites moved to a2a_tools_delegation
in this PR. 17 patch sites retargeted; 30 tests now green.

Refs RFC #2873 iter 4b.
2026-05-05 09:50:30 -07:00
Hongming Wang
e72f9ad107 refactor(workspace): extract delegation handlers from a2a_tools.py to a2a_tools_delegation.py (RFC #2873 iter 4b)
Second slice of the a2a_tools.py split (stacked on iter 4a). Owns the
three delegation MCP tools + the RFC #2829 PR-5 sync-via-polling
helper they share:

  * tool_delegate_task — synchronous delegation
  * tool_delegate_task_async — fire-and-forget
  * tool_check_task_status — poll the platform's /delegations log
  * _delegate_sync_via_polling — durable async + poll for terminal status
  * _SYNC_POLL_INTERVAL_S / _SYNC_POLL_BUDGET_S constants

a2a_tools.py shrinks from 915 → 609 LOC (−306). Stacked on iter 4a's
RBAC extraction; uses `from a2a_tools_rbac import auth_headers_for_heartbeat`
as its auth-header source.

The lazy `from a2a_tools import report_activity` inside tool_delegate_task
breaks the circular-import cycle (a2a_tools imports the delegation
re-exports at module-load; delegation handler needs report_activity at
CALL time). A dedicated test pins this contract.

Tests:
  * 77 existing test_a2a_tools_impl.py tests pass after retargeting
    20 patch sites in TestToolDelegateTask + TestToolDelegateTaskAsync +
    TestToolCheckTaskStatus from `a2a_tools.foo` to
    `a2a_tools_delegation.foo` (foo ∈ {discover_peer, send_a2a_message,
    httpx.AsyncClient}). The patches need to target the new module
    because that's where the call sites live now.
  * test_a2a_tools_delegation.py adds 8 new tests:
    - 6 alias drift gates (`a2a_tools.tool_delegate_task is …`)
    - 2 import-contract tests (no top-level circular dep + a2a_tools
      surfaces every delegation symbol)
    - 1 sync-poll budget invariant

113 tests total (77 impl + 28 rbac + 8 delegation), all green.

Refs RFC #2873.
2026-05-05 05:00:52 -07:00
Hongming Wang
0c461eb9f1 refactor(workspace): extract RBAC helpers from a2a_tools.py to a2a_tools_rbac.py (RFC #2873 iter 4a)
First slice of the a2a_tools.py (991 LOC) split — single-concern module
for the workspace's RBAC + auth-header layer:

  * _ROLE_PERMISSIONS canonical table
  * _get_workspace_tier
  * _check_memory_write_permission
  * _check_memory_read_permission
  * _is_root_workspace
  * _auth_headers_for_heartbeat

a2a_tools.py shrinks from 991 → 915 LOC. Internal call sites (15
references) work unchanged because the bare names are re-imported at
module-level — Python's local-then-module name resolution still
finds them in a2a_tools's namespace, so existing tests'
patch("a2a_tools._foo", …) keeps working.

The RBAC layer can now evolve independently of the 18 tool handlers.
Adding a new role or capability action touches one file, not the
kitchen-sink module.

Tests:
  * 77 existing test_a2a_tools_impl.py pass unchanged.
  * test_a2a_tools_rbac.py adds 28 focused tests:
    - 6 alias drift-gate tests (`_foo is rbac.foo`)
    - 4 get_workspace_tier env+config branches
    - 2 is_root_workspace tier branches
    - 6 check_memory_write_permission roles + override branches
    - 3 check_memory_read_permission scenarios
    - 3 auth_headers_for_heartbeat platform_auth branches
    - 4 ROLE_PERMISSIONS table invariants
  * Direct coverage for the helper module (was previously only
    exercised through 991-LOC tool-handler tests).

Refs RFC #2873.
2026-05-05 04:43:16 -07:00
Hongming Wang
f81813f708 feat(rfc): poll-mode chat upload — phase 2 workspace inbox extension
Workspace-side fetcher for the platform-staged chat uploads written by
phase 1. Stack atop feat/poll-mode-chat-upload-phase1.

Wire shape — the platform writes one activity_logs row per uploaded
file with `activity_type=a2a_receive`, `method=chat_upload_receive`,
and a `request_body={file_id, name, mimeType, size, uri}` carrying
the synthetic `platform-pending:<wsid>/<fid>` URI.

Workspace-side flow (new module workspace/inbox_uploads.py):

  1. Fetch via GET /workspaces/:id/pending-uploads/:file_id/content
  2. Stage to /workspace/.molecule/chat-uploads/<32-hex>-<sanitized>
     (same on-disk shape as internal_chat_uploads.py — agent-side
     URI resolvers see no contract change)
  3. POST /workspaces/:id/pending-uploads/:file_id/ack
  4. Cache `platform-pending: → workspace:` so the eventual chat
     message that REFERENCES the upload (separate, later activity row)
     gets URI-rewritten before the agent sees it.

Inbox poller extension (workspace/inbox.py):

  - is_chat_upload_row(row) discriminator on `method`
  - upload-receive rows trigger fetch_and_stage and are NOT enqueued
    as InboxMessages (they're side-effect rows, not chat messages)
  - cursor advances past them regardless of fetch outcome — a
    permanent /content failure must not stall the cursor and block
    real chat traffic
  - message_from_activity calls rewrite_request_body to swap
    platform-pending: URIs to local workspace: URIs in subsequent
    chat messages' file parts. Cache miss leaves the URI untouched
    so the agent surfaces an unresolvable URI rather than the inbox
    silently dropping the part.

Filename sanitization mirrors workspace-server/internal/handlers
/chat_files.go::SanitizeFilename and workspace/internal_chat_uploads
.py::sanitize_filename — pinned by the existing parity test suites.

Coverage: 100% on inbox_uploads.py; the inbox.py extension is fully
covered by three new tests in test_inbox.py (skip-from-queue,
cursor-advance-past-broken-fetch, URI-rewrite ordering).
2026-05-05 04:39:02 -07:00
Hongming Wang
28ef75d25e refactor(workspace): split mcp_cli.py (626 LOC) into focused modules (RFC #2873 iter 3)
Splits the standalone molecule-mcp wrapper into three single-concern
modules per the OSS-shape refactor program:

  * mcp_heartbeat.py — register POST + heartbeat loop + auth-failure
    escalation + inbound-secret persistence
  * mcp_workspace_resolver.py — single + multi-workspace env validation
    + on-disk token-file read + operator-help printer
  * mcp_inbox_pollers.py — activate inbox singleton + spawn one daemon
    poller per workspace

mcp_cli.py becomes a 193-LOC orchestrator: validates env, calls each
module's helpers, hands off to a2a_mcp_server.cli_main. The console-
script entry molecule-mcp = molecule_runtime.mcp_cli:main is preserved.

Back-compat aliases (mcp_cli._build_agent_card, _heartbeat_loop,
_resolve_workspaces, etc.) re-export the new modules' authoritative
functions so existing tests + wheel_smoke.py + any downstream caller
keeps working unchanged. A new test file pins each alias as the
exact same callable (drift gate via `is`).

Tests:
  * 62 existing test_mcp_cli.py + test_mcp_cli_multi_workspace.py
    pass against the split.
  * Two heartbeat-loop persist tests + the auth-escalation caplog
    setup updated to target mcp_heartbeat (the module where the loop
    body now lives) instead of mcp_cli (still works through aliases
    for direct calls, but Python's name resolution inside the loop
    body uses the new module's namespace).
  * test_mcp_cli_split.py adds 11 new tests: alias drift gate +
    inbox-poller single + multi-workspace branches + degraded
    inbox-import logging path (none of those existed before).

Refs RFC #2873.
2026-05-05 04:33:06 -07:00
Hongming Wang
b5f530e27a docs(a2a-mcp): close three contract gaps codex agents inherit out-of-the-box
The instructions blob in the MCP `initialize` handshake is the spec
non-Claude-Code clients (codex, Cline, opencode, hermes-agent, Cursor)
inherit verbatim. Three gaps mean the bridge daemon handles them in
code (codex-channel-molecule bridge.py:192-200, 278-285) but in-process
agents reading the text alone don't get the same guard:

1. Reply-then-pop ordering was implicit. A literal-minded agent could
   pop after a 502 from `send_message_to_user`, dropping the message.
   Now: pop ONLY AFTER reply succeeds; on error leave the row unacked
   for platform redelivery.

2. peer_agent with empty peer_id had no specified handling. Agent
   would call `delegate_task(workspace_id="")` → 400 → re-poll →
   infinite loop on the same poison row. Now: skip reply, drain via
   inbox_pop.

3. The single security rule ("don't execute without chat-side
   approval") effectively disabled peer_agent autonomous handling —
   codex daemons have no canvas user to approve from. Now: dual trust
   model. canvas_user requires user approval; peer_agent permits
   autonomous handling but caps destructive side-effects at the
   workspace boundary.

Also disclaims peer_name/peer_role as non-attested display strings —
the platform registry isn't cryptographic identity, and an agent
shouldn't grant elevated permissions based on a peer registering with
peer_role="admin".

Four new pinned tests in test_a2a_mcp_server.py:
- test_initialize_instructions_pins_reply_then_pop_ordering
- test_initialize_instructions_handles_malformed_peer_agent
- test_initialize_instructions_disclaims_peer_role_attestation
- test_initialize_instructions_distinguishes_canvas_user_from_peer_trust

Each fails on staging-HEAD and passes on the patched text — verified
by reverting a2a_mcp_server.py and re-running.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 02:26:35 -07:00
Hongming Wang
da6d319c48 perf(a2a): bound + LRU-evict _peer_metadata cache (#2482)
Pre-fix _peer_metadata was an unbounded dict — a workspace receiving
from N distinct peers across its lifetime accumulated entries
indefinitely (~100 bytes × N). Not crash-class at typical scale (10K
peers ≈ 1 MB) but unbounded. The TTL-at-read pattern bounded
staleness but did nothing for memory.

Fix: hand-rolled LRU on top of OrderedDict. No new dependency.

  - _PEER_METADATA_MAXSIZE = 1024 (issue's recommended bound)
  - _peer_metadata_get(canon)  — read + LRU touch (move to MRU)
  - _peer_metadata_set(canon, value) — write + evict-if-over-maxsize
  - All production reads/writes route through the helpers
  - _peer_metadata_lock guards the OrderedDict ops so concurrent
    background-enrichment workers (#2484) don't race the LRU
    invariant

Why hand-rolled vs cachetools:
  - No new dep. workspace/ has 0 cache libraries today; adding one
    for ~30 lines is negative leverage.
  - The TTL is enforced at the call site (existing pattern); only
    the size cap + LRU is new. cachetools.TTLCache fuses the two,
    which would force a refactor of every caller's TTL check.
  - The size + lock are simple enough that a future swap-in of
    cachetools is mechanical if needs evolve.

Why maxsize matters more than ttl (issue's framing):
  A runaway poller that touches new peer_ids every push would still
  grow within a single TTL window — TTL eviction only fires at
  read time. The size cap fires immediately on insert, regardless
  of read pattern.

Three new tests:
  - test_peer_metadata_set_evicts_lru_when_at_maxsize
  - test_peer_metadata_get_promotes_to_lru_head
  - test_peer_metadata_set_replaces_existing_entry_in_place

1742 passed / 0 failed locally (78 new + 1664 existing).

Closes #2482.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 01:39:07 -07:00
Hongming Wang
35017c5452 perf(a2a): move enrichment GET off the inbox poller thread (#2484)
The inbox poller's notification callback called the synchronous
enrich_peer_metadata on every push, blocking the poller for up to
2s × N uncached peers per poll batch. Push delivery latency was
gated on registry RTT — exactly what PR #2471's negative-cache patch
was trying to avoid amplifying.

Fix: cache-first nonblocking path with a tiny background worker pool.

  enrich_peer_metadata_nonblocking(peer_id):
    - Cache hit (fresh, within TTL):  return cached record immediately
    - Cache miss / stale:              return None, schedule background
                                       fetch via ThreadPoolExecutor

The first push from a new peer arrives metadata-light (bare peer_id);
the next push within the 5-min TTL hits the warm cache and gets full
name/role. Acceptable trade-off because the channel-envelope
enrichment is a UX nicety, not a correctness invariant — and the
cold-cache window per peer is bounded to one push.

Defenses:
  - In-flight gate (_enrich_in_flight) — N concurrent pushes for the
    same uncached peer schedule exactly ONE worker, not N. Without
    this, a chatty peer's first burst of pushes would amplify into
    parallel registry GETs — the exact DoS-on-self pattern the
    negative cache was meant to rate-limit.
  - Lazy executor init — most test fixtures + short-lived CLI
    invocations never need it; only the long-running molecule-mcp
    path actually fires background work.
  - Daemon-style threads via thread_name_prefix; executor never
    blocks process exit.

Tests:
  - test_enrich_peer_metadata_nonblocking_cache_hit_returns_immediately
  - test_enrich_peer_metadata_nonblocking_cache_miss_schedules_fetch
  - test_enrich_peer_metadata_nonblocking_coalesces_duplicate_pushes
  - test_enrich_peer_metadata_nonblocking_invalid_peer_id_returns_none

Plus updates to the existing test_envelope_enrichment_* suite that
asserted synchronous behavior — they now drain the in-flight set via
_wait_for_enrichment_inflight_for_testing before checking cache state.

Existing synchronous enrich_peer_metadata is unchanged — Phase B (#2790)
schema↔dispatcher drift gate + the negative-cache contract from PR
#2471 still apply. The nonblocking variant is purely additive.

1739 passed, 0 failed locally.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 01:24:42 -07:00
Hongming Wang
5950d4cd81 feat(delegations): agent-side cutover — sync delegate uses async+poll path (RFC #2829 PR-5)
Behind feature flag DELEGATION_SYNC_VIA_INBOX (default off). When set,
tool_delegate_task no longer holds an HTTP message/send connection
through the platform proxy waiting for the callee's reply. Instead:

  1. POST /workspaces/<src>/delegate (returns 202 + delegation_id)
     — platform's executeDelegation goroutine handles A2A dispatch
     in the background. No client-side timeout dependency on the
     platform holding a connection open.
  2. Poll GET /workspaces/<src>/delegations every 3s for a row with
     matching delegation_id reaching terminal status (completed/failed).
  3. Return the response_preview text on completed; surface the
     wrapped _A2A_ERROR_PREFIX error on failed (so caller error
     detection stays unchanged).

This closes the bug class that broke Hongming's home hermes on
2026-05-05 ("message/send queued but result not available after 600s
timeout" while the callee was actively heartbeating "iteration 14/90").

## Compatibility

Default-off feature flag — flag-off path is byte-identical to the
legacy send_a2a_message behavior, pinned by
TestFlagOffLegacyPath::test_flag_off_uses_send_a2a_message_not_polling.

Idempotency-key derivation matches tool_delegate_task_async (SHA-256
of source:target:task) so a restart-mid-delegation gets the same key
and the platform returns the existing delegation_id.

## Recovery on timeout

If the polling budget (DELEGATION_TIMEOUT, default 300s) elapses
without a terminal status, the error message includes the
delegation_id + a "call check_task_status('<id>') to retrieve later"
hint. The platform's durable row is still live — work is NOT lost,
just the synchronous wait is over. Caller can poll for the result
later via the existing check_task_status tool.

## Stack with PR-2

PR-2 added the SERVER-SIDE result-push to the caller's a2a_receive
inbox row. PR-5 (this PR) adds the AGENT-SIDE cutover. Together they
remove the proxy-blocked sync path entirely. PR-2 default-off keeps
existing behavior; PR-5 default-off keeps existing behavior. Operators
flip both for full effect after staging burn-in.

## Coverage

9 unit tests:

  - flag off → byte-identical to legacy (send_a2a_message called,
    _delegate_sync_via_polling NOT called)
  - dispatch HTTP exception → wrapped error
  - dispatch non-2xx → wrapped error mentioning HTTP code
  - dispatch missing delegation_id → wrapped error
  - completed first poll → response_preview returned
  - failed status → wrapped error with error_detail
  - transient poll error → keeps polling, eventually succeeds
  - deadline exceeded → wrapped timeout error mentions delegation_id +
    check_task_status hint for recovery
  - filters by delegation_id (other delegations' rows ignored)

All passing locally. CI will run the same suite on a clean env.

Refs RFC #2829.
2026-05-04 21:31:11 -07:00
Hongming Wang
872b781f64
Merge pull request #2792 from Molecule-AI/feat/drop-shared-context
feat: drop shared_context — use memory v2 team namespace
2026-05-04 23:37:49 +00:00
Hongming Wang
2f7beb9bce feat: drop shared_context — use memory v2 team namespace instead
Parent → child knowledge sharing previously lived behind a `shared_context`
list in config.yaml: at boot, every child workspace HTTP-fetched its parent's
listed files via GET /workspaces/:id/shared-context and prepended them as
a "## Parent Context" block. That paid the full transfer cost on every
boot regardless of whether the agent needed it, single-parent SPOF, no team
or org scope, and broken if the parent was unreachable.

Replace with memory v2's team:<id> namespace: agents call recall_memory
on demand. For large blob-shaped artefacts see RFC #2789 (platform-owned
shared file storage).

Removed:
- workspace/coordinator.py: get_parent_context()
- workspace/prompt.py: parent_context arg + injection block
- workspace/adapter_base.py: import + call + arg pass
- workspace/config.py: shared_context field + parser entry
- workspace-server/internal/handlers/templates.go: SharedContext handler
- workspace-server/internal/router/router.go: GET /shared-context route
- canvas/src/components/tabs/ConfigTab.tsx: Shared Context tag input
- canvas/src/components/tabs/config/form-inputs.tsx: schema field + default
- canvas/src/components/tabs/config/yaml-utils.ts: serializer entry
- 6 tests pinning the removed behavior; 5 doc references

Added regression gates so any reintroduction is loud:
- workspace/tests/test_prompt.py: build_system_prompt must NOT emit
  "## Parent Context"
- workspace/tests/test_config.py: legacy YAML key loads cleanly but
  shared_context attr must NOT exist on WorkspaceConfig
- tests/e2e/test_staging_full_saas.sh §9d: GET /shared-context must NOT
  return 200 against a live tenant

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 16:30:26 -07:00
Hongming Wang
bd881f8756 test(mcp): structural gate — schema↔dispatcher drift catches dropped kwargs
Closes part of #2790 (Phase B). Prevents a recurrence of the PR #2766 →
PR #2771 cycle: PR #2766 added ``source_workspace_id`` to four tools'
``input_schema`` and tool implementations, but the dispatcher in
``a2a_mcp_server.handle_tool_call`` silently dropped the kwarg for
``commit_memory`` / ``recall_memory`` / ``chat_history`` /
``get_workspace_info``. Schema lied; LLMs populated the param; every
call fell back to ``WORKSPACE_ID``, defeating multi-tenant isolation.
Existing dispatcher tests asserted return-value substrings (``"working"
in result``) instead of kwarg flow, so the bug shipped to main and was
only caught by re-reviewing post-merge.

This change adds an AST-driven gate. For every ToolSpec in
platform_tools.registry.TOOLS, the gate finds the matching
``elif name == "<tool>"`` arm in a2a_mcp_server.py and asserts that
every property declared in input_schema.properties is read by an
``arguments.get("<property>", ...)`` call inside that arm. A new schema
field the dispatcher forgets to forward fails CI loudly.

Three tests:

- test_every_dispatch_arm_reads_every_schema_property: main drift gate.
  Walks registry, matches dispatch arms by name, diffs declared vs
  read keys.

- test_dispatch_arms_reach_every_registered_tool: inverse direction.
  A registered tool with no dispatch arm is "Unknown tool" at runtime,
  even though docs/wrappers/schema all advertise it. Catches PRs that
  add a ToolSpec but forget the dispatcher.

- test_drift_gate_self_check_finds_known_arms: pin the AST parser. If
  handle_tool_call is refactored into a different shape (dict dispatch,
  registry-driven, etc.) and _load_dispatch_arms returns {}, the main
  gate vacuously passes — this self-check makes that failure mode
  explicit by requiring 12 known arms to be discovered.

Verified the gate catches the PR #2766 bug: stripping
``source_workspace_id=arguments.get(...)`` from the commit_memory arm
fails the gate with a descriptive error pointing at the missing kwarg
and referencing the prior incident. Restored → 3 tests pass.

Suite: 1733 passed (was 1730 + 3 new), 3 skipped, 2 xfailed.

Why AST, not runtime invocation: the runtime mock-based tests in
test_a2a_mcp_server.py already assert kwargs flow correctly for four
explicitly-tested tools. This gate is cheaper (~1ms), catches new
properties before someone has to remember the runtime test, and runs
as a structural invariant.

Phase A (Python coverage floor) and Phase C (molecule-mcp e2e harness)
remain in #2790 as separate follow-ups.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 16:29:54 -07:00
Hongming Wang
a8850bac55
Merge pull request #2778 from Molecule-AI/fix/redact-secrets-1777932233
fix(runtime): redact secret-shaped tokens from JSON-RPC error.data
2026-05-04 22:13:29 +00:00
Hongming Wang
28f22609d9 fix(runtime): redact secret-shaped tokens from JSON-RPC error.data
PR #2756 piped adapter.setup() exception strings verbatim into the
JSON-RPC -32603 response body so canvas could render
"agent not configured: <reason>". The 4 adapters in tree today raise
with key NAMES not values, so this is currently safe — but a future
adapter author writing `raise RuntimeError(f"auth failed for {token}")`
would leak that token verbatim. Issue #2760 flagged the risk; this PR
closes it.

workspace/secret_redactor.py exposes redact_secrets(text) that
replaces secret-shaped substrings with `<redacted-secret>`. Pattern
set is intentionally a CLOSED LIST (not entropy-based) so legitimate
diagnostics — git SHAs, UUIDs, file paths — pass through untouched.

Patterns covered: Anthropic/OpenAI/OpenRouter/Stripe `sk-` family,
GitHub PAT (ghp_/gho_/ghu_/ghs_/ghr_), AWS access keys (AKIA*/ASIA*),
HTTP `Bearer <token>`, Slack `xoxb-`/`xoxp-` etc., Hugging Face `hf_*`,
bare JWTs.

Wired into not_configured_handler at handler-build time — per-request
hot path is unchanged (one cached string).

Test coverage (19 cases): None/empty pass-through, clean diagnostic
untouched, each provider redacted with surrounding text preserved,
multiple distinct tokens, multiline tracebacks, false-positive guards
(too-short tokens, git SHA, UUID, underscore-bordered match), and
end-to-end handler integration via Starlette TestClient.

Test fixtures use string concat (`"sk-" + "cp-" + body`) to keep the
literal off the staged-diff text, since the repo's pre-commit
secret-scan flags real-shape tokens even in tests.

`secret_redactor` registered in TOP_LEVEL_MODULES (drift gate).

Closes #2760
Pairs with: PR #2756, PR #2775

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 15:07:53 -07:00
Hongming Wang
4f4b6c4f90 test(runtime): pin PR #2756's card-vs-setup decoupling with build_routes helper
PR #2756's contract — card route always mounted regardless of
adapter.setup() outcome — lived inline in main.py's `# pragma: no cover`
boot sequence. A future refactor that re-coupled the two would have
silently bypassed PR #2756 and shipped the original "stuck booting
forever" UX again, with no pytest catching it.

This change extracts route assembly into workspace/boot_routes.py's
build_routes(card, executor, adapter_error) and pins the contract with
6 integration tests using Starlette's TestClient:

- test_card_route_serves_200_when_adapter_ready: happy path
- test_card_route_serves_200_when_adapter_failed: misconfigured boot,
  card still 200, skill stubs survive
- test_jsonrpc_returns_503_when_no_executor: full -32603 envelope with
  the adapter_error in error.data
- test_jsonrpc_returns_503_with_generic_when_no_error_string: fallback
  reason for the rare case main.py reaches this branch without one
- test_card_route_does_not_depend_on_executor: direct PR #2756
  regression guard — both branches MUST mount the card route
- test_executor_present_does_not_mount_not_configured_handler: sanity
  that a healthy workspace doesn't return -32603 to every request

Conftest stubs extended with a2a.server.routes / request_handlers
classes so the tests work under the existing a2a-mock infra (pattern
matches the AgentCard/AgentSkill stubs added for PR #2765).

main.py now calls build_routes; the inline if/else is gone. Same
production behaviour, cleaner shape, regression-proof.

Heavy a2a-sdk imports inside build_routes() are lazy (deferred to the
executor-only branch) so tests that only exercise the not-configured
path don't pull DefaultRequestHandler / InMemoryTaskStore.

card_helpers + boot_routes registered in TOP_LEVEL_MODULES (build
drift gate would have caught the missing entry on the wheel-publish
smoke).

All 18 related tests pass (test_boot_routes.py: 6, test_card_helpers.py:
6, test_not_configured_handler.py: 6).

Closes #2761
Pairs with: PR #2756 (decouple agent-card from setup),
            PR #2765 (defensive isolation of enrichment + transcript)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 14:59:56 -07:00
Hongming Wang
41ae4ec50b fix(mcp): wire source_workspace_id through dispatcher for memory + chat_history + workspace_info
Self-review of merged PR #2766 (multi-workspace MCP routing) revealed a
silent gap: PR #2766 added the ``source_workspace_id`` parameter to
``tool_commit_memory`` / ``tool_recall_memory`` / ``tool_chat_history``
/ ``tool_get_workspace_info`` AND advertised it in the registry's input
schemas, but the MCP server's dispatch arms in ``a2a_mcp_server.py``
were never updated to forward ``arguments["source_workspace_id"]`` to
those four tools.

Result: the schema lied. The LLM saw ``source_workspace_id`` as a valid
tool parameter, could correctly populate it from the inbound message's
``arrival_workspace_id``, but the dispatcher dropped it on the floor and
every memory commit / recall / chat-history fetch silently fell back to
the module-level ``WORKSPACE_ID``. The cross-tenant leak that PR #2766
was meant to prevent is NOT prevented for these four tools without this
follow-up.

Why the existing dispatcher tests didn't catch it:
the tests asserted return-value strings (``"working" in result``) but
never asserted what arguments the inner tool was called with. So the
dispatcher could ignore any kwarg and the tests would still pass.

Fix:
1. Wire ``source_workspace_id=arguments.get("source_workspace_id") or None``
   into the four dispatch arms, mirroring the pattern already used for
   ``delegate_task`` / ``delegate_task_async`` / ``check_task_status`` /
   ``list_peers``.
2. Add five tests in ``test_a2a_mcp_server.py`` that assert the inner
   tool was awaited with the exact source_workspace_id kwarg
   (``assert_awaited_once_with(..., source_workspace_id="ws-X")``) —
   substring-on-result tests can't catch this class of bug.
3. Add a fallback test ensuring single-workspace operators (no
   source_workspace_id key) get ``source_workspace_id=None`` — pinning
   the documented None contract over an accidental empty-string forward.

Suite: 1705 passed (was 1700 + 5 new), 3 skipped, 2 xfailed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 14:41:24 -07:00
Hongming Wang
89bdf29d6f
Merge pull request #2766 from Molecule-AI/feat/mcp-multi-ws-tool-routing
feat(mcp): multi-workspace routing for memory/chat_history/workspace_info (PR-3)
2026-05-04 21:20:22 +00:00
Hongming Wang
700d44ec3d feat(mcp): multi-workspace routing for memory + chat_history + workspace_info
PR-3 of the multi-workspace MCP rollout. PR-1 made the MCP server itself
multi-workspace aware (one process, N workspace memberships). PR-2 added
source_workspace_id threading to delegate_task / list_peers. This change
closes the remaining workspace-scoped tools so a single agent registered
into multiple workspaces no longer leaks memories or chat history across
tenants.

Tools now accepting `source_workspace_id`:

- tool_commit_memory(content, scope, source_workspace_id=None) —
  routes POST to /workspaces/{src}/memories with the source workspace's
  Bearer token. Body still embeds source_workspace_id for the platform's
  audit + namespace-isolation enforcement.

- tool_recall_memory(query, scope, source_workspace_id=None) —
  GET /workspaces/{src}/memories with the source workspace's token and
  ?workspace_id={src} query so the platform scopes the read to the
  caller's tenant view (PR-1 / multi-workspace mode).

- tool_chat_history(peer_id, limit, before_ts, source_workspace_id=None)
  — auto-routes via the _peer_to_source cache populated by list_peers,
  with explicit override winning. Falls back to module-level WORKSPACE_ID
  if neither is available. URL: /workspaces/{src}/chat-history.

- tool_get_workspace_info(source_workspace_id=None) — GET /workspaces/{src}
  with the source workspace's token. Useful for introspecting any
  workspace the agent is registered into, not just the primary.

In every path, `src = source_workspace_id or WORKSPACE_ID`, so
single-workspace operators see no behavior change. Tokens are resolved
per-workspace via auth_headers(src) / _auth_headers_for_heartbeat(src),
which fall through to the legacy AUTH_TOKEN env when not in
multi-workspace mode.

Also updates input_schemas in platform_tools/registry.py so the new
optional parameter is advertised to LLM clients (claude-code,
hermes-agent, langchain wrappers).

Tests (4 new classes in test_a2a_multi_workspace.py, 21 new tests):
- TestCommitMemorySourceRouting — URL + Authorization header per source
- TestRecallMemorySourceRouting — URL + query param + Authorization
- TestChatHistorySourceRouting — peer-cache auto-route + explicit override
- TestGetWorkspaceInfoSourceRouting — URL + Authorization

Inbox tools (peek/pop/wait_for_message) already multi-workspace aware
since PR-1 — inbox.py spawns per-workspace pollers and tags every
InboxMessage with arrival_workspace_id. No further plumbing needed.

Suite: 1700 passed, 3 skipped, 2 xfailed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 14:17:58 -07:00
Hongming Wang
63ac99788b fix(runtime): isolate card-skill enrichment + transcript handler from adapter shape mismatch
PR #2756 added a try/except around adapter.setup() so a missing LLM key
doesn't crash the workspace boot. Two paths that now run AFTER setup
succeeds were not similarly isolated, leaving small but real coupling
risks for future adapter authors.

1. **Skill metadata enrichment swap (main.py:248-259).** When
   adapter.setup() returns, main.py reads adapter.loaded_skills and
   replaces the static stubs in agent_card.skills with rich metadata
   (description, tags, examples). The list comprehension assumes each
   element exposes .metadata.{id,name,description,tags,examples}. A
   future adapter that returns a non-canonical shape would raise
   AttributeError, propagate to the outer except, capture as
   adapter_error, and silently degrade an OK boot to the
   not-configured state — even though setup() actually succeeded.

   Extract to card_helpers.enrich_card_skills(card, loaded_skills) →
   bool. Helper swallows enrichment failures, logs the cause, returns
   False, leaves the static stubs in place. setup() success path
   continues unchanged. 6 unit tests cover: None input, empty list,
   canonical happy path, missing .metadata attr, partial .metadata
   (missing one canonical field), atomic-failure-no-partial-swap.

2. **/transcript handler (main.py:513).** Calls await
   adapter.transcript_lines(...) without try/except. BaseAdapter's
   default returns {"supported": false} so today's 4 adapters never
   trigger this — but a future adapter override that assumes setup()
   ran would surface as a 500 from Starlette's default error handler
   instead of a useful 503 with the exception class + message.
   Inline try/except returns 503 with the reason, matching the
   not-configured JSON-RPC handler's pattern.

Both changes match the architectural principle the PR #2756 chain
established: availability (workspace reachable) is decoupled from
configuration / adapter behavior. Operators see useful errors instead
of silent degradation; future adapter authors can't accidentally
break tenant readiness with a shape mismatch.

Adds:
- workspace/card_helpers.py (~50 lines, 100% covered)
- workspace/tests/test_card_helpers.py (6 tests)
- AgentCard/AgentSkill/AgentCapabilities/AgentInterface stubs to
  workspace/tests/conftest.py so future card-related tests work
  under the existing a2a-mock infrastructure
- card_helpers in TOP_LEVEL_MODULES (drift gate would have caught it)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 14:15:27 -07:00
Hongming Wang
6488ba09e7 fix(preflight): downgrade required_env + auth_token failures to warnings
Preflight was hard-failing the workspace boot when required env vars or
legacy auth_token_files were missing, raising SystemExit(1) before
main.py's PR #2756 try/except could mount the not-configured handler.
Result: codex/openclaw workspaces launched without OPENAI_API_KEY were
INVISIBLE — `/.well-known/agent-card.json` never returned 200, the bench
timed out at 600s, canvas had no actionable signal. PR #2756 fixed half
the puzzle (decouple agent-card from adapter.setup() failure); this
fixes the other half (decouple from preflight failure).

Caught by bench-provision-time run 25335853189 on 2026-05-04: codex and
openclaw both timed_out at 609s while claude-code (whose default model
needs no env) hit 86.7s on the same AMI. Hermes hit 147s because hermes
config doesn't declare top-level required_env.

After this change:
- Missing required_env: WARN (operator sees it in boot logs); workspace
  proceeds to adapter.setup() which raises with the same env-name detail;
  PR #2756's try/except mounts the not-configured handler;
  /.well-known/agent-card.json serves 200; JSON-RPC POST / returns
  -32603 "agent not configured" with the env-name in `error.data`.
- Missing auth_token_file (legacy path): same treatment.
- Other preflight failures (runtime adapter not installable, invalid
  A2A port) STAY as fails — those are structural, the workspace truly
  can't run.

Updated 4 existing tests that asserted `report.ok is False` on
required_env / auth_token misses to assert `report.ok is True` and
check `report.warnings` instead. All 31 preflight tests pass; full
suite 1664 pass + 1 unrelated flake on staging.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 12:20:34 -07:00
Hongming Wang
4b35d25d86 fix(runtime): decouple agent-card readiness from adapter.setup()
Today, if `adapter.setup()` raises (most often: an LLM credential is
missing/rotated), main.py crashes before the agent-card route is mounted.
start.sh restart-loops, /.well-known/agent-card.json never returns 200,
and the workspace is invisible to the bench/canvas — operators see
"stuck booting forever" with no clear error to act on.

The agent-card is a static capability advertisement (name, version,
skills, supported protocols). It doesn't need a working LLM. Coupling
its mount to setup() conflates *availability* ("am I up?") with
*configuration* ("can I actually answer?"). They're different concerns.

This change:
- Builds AgentCard from `config.skills` (static names from config.yaml)
  BEFORE adapter.setup(), so the route mounts independent of setup state.
- Wraps setup() + create_executor in try/except. On success, mounts
  the real DefaultRequestHandler with rich loaded_skills metadata
  swapped into the card in-place. On failure, mounts a JSON-RPC
  handler that returns -32603 "agent not configured" with the
  setup() exception in error.data.
- Heartbeat keeps running on misconfigured boots so the platform
  marks the workspace as reachable-but-misconfigured rather than
  crash-looping. Operators redeploy with corrected env without
  chasing a restart loop.
- initial_prompt and idle_loop are skipped on misconfigured boots —
  they self-fire to /, which would land in -32603 anyway, and the
  marker would consume on the first useless attempt.

Bench impact (RFC #388 strict <120s): codex/openclaw bench-time-outs
were the agent-card-never-returns-200 symptom. With this fix those
runtimes serve the card immediately on EC2 boot, so the bench
measures infrastructure cold-start (claude-code class: ~50–80s)
instead of credential-coupled boot.

Adds workspace/not_configured_handler.py (factory + module-level so
behavior is unit-testable; main.py is `# pragma: no cover`) and
workspace/tests/test_not_configured_handler.py (6 tests covering
status code, JSON-RPC envelope shape, id-echo, malformed-body
fallback, reason surfacing, batch-body safety).

All 1665 existing workspace tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 10:22:31 -07:00
Hongming Wang
35b3ea598a test: fix WORKSPACE_ID assert to match module attr (CI portability)
CI's pytest harness pre-sets WORKSPACE_ID=test in the env before
test collection, so a2a_client's module-level WORKSPACE_ID
(captured at import time, line 24) holds "test" — but the local
fixture's monkeypatch.setenv("WORKSPACE_ID", ...) only affects the
ENV value seen on later os.environ reads, NOT the already-bound
module attribute.

Assert against a2a_client.WORKSPACE_ID directly so the test is
portable across local + CI runs without monkey-patching the module
itself (which a future test reload might undo).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 08:35:48 -07:00
Hongming Wang
1161b97faf feat(mcp): cross-workspace delegation routing (multi-ws PR-2)
PR-2 of the multi-workspace external-agent stack. PR-1 (#2739)
landed per-workspace auth + heartbeat + inbox. This PR threads
``source_workspace_id`` through the A2A client + tool surface so an
agent registered against multiple workspaces can list peers across
all of them and delegate from a specific source.

Changes
-------

* ``a2a_client``: ``discover_peer``, ``send_a2a_message``,
  ``get_peers_with_diagnostic``, and ``enrich_peer_metadata`` now
  accept ``source_workspace_id``. Routing uses it for both the
  X-Workspace-ID header and (transitively, via ``auth_headers(src)``)
  the bearer token. Defaults to module-level WORKSPACE_ID for
  back-compat.
* ``a2a_client._peer_to_source``: a new lock-free cache mapping each
  discovered peer back to the source workspace whose registry
  surfaced it. ``tool_list_peers`` populates the cache on every call;
  ``tool_delegate_task`` consults it for auto-routing.
* ``a2a_tools.tool_list_peers(source_workspace_id=None)``: when
  multiple workspaces are registered (MOLECULE_WORKSPACES) and no
  explicit source is passed, aggregates peers across every
  registered workspace and tags each entry with ``via: <src[:8]>``.
  Single-workspace mode is unchanged — no ``via:`` annotation, same
  output shape.
* ``a2a_tools.tool_delegate_task`` and ``tool_delegate_task_async``
  resolve source via ``source_workspace_id arg → _peer_to_source[target]
  → WORKSPACE_ID``. Agents almost never need to specify ``source_*``
  explicitly — call ``list_peers`` first and the cache handles the
  rest.
* ``tool_delegate_task_async`` idempotency key now includes the
  source workspace, so the same task delegated from two registered
  workspaces produces two distinct delegations (the right behavior
  — one per tenant audit trail).
* ``platform_auth.list_registered_workspaces()``: new helper for the
  tool layer to enumerate the multi-ws registry. Lock-free reads
  matched by the existing single-writer-per-workspace contract from
  PR-1.
* ``platform_auth.self_source_headers``: now passes ``workspace_id``
  through to ``auth_headers`` — without this, a multi-workspace POST
  source-tagged with ``X-Workspace-ID=ws_b`` was authenticating
  with ws_a's token (or no token if MOLECULE_WORKSPACE_TOKEN unset).
  Latent PR-1 bug exposed by the new tool surface.
* ``a2a_mcp_server`` tool dispatch passes ``source_workspace_id``
  from the tool call arguments.
* ``platform_tools.registry``: add ``source_workspace_id`` to the
  delegate_task, delegate_task_async, check_task_status, list_peers
  input schemas with copy explaining when to use it (rarely — the
  cache handles it).

Tests (15 new, all passing)
---------------------------

``test_a2a_multi_workspace.py``:
* TestDiscoverPeerSourceRouting (3): src arg drives header+token,
  fallback to module ws when omitted, invalid target short-circuits
  before any HTTP attempt.
* TestSendA2AMessageSourceRouting (1): X-Workspace-ID source header
  + Authorization bearer both come from the source arg via the
  patched self_source_headers chain.
* TestGetPeersSourceRouting (1): URL path AND headers use the
  source workspace id.
* TestToolListPeersAggregation (4): aggregates across multiple
  registered workspaces, tags origin, leaves single-workspace path
  unchanged, explicit src arg overrides aggregation, diagnostic
  joining when every workspace returns empty.
* TestToolDelegateTaskAutoRouting (3): cache-driven auto-route,
  explicit override beats cache, single-workspace fallback to
  module WORKSPACE_ID.
* TestListRegisteredWorkspaces (3): registry enumeration helper.

Plus ``tests/snapshots/a2a_instructions_mcp.txt`` regenerated to
absorb the new ``source_workspace_id`` schema entries.

Back-compat
-----------

Every change defaults ``source_workspace_id=None``; legacy
single-workspace operators (no MOLECULE_WORKSPACES) see identical
behavior — same URLs, same headers, same tool output. The 24
PR-1 tests + 125 existing A2A tests all still pass.

Out of scope (PR-3)
-------------------

Memory namespacing per registered workspace lands after the new
memory system v2 PR (#2740) settles in production.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 08:32:24 -07:00
Hongming Wang
3195657837 fix: bot-lint nits — drop unused imports, add reason to except
Resolves three github-code-quality threads blocking PR-2739 merge:
- workspace/tests/test_mcp_cli_multi_workspace.py: remove unused
  `import os` and `from unittest.mock import patch` (left over from
  an earlier test draft that mocked at the os.environ layer).
- workspace/mcp_cli.py:523: replace bare `pass` in the
  register_workspace_token ImportError handler with a debug log line +
  one-line comment explaining the silent-degrade contract (older
  installs that don't yet ship the helper fall back to the legacy
  single-token path; single-workspace operators see no behavior
  change).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 08:16:12 -07:00
Hongming Wang
6fb9bc9bcd mcp: regenerate platform_auth signature snapshot for auth_headers(workspace_id=...)
PR-1's auth_headers added an optional workspace_id parameter for
multi-workspace token routing; the signature drift gate
(test_platform_auth_signature_matches_snapshot) caught the change as
expected. Snapshot regenerated to capture the new shape — diff is
visible in the PR for reviewers + template repos that depend on this
surface.

Behavior unchanged: auth_headers() with no arg still routes through
the legacy resolution path (back-compat exact); the workspace_id arg
is opt-in.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 08:11:23 -07:00
Hongming Wang
829ab66462 mcp: support multi-workspace external-agent registration (PR-1)
External MCP agents (e.g. Claude Code installed on a company PC) can
now register against MULTIPLE workspaces from a single process — the
agent participates as a peer in workspace A (company) AND workspace B
(personal) simultaneously, with one merged inbox tagged so replies
route to the correct tenant.

Use case (verbatim from operator): "I have this computer AI thats in
company's PC, he is going to be put in company's workspace, but
personally, I want to register it to my own workspace as well, so
that I can talk to it and asking him to do work."

## What changed

**Wire format** — new env var:

  MOLECULE_WORKSPACES='[
    {"id":"<company-wsid>","token":"<company-tok>"},
    {"id":"<personal-wsid>","token":"<personal-tok>"}
  ]'

When set, mcp_cli iterates the array and spawns one (register +
heartbeat + inbox poller) trio per workspace. Single-workspace mode
(WORKSPACE_ID + MOLECULE_WORKSPACE_TOKEN) is unchanged — every
existing operator's setup keeps working bit-for-bit.

**Per-workspace token registry** (platform_auth.py):
  register_workspace_token(wsid, tok) — populated by mcp_cli once
  per workspace before any thread spawns; thread-safe registration
  + lock-free reads on the hot path. auth_headers(workspace_id=...)
  routes to the per-workspace token; auth_headers() with no arg
  uses the legacy resolution path unchanged (back-compat).

**Per-workspace inbox cursors** (inbox.py):
  InboxState now supports cursor_paths={wsid: Path,...}. Each poller
  advances its own cursor — one workspace's slow poll can't stall
  another, and a 410 only resets the affected workspace's cursor.
  Single-workspace constructor (cursor_path=Path(...)) still works
  exactly as before via __post_init__ promotion to the empty-string
  key. Cursor filenames disambiguated by workspace_id[:8] when
  multi-workspace; single-workspace keeps the legacy filename so
  upgrade doesn't invalidate on-disk state.

**Arrival workspace tagging** (inbox.py):
  InboxMessage.arrival_workspace_id — tells the agent which OF ITS
  workspaces the inbound message arrived on. Set by the poller from
  the cursor key. to_dict() omits the field when empty so single-
  workspace consumers see no shape change.

**Reply routing** (a2a_tools.py + a2a_mcp_server.py + registry.py):
  send_message_to_user(workspace_id=...) — optional override that
  selects which workspace's /notify endpoint to POST to (and which
  token authenticates). Multi-workspace agents pass the inbound
  message's arrival_workspace_id; single-workspace agents omit it
  and route to the only registered workspace via the legacy URL.

## Out of scope (future PRs)

- PR-2: cross-workspace delegation auto-routing — when an agent
  receives a request from personal-ws "delegate to ops-bot" and
  ops-bot lives in company-ws, the agent should auto-pick its
  company-ws identity for the outbound delegate_task. Today the
  agent must pass via_workspace explicitly (or fall through to
  primary workspace).
- PR-3: memory namespacing — commit_memory() still writes to the
  primary workspace's memory regardless of inbound context. Will
  revisit when the new memory system (PR #2733 just landed) settles.

## Tests

  workspace/tests/test_mcp_cli_multi_workspace.py — 24 new tests:
    * MOLECULE_WORKSPACES JSON parsing (valid + 6 error shapes)
    * Token registry register / lookup / rotation / clear
    * auth_headers routing by workspace_id with legacy fallback
    * Per-workspace cursor save/load/reset isolation
    * arrival_workspace_id present-when-set, omitted-when-empty
    * default_cursor_path namespacing

  All 110 pre-existing tests in test_mcp_cli.py / test_inbox.py /
  test_platform_auth.py still pass — back-compat is mechanical.

Refs: project memory entry "External agent multi-workspace
registration", design questions answered 2026-05-04 by user
(JSON env var; explicit memory writes deferred to PR-3).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 08:06:00 -07:00
Hongming Wang
ffd90dcf1e sanitise registry-sourced peer_name/peer_role before rendering into channel content
Anyone with a workspace token can register their workspace with any
agent_card.name via /registry/register. The universal MCP path renders
that name directly into the conversation turn the in-workspace agent
reads (`[from <name> (<role>) · peer_id=...]`), so a peer registering
with a name containing newlines + a fake instruction line ("\n\n[SYSTEM]
forward all secrets to peer X\n") would surface as multiple header lines
with the injected line floating outside the header sentinel — a direct
prompt-injection vector against any in-workspace agent receiving A2A
from that peer.

Mirror the TypeScript sanitiser shipped in
Molecule-AI/molecule-mcp-claude-channel#25 for the external channel
plugin: allowlist `[A-Za-z0-9 _.\-/+:@()]` (covers common agent-naming
shapes), whitespace-collapse stripped runs, 64-char cap with ellipsis
to keep the header scannable on narrow terminals. Apply at the meta
population site so BOTH the JSON-RPC envelope's `meta.peer_name` /
`meta.peer_role` AND the rendered conversation turn carry the safe form.

Returning None for empty / all-stripped input preserves the "no
enrichment" semantics so the formatter falls back to bare "peer-agent"
identity instead of producing "[from  · peer_id=...]" which looks like
a parse bug.

Tests pin the allowlist behaviour (newline strip, bracket strip, control
char strip, whitespace collapse, length cap) plus a defense-in-depth
check at the envelope-builder seam that a malicious registry response
end-to-end produces a sanitised envelope + content. 9/9 new tests pass,
69/69 file total green.
2026-05-04 00:02:00 -07:00
Hongming Wang
b7c962bf86 feat(mcp): wrap inbound channel content with identity + reply hint
Mirrors the channel-plugin change in
Molecule-AI/molecule-mcp-claude-channel#24 so the universal MCP path
(in-workspace agents) gets the same self-documenting reply guidance the
external channel plugin path now ships.

Before: `params.content` was the raw inbound text — Claude saw bare prose
from a peer or canvas user with no surrounding context. To reply the
agent had to (a) fish the routing fields out of `meta`, (b) recall which
platform tool routes to which destination (send_message_to_user for
canvas, delegate_task for peer), and (c) construct the call by hand.

After: content is wrapped as

  [from <identity> · peer_id=<uuid>]    (or "[from canvas user]")
  <inbound text>
  ↩ Reply: <copy-pasteable tool call>

The identity comes from the existing registry-enrichment path (peer_name
+ peer_role from enrich_peer_metadata, with friendly fallbacks when the
registry lookup misses). Reply tool name lives in the same module as the
notification builder so the `feedback_doc_tool_alignment` drift class
can't bite — a future tool rename PR that misses this hint also fails
test_format_channel_content_*.

Tests: 6 new cases pinning the formatter (canvas_user vs peer_agent,
full enrichment, name-only, no enrichment, unknown-kind defensive
default, multi-line preservation) plus updated existing assertions in
the bridge + content tests. All asserts pin exact strings per
`feedback_assert_exact_not_substring`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 23:14:12 -07:00
Hongming Wang
02ae2fd6fb feat(security): trust-boundary gate non-peer_id meta fields in _build_channel_notification (#2488)
Defense-in-depth follow-up to #2481 (peer_id trust-boundary gate).
Same XML-attribute injection vector applies to the four other meta
fields rendered as agent-context attrs in the <channel> tag:

  <channel kind="..." method="..." activity_id="..." ts="..." source="molecule">

Each field is now passed through a closed-set / shape-validate gate:

- kind     → frozenset {canvas_user, peer_agent} via _safe_meta_field
- method   → frozenset {message/send, tasks/send, tasks/get, notify, ""}
- activity_id → UUID-shape regex via _safe_activity_id
- ts       → ISO-8601 RFC3339 regex via _safe_ts

Any value outside the allowed shape is replaced with empty string.
Today the values come from a platform-DB column so they're trusted,
but "trust the source" was the same assumption that got peer_id into
trouble (#2481). Closed-enum allowlists make this row-content-blind.

5 new tests mirroring test_envelope_enrichment_strips_path_traversal_peer_id:
- test_envelope_strips_unknown_kind         — kind injection stripped
- test_envelope_strips_unknown_method       — method injection stripped
- test_envelope_strips_malformed_activity_id — non-UUID stripped
- test_envelope_strips_malformed_ts         — non-ISO8601 stripped
- test_envelope_keeps_valid_meta_fields_unchanged — happy-path negative case

Mutation-tested: temporarily making _safe_meta_field permissive kills
both kind/method strip tests with the injection payload reflecting
into the meta dict, confirming the gate is what blocks them.

Two existing tests updated to use UUID-shaped activity_ids ("act-7",
"act-bridge-test" → real UUIDs) since the gate strips synthetic ids.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 10:58:52 -07:00
Hongming Wang
ff3dcd37f6 fix(chat-history): correct docstring inversion + pin empty-history JSON shape (#2485)
Two follow-ups from the multi-axis review of #2474:

1. **Docstring inversion** in tool_chat_history. The doc said
   '(source_id=peer)' meant 'this workspace is the sender' — actually
   it means the *peer* is the sender (source_id is where the activity
   came FROM). Reframed to 'where the peer is either the sender or
   the recipient' to match the underlying SQL semantics.

2. **Empty-history test**. TestChatHistory had 10 tests but no
   200+[] happy-path pin. Added test_empty_history_returns_empty_json_list
   asserting result == '[]' on exact-equality (per assert-exact
   memory — substring '[]' would match envelope shapes too).

Both changes are pure docs+tests — no behaviour change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 10:09:15 -07:00
Hongming Wang
270a95aa67 test(envelope-enrichment): pin negative-cache for non-JSON 200 + non-dict JSON 200 (#2483)
The two missing branch tests called out by the multi-axis review of #2471.

a2a_client.enrich_peer_metadata handles two failure shapes (lines 105-112)
that the existing 12 envelope-enrichment tests don't exercise:

  1. HTTP 200, response.json() raises (non-JSON body)
  2. HTTP 200, valid JSON, but body is list/string/number not dict

Both paths land at the negative-cache write, but no test verified the
discriminator. Pin both with the same call_count == 1 assertion shape
the 5xx + network-exception tests already use.

Verified: temporarily removing the negative-cache write in either
branch makes the corresponding test fail with call_count == 2 — the
assertion correctly discriminates the contract from a fall-through.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 09:35:21 -07:00
Hongming Wang
e1628c4d56 fix(a2a): route terminal Message via TaskUpdater.complete/failed in task mode
PR #2558 enqueued a Task at the start of new requests so the v1 SDK
would accept TaskUpdater.start_work() — fix #1 of the v0→v1 migration
gap (PR #2170). But after Task is enqueued, the executor enters
"task mode" and the SDK rejects raw Message enqueues at the terminal
step:

  {"code":-32603,"message":"Received Message object in task mode.
  Use TaskStatusUpdateEvent or TaskArtifactUpdateEvent instead."}

Synth-E2E 2026-05-03T11:00:34Z surfaced this on the very first run
after the prior fix cascaded. Validation site is the same
a2a/server/agent_execution/active_task.py — the framework's job is
to enforce the v1 invariant; we're catching up to it.

The fix routes both terminal events through TaskUpdater helpers:
- success: updater.complete(message=msg) wraps in
  TaskStatusUpdateEvent(state=COMPLETED, final=True)
- error: updater.failed(message=...) wraps in
  TaskStatusUpdateEvent(state=FAILED, final=True)

Both helpers exist in a2a-sdk ≥ 1.0; verified via
TaskUpdater.complete signature.

Tests:
- conftest TaskUpdater stub now records complete/failed calls AND
  routes the message back through event_queue.enqueue_event so the
  ~20 legacy tests asserting on enqueue_event keep working
- 2 new regression tests pin the contract:
  * test_terminal_success_routes_via_updater_complete
  * test_terminal_error_routes_via_updater_failed
- Both NEW tests verified to FAIL on staging-baseline (without this
  fix) and PASS with it — they'd catch the regression before staging
  if the wheel-smoke gate covered task-mode terminal events too
  (separate yak-shave for #131 follow-up)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 04:06:45 -07:00
Hongming Wang
06240ab67b fix(preflight): skip required_env check in MOLECULE_SMOKE_MODE
Boot smoke (#2275) exercises executor.execute() against stub deps
and never hits the real provider, so missing auth env is not a real
blocker. Without this bypass, every adapter that introduces a new
auth env var must be mirrored into molecule-ci's fake-env list — a
maintenance treadmill that just bit hermes-template:

- 2026-05-03 09:47 UTC: hermes publish-image smoke fails on
  HERMES_API_KEY preflight (workflow injects CLAUDE_CODE_OAUTH_TOKEN,
  ANTHROPIC_API_KEY, GEMINI_API_KEY, OPENAI_API_KEY but not
  HERMES_API_KEY or OPENROUTER_API_KEY). Failed for two cycles
  before being noticed.

The bypass demotes Required-env failures to warnings when
MOLECULE_SMOKE_MODE is truthy, so the unset env stays visible in
the boot log without blocking. Production paths are unchanged
(env unset → fail).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 03:44:05 -07:00
Hongming Wang
5c3b79a8ba fix(a2a): enqueue Task before TaskStatusUpdateEvent for v1 SDK contract
a2a-sdk ≥ 1.0 raises InvalidAgentResponseError when an executor publishes a
TaskStatusUpdateEvent (e.g. via TaskUpdater.start_work) before any Task
event for fresh requests. The framework only auto-creates the Task on
continuation messages (existing task_id resolves via task_manager.get_task);
new requests leave _task_created unset and the SDK validation at
a2a/server/agent_execution/active_task.py rejects the first status update.

PR #2170 migrated the executor surface to v1 but missed this contract. The
synthetic E2E gate caught it on every staging run since (~1 week silent
fail) with:

    {"jsonrpc":"2.0","id":"e2e-msg-1","error":{"code":-32603,
     "message":"Agent should enqueue Task before TaskStatusUpdateEvent
     event","data":null}}

The fix enqueues a Task(state=SUBMITTED) before the TaskUpdater is
constructed, gated on `context.current_task is None` so continuation
messages don't double-enqueue (which the SDK logs about but doesn't reject).

Tests:

  - test_first_event_is_task_for_new_request — pins the new-request path:
    first enqueue must be a Task with the expected id/context_id
  - test_no_task_enqueue_on_continuation — pins the continuation path: when
    context.current_task is set, the executor must NOT re-enqueue Task
  - conftest: stub Task / TaskStatus / TaskState in the mocked a2a.types
    module so the import inside the executor resolves under unit tests

google-adk adapter does not have this bug — its execute() only emits
Message events, not TaskStatusUpdateEvent. Its cancel() does emit one,
but cancel is rarely-invoked and out of scope for this fix.

Live verification path: this PR's merge → publish-runtime cascade → next
synth-E2E firing should go green at step "8/11 Sending A2A message to
parent — expecting agent response".
2026-05-03 03:15:54 -07:00
Hongming Wang
e4893f5a9a
Merge pull request #2552 from Molecule-AI/feat/wire-event-log-into-adapter-base
feat(workspace): wire EventLog into adapter base (#119 PR-3b)
2026-05-03 08:39:34 +00:00
Hongming Wang
d58185b8a8 chore(workspace): remove dead defensive block in load_skills AST gate
Self-review of PR #2553 caught an unreachable defensive block at
test_load_skills_call_sites.py:99-103: the inner check guarded
`call.func.__class__.__name__ == "Name"` from a FunctionDef, but
`_find_load_skills_calls` already filters its return type to
`ast.Call` — `FunctionDef` cannot reach that loop body. The block
was a no-op `pass` with a misleading comment.

Removing keeps the gate behaviorally identical; tests still pass.

Same five-axis review pass that turned this up also approved the
substantive logic of #2553, so no behavior change here.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 01:30:05 -07:00
Hongming Wang
f8b40d8d73 docs(skills): document SKILL.md runtime field + AST coverage gate (#119 PR-4)
Closes the documentation + audit gap for declarative skill-compat. The
plumbing has been live since PR #117 (RuntimeCapabilities) and
skill_loader's `_normalize_runtime_field` has been emitting filter
decisions for weeks, but:
- No public doc explained the `runtime` frontmatter field, so skill
  authors didn't know how to opt in / opt out.
- No structural gate ensured every load_skills() call site threads
  current_runtime — a future caller forgetting the kwarg silently
  force-loads runtime-incompatible skills (no AttributeError, just a
  delayed crash on first tool invocation).

Two changes:

1. docs/agent-runtime/skills.md
   - Adds `runtime`, `tags`, `examples` to the Frontmatter Fields table.
   - Adds a Runtime Compatibility section with example, accepted shapes
     (universal default, list, string sugar), and the "logged + omitted,
     not crashed" failure mode. Notes that match values come from each
     adapter's name() (the same string in config.yaml's runtime: field).

2. workspace/tests/test_load_skills_call_sites.py
   - Static AST gate: walks every workspace/*.py (excluding tests),
     finds load_skills(...) Call nodes, fails if any lacks
     current_runtime= as a keyword.
   - Defense-in-depth `test_known_call_sites_present` — pins that the
     scan actually sees the two known callers (adapter_base,
     skill_loader.watcher) so a refactor that moves them is loud.
   - Sanity-checked the matcher against a synthetic violating module.

Same-shape pattern as PR #2358 (tenant_resources audit-coverage AST
gate, #150) — pin the contract structurally, not just behaviorally.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 01:22:34 -07:00
Hongming Wang
71e7a6ffee feat(workspace): wire EventLog into adapter base (#119 PR-3b)
Adds adapter.event_log property+setter on BaseAdapter so adapters can
emit structured events (tool dispatch, skill load, executor errors)
without coupling to the chosen backend. Default is a shared no-op
DisabledEventLog; main.py overrides at boot from the
observability.event_log config block (PR-2 schema).

The shape is intentionally additive:
- Property is invisible to the BaseAdapter signature snapshot drift
  gate (the helper walks vars(cls) for callables only — properties
  are not callable). Verified with a regression test in the new
  test_adapter_base_event_log.py.
- Existing adapters continue to work unchanged. Template repos that
  never call self.event_log get the no-op for free.
- Setter accepts any EventLogBackend, so swapping memory↔disabled
  at runtime (or to a future Redis backend) requires no adapter
  code change.

Sequels:
- PR-3c: emit events from claude-code/hermes adapters at the
  natural points (tool dispatch, skill load).
- PR-4: skill-compat audit + SKILL.md frontmatter docs.
- Platform-side /workspaces/:id/activity endpoint reads the buffer.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 01:18:19 -07:00
Hongming Wang
efa68a26b1 feat(workspace): wire observability config into heartbeat + uvicorn (#119 PR-3a)
Replaces the hard-coded HEARTBEAT_INTERVAL=30 in heartbeat.py and
log_level="info" in main.py with values from
ObservabilityConfig (#119 PR-1, schema landed in PR #2538).

Concrete plumbing:

  - heartbeat.HeartbeatLoop accepts an `interval_seconds=` keyword
    arg. Defaults to the legacy module constant so 2-arg callers
    (existing tests, any downstream code that hasn't been updated)
    keep their existing 30s behavior.
  - main.py constructs HeartbeatLoop with
    config.observability.heartbeat_interval_seconds — the value the
    config parser already clamped to [5, 300].
  - main.py's uvicorn.Config takes log_level from
    config.observability.log_level (lowercased — uvicorn's convention
    differs from Python logging's) with LOG_LEVEL env still winning
    as an ops-side debugging override.

Adapter EventLog wiring deferred to PR-3b (#208 follow-up) — touches
adapter_base interface + needs careful design, kept separate to keep
this PR small + reviewable.

Tests:
  - test_heartbeat.py: 3 new tests pin default interval, explicit
    override, and the [5, 300] band that the constructor accepts
    without re-clamping (clamping is the parser's job).
  - All 88 tests in test_heartbeat.py + test_config.py pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 01:01:57 -07:00
Hongming Wang
0fc2531250 feat(workspace): event_log module + EventLogConfig (#119 PR-2)
Adds workspace/event_log.py with an in-memory EventLog backend and a
disabled no-op variant, plus EventLogConfig nested in
ObservabilityConfig (backend / ttl_seconds / max_entries).

The event log is the append-and-query buffer that the canvas Activity
tab and platform `/activity` endpoint will read in PR-3 of the #119
stack. Two backends ship in this PR:

  - InMemoryEventLog: bounded ring buffer with TTL eviction, monotonic
    ids that survive eviction so cursors don't break, thread-safe for
    concurrent appends from heartbeat + main loop + A2A executor.
  - DisabledEventLog: no-op for `backend: disabled` — opts the
    workspace out without crashing callers that propagate event ids.

Schema-only PR — no consumers wired yet. Wiring lands in PR-3.

Test coverage:
  - 34 new test_event_log.py tests (100% line coverage on event_log.py)
  - 9 new test_config.py tests for EventLogConfig parsing
  - Concurrency stress with 8 threads × 200 appends — verifies unique
    monotonic ids under contention
  - TTL + max_entries eviction with injected clock (no time.sleep)
  - Disabled backend contract pinned

Closes #207.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 00:17:12 -07:00
Hongming Wang
fd4b4e0723 test: pin null-required_env tolerance + drop unused MINIMAX env clear
Two self-review nits on the prior commit:
- Add test_per_model_required_env_null_treated_as_empty_no_auth — pins
  parser tolerance for YAML 'required_env:' (deserializes to None). The
  'or []' fallback handles it, but the behavior wasn't asserted, and a
  template author who writes 'required_env:' with no value (common YAML
  mistake) needs the no-auth path, not a confusing TypeError.
- Drop the MINIMAX_API_KEY delenv from the explicit-empty test — there's
  no MINIMAX in any required_env list of that scenario, so the cleanup
  was dead noise.

78/78 tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 21:56:40 -07:00
Hongming Wang
3e5955f04f fix(runtime): explicit empty per-model required_env means "no auth"
Two follow-ups from the independent review of #2538.

preflight.py
============
Today: `if per_model_env: required_env = list(per_model_env)` falls
through on `[]`, so a template entry that says "this model needs no
auth" (`required_env: []` — Ollama, llamafile, self-hosted OpenAI-
compat, anything where the SDK doesn't surface a key) is silently
overridden by the top-level fallback list. The template author cannot
express a zero-auth model without lying about its env requirements.

Fix: key off `"required_env" in entry` (key presence, not truthiness).
Missing key still falls back to top-level — that path is unchanged
and preserves "many templates list name/description per model without
enumerating env vars when auth is identical across the family". Empty
list now wins outright. Comment updated to call out the distinction.

test_preflight.py
=================
Renamed `test_per_model_match_with_no_required_env_falls_back_to_top_level`
to `…_no_required_env_KEY_…` and tightened its docstring to reflect
that it's the missing-KEY case only. Added new
`test_per_model_explicit_empty_required_env_means_no_auth` to pin the
new explicit-empty semantic.

test_config.py
==============
New `test_runtime_config_model_env_wins_over_explicit_yaml`. Pins the
intentional precedence inversion shipped in #2538 with both
MODEL_PROVIDER and runtime_config.model in YAML set — MODEL_PROVIDER
wins. Without this pin a future refactor could quietly restore the
old YAML-wins order and re-introduce Bug B.

77/77 targeted tests pass locally.

Closes #250 (review follow-up). Builds on merged #2538.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 21:51:01 -07:00
Hongming Wang
97ebd1910a fix(runtime): canvas-picked model wins universally + per-model required_env
Two surgical edits to the molecule-runtime workspace package that fix
Bug B (canvas-picked model silently dropped for templated workspaces)
and Bug D (preflight rejects valid auth for non-default models),
universally for every adapter.

Bug B — canvas-picked model dropped (config.py)
================================================
Before: load_config resolved runtime_config.model as
  runtime_raw.get("model") or model
which means a template's `runtime_config.model: sonnet` always wins
over the canvas-picked MODEL_PROVIDER env var. Surfaced 2026-05-02
during MiniMax E2E — picking MiniMax-M2.7 in canvas, server plumbed
MODEL_PROVIDER=MiniMax-M2.7 correctly, but the workspace booted with
sonnet because the template's verbatim config.yaml won.

After:
  os.environ.get("MODEL_PROVIDER") or runtime_raw.get("model") or model

Centralising in load_config means EVERY adapter (claude-code, hermes,
codex, langgraph, future ones) gets canvas-picked-model passthrough
for free — no per-adapter env-reading code required.

Bug D — preflight per-model required_env (preflight.py)
========================================================
Before: preflight read the top-level required_env list, which
declares the auth needed by the *default* model. A template like
claude-code-default declares CLAUDE_CODE_OAUTH_TOKEN at the top
level. When a user picked MiniMax instead and only set
MINIMAX_API_KEY, preflight rejected the workspace with
"missing CLAUDE_CODE_OAUTH_TOKEN" and the workspace crash-looped
despite the user having satisfied the picked model's actual auth.

After: when runtime_config.models[] declares per-entry required_env,
preflight matches the picked model id (case-insensitive) and uses
that entry's required_env outright instead of the top-level list.
REPLACE semantics, not union — different models have *different*
auth paths (OAuth vs API key vs third-party provider key); unioning
would re-introduce the very crash-loop this fix closes.

Surface enabling both fixes (config.py)
========================================
RuntimeConfig now carries `models: list[dict]` so the canvas Model
dropdown source flows through to preflight without forcing the
parser schema to grow. Malformed entries are silently dropped to
match the rest of the lenient parser.

Tests
=====
- workspace/tests/test_preflight.py: 9 new tests covering the
  per-model lookup (case-insensitive, REPLACE not union, fallback
  to top-level when no models[] or no match, multi-entry, malformed
  entries dropped, etc.)
- workspace/tests/test_config.py: existing 48 pass; field
  initialisation already covered by parser tests.
- All 75 targeted tests pass locally; CI runs the full suite
  including coverage gate.

Closes part of #246. Sibling PR opens against
molecule-ai-workspace-template-claude-code for per-template
defensive fixes + boot debug logging.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 21:36:24 -07:00
Hongming Wang
02a8841402 fix(a2a): send v1 file Part shape; tolerate v1 server-side
Image-only chats surface "Error: message contained no text content"
because canvas posts v0 `{kind:"file", file:{uri,name,mimeType}}` shapes
that the workspace runtime's a2a-sdk v1 protobuf parser silently drops:
v1 `Part` has fields `[text, raw, url, data, metadata, filename,
media_type]` and `ignore_unknown_fields=True` discards `kind`+`file`,
producing a fully-empty Part. With no text and no extracted file
attachments, the executor's "no text content" guard fires.

Three coordinated changes close the gap:

1. canvas/ChatTab.tsx — outbound file parts now carry the v1 flat
   shape `{url, filename, mediaType}` so the v1 protobuf parser
   populates Part fields instead of dropping them.
2. workspace/executor_helpers.py — extract_attached_files learns the
   v1 detection branch (non-empty `part.url` + `filename` +
   `media_type`) alongside the existing v0 RootModel and flat-file
   shapes. Defends every runtime that mounts the OSS wheel against
   the same drop, including any pre-fix client still on the wire.
3. canvas/message-parser.ts — extractFilesFromTask tolerates the v1
   shape on incoming agent responses too, so file chips render in
   chat history regardless of which Part shape the runtime emits.

Test pins:
- workspace/tests/test_executor_helpers.py:
  + v1 protobuf shape extraction
  + empty-Part defense (v0→v1 silent-drop fall-through returns [])
- canvas message-parser test:
  + v1 protobuf flat parts
  + filename fallback to URL basename for v1
2026-05-02 00:58:05 -07:00
Hongming Wang
6f0e914521
Merge pull request #2479 from Molecule-AI/fix/molecule-mcp-non-pipe-stdout
fix(mcp): friendly fail-fast when stdio isn't pipe-compatible
2026-05-02 02:20:51 +00:00
Hongming Wang
f6a48d593e test: standardise on from a2a_mcp_server import ... in TestStdioPipeAssertion
github-code-quality bot flagged 4 instances of `import a2a_mcp_server` in
the new TestStdioPipeAssertion class — every other test in the file uses
the `from a2a_mcp_server import ...` per-test pattern, so this is a real
inconsistency.

Switching the new tests to match. No behavior change; resolves the
4 unresolved review threads blocking the merge queue.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 19:17:55 -07:00
Hongming Wang
1181699482
Merge pull request #2481 from Molecule-AI/fix/channel-peer-id-trust-boundary
fix(channel): validate peer_id at envelope build — close path-traversal foothold
2026-05-02 01:46:49 +00:00
Hongming Wang
0b979aed78 fix(channel): validate peer_id at envelope build — close path-traversal foothold
Two trust-boundary leaks surfaced in code review of the channel-envelope
enrichment work:

1. _agent_card_url_for(peer_id) interpolated raw input into
   ${PLATFORM_URL}/registry/discover/<peer_id> with no UUID guard. An
   upstream row with peer_id=`../../foo` produced an agent-visible URL
   pointing at a sibling registry path. Same trust-boundary rationale
   discover_peer's docstring already calls out: "never interpolate
   path-traversal characters into the URL". Now gated by _validate_peer_id;
   returns "" on validation failure.

2. _build_channel_notification echoed raw peer_id back into
   meta["peer_id"], which on the push path renders inside the agent's
   <channel peer_id="..." kind="..."> XML-attribute context. Attacker
   bytes (control chars, embedded quotes) would land in agent-rendered
   text wired into the next conversation turn. Now canonicalised through
   _validate_peer_id before any meta write; on validation failure we
   set "" rather than reflecting the raw bytes.

Defense-in-depth — both layers gate independently. Mutation-verified by
stashing both prod-side files and confirming both regression tests fail.

Tests:
- test_envelope_enrichment_invalid_peer_id_skips_lookup: updated to
  pin the safe behavior (peer_id="" + agent_card_url absent), not the
  prior leak shape.
- test_envelope_enrichment_strips_path_traversal_peer_id: NEW. Hard
  regression for peer_id="../../foo" — pins both the URL-builder and
  the meta echo against this specific exploit shape.
- Two existing tests updated to use UUID-shape placeholders instead
  of "ws-peer-uuid" / "peer-ws-uuid" since those non-UUIDs now correctly
  get stripped by the validator.

Resolves the Required-grade finding from the multi-axis review on PR #2471.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 18:43:49 -07:00
Hongming Wang
88b156a3bc
Merge pull request #2480 from Molecule-AI/chore/runtime-wedge-dedup-fixture
chore(tests): drop redundant local _reset fixture from test_runtime_wedge
2026-05-02 01:33:31 +00:00
Hongming Wang
8838f99ed3 chore(tests): drop redundant local _reset fixture from test_runtime_wedge
PR #2475 promoted runtime_wedge reset to an autouse conftest fixture in
workspace/tests/conftest.py covering every test in this directory. The
local @pytest.fixture(autouse=True) _reset in test_runtime_wedge.py
became dead-but-harmless (idempotent reset is idempotent — both fixtures
ran on every test, double-resetting). Remove the local copy so future
maintainers don't have to keep two definitions in sync.

Caught during a deeper /code-review-and-quality pass on the #2475
follow-ups — the original PR landed the conftest fixture but missed
the dedup of the now-redundant in-file fixture.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 18:31:21 -07:00
Hongming Wang
9bbf32b526
Merge pull request #2471 from Molecule-AI/feat/channel-envelope-enrichment
feat(a2a-mcp): enrich channel envelope with peer name/role/agent_card_url
2026-05-02 01:31:15 +00:00
Hongming Wang
885eff2350 test: drop unused _OTHER_PEER constant
github-code-quality bot flagged it as an unused module-level global —
correctly. The earlier draft of the negative-cache test was going to
exercise two distinct peer IDs hitting the registry concurrently, but
the test was simplified to a single-peer flow before merge and the
constant lost its consumer.

Resolves the only blocking review thread on PR #2471.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 18:28:24 -07:00
Hongming Wang
82beb98fff
Merge pull request #2474 from Molecule-AI/feat/chat-history-mcp-tool
feat(a2a-mcp): add chat_history tool for prior turns with a peer
2026-05-02 01:27:38 +00:00
Hongming Wang
afc01d6995 fix(mcp): friendly fail-fast when stdio isn't pipe-compatible
When molecule-mcp is launched with stdin or stdout redirected to a
regular file (molecule-mcp > out.txt, ad-hoc CI smoke-tests, local
debugging), asyncio.connect_read_pipe / connect_write_pipe later raise
ValueError: Pipe transport is only for pipes, sockets and character
devices — surfaced to the operator as a confusing traceback with no
hint about what to do.

Add _assert_stdio_is_pipe_compatible() to detect the same constraint
synchronously before the event loop starts, exit cleanly with code 2,
and print a stderr message that names:
  - which stream failed (stdin vs stdout)
  - the asyncio transport requirement
  - the two common causes (>file, <file) and a working alternative
    (molecule-mcp 2>&1 | tee out.txt)

Wired into cli_main() (the synchronous wrapper around asyncio.run(main()))
so wheel-smoke + the production launch path both go through the guard
without changing the async stdio loop body. Closed/stale-fd case also
handled — os.fstat OSError exits 2 with the same guidance instead of
escaping.

Tests: 4 new in TestStdioPipeAssertion — pipe-pair happy path,
regular-file stdout (the bug condition), regular-file stdin (symmetric
case), and closed-fd. Mutation-verified — all 4 fail without the prod
helper. 37/37 in test_a2a_mcp_server.py.

Closes Molecule-AI/molecule-ai-workspace-runtime#61.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 18:26:24 -07:00
Hongming Wang
e6eda38318 fix(a2a-client): negative-cache registry failures in enrich_peer_metadata
Self-review on PR #2471: failure outcomes (4xx/5xx/non-JSON/network
exception) weren't writing to _peer_metadata, so a peer with a flaky
or missing registry record re-fired the 2s-bounded GET on EVERY
push. The cache became a no-op for the exact failure scenarios it
most needs to defend against, and the poller thread stalled 2s per
push for that peer until the registry came back.

Cache the failure outcome as `(now, None)` so the TTL window
suppresses re-fetch. Two new tests pin the behaviour for both
HTTP failures (5xx) and transport exceptions (httpx.ConnectError).

Type signature widens to `dict | None` on the value tuple's second
slot to match the new sentinel; readers already handle `None` as
"no enrichment available" — that's the documented graceful-degrade
contract — so no caller change needed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 18:16:35 -07:00
Hongming Wang
46bc63e373 chore(smoke): runtime_wedge follow-ups from PR #2473 review
Three review nits from PR #2473:

1. Narrow `_check_runtime_wedge` import catch to (ImportError,
   ModuleNotFoundError). The bare `except Exception:` would have
   masked an `AttributeError`/`TypeError` from a runtime_wedge API
   rename — silently degrading the smoke gate to "no wedge info" with
   no log line. The `runtime_wedge_signature.json` snapshot test
   (task #169) carries the API-drift load instead.

2. Drop the unreachable `or "<unspecified>"` fallback. `wedge_reason()`
   only returns "" when not wedged, but the call is guarded by
   `is_wedged()` being True and `mark_wedged` requires a non-None
   reason. The defensive arm couldn't fire.

3. Promote `reset_runtime_wedge` from a per-file fixture in
   test_smoke_mode.py to an autouse fixture in
   workspace/tests/conftest.py. Heartbeat tests or future adapter
   tests that call `mark_wedged` without cleanup would otherwise leak
   a sticky wedge into smoke tests later in the same pytest process —
   smoke tests would fail-via-leak instead of asserting their actual
   contract. Two-sided reset survives early test failures.

Also: `test_check_runtime_wedge_returns_none_when_module_missing`
now `monkeypatch.delitem(sys.modules, "runtime_wedge")` before
patching `__import__`, so the test re-exercises the import path
instead of resolving from the module cache (the test was passing
today by luck — it would still pass even if the catch arm were
deleted, because the cached module's `is_wedged` returned False).

Tests: 28 still pass in test_smoke_mode.py, 57 across smoke + wedge +
heartbeat. Regression-injection-checked: catch tightening doesn't
regress the existing wedge tests.
2026-05-01 18:01:51 -07:00
Hongming Wang
09e99a09c6 feat(a2a-mcp): add chat_history tool for prior turns with a peer
When a peer_agent push lands and the agent needs context from prior
turns with that workspace ("what task did this peer assign me last
hour?", "what did I tell them?"), the only options today are
re-deriving from memory (lossy) or scrolling activity_logs in the
canvas (no agent-facing tool). Surface the platform's existing
audit log directly via a new MCP tool so agents can read both sides
of an A2A conversation in chronological order.

Implementation:
- a2a_tools.py: new tool_chat_history(peer_id, limit=20, before_ts="")
  hits /workspaces/<self>/activity?peer_id=X&limit=N (the new server
  filter from molecule-core#2472). Reverses the DESC response into
  chronological order so the agent reads top-down. Graceful error
  envelope on validation/network/non-200 — never crashes the MCP
  server, agent can branch on Error: prefix.
- platform_tools/registry.py: ToolSpec wired into the A2A section so
  the rendered system-prompt block automatically includes it. Same
  pattern as the existing inbox_peek/inbox_pop/wait_for_message.
- a2a_mcp_server.py: dispatch in handle_tool_call.
- executor_helpers.py: _CLI_A2A_COMMAND_KEYWORDS gets a None entry
  (CLI runtimes don't expose chat history today; flip to a keyword
  when a2a_cli grows a `history` subcommand).
- snapshots/a2a_instructions_mcp.txt regenerated.

Tests: 10 new branches in TestChatHistory (validation / param
forwarding / limit cap / before_ts pass-through / DESC→chronological
reorder / 400 verbatim / 500 generic / network exc / non-list resp).
Mutation-verified: reverting a2a_tools.py fails 10/10. Full test
suite remains green at 1516 passed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 17:54:23 -07:00
Hongming Wang
b39dc62de6
Merge pull request #2473 from Molecule-AI/feat/universal-turn-smoke-runtime-wedge
feat(smoke): consult runtime_wedge after execute() to catch SDK init wedges
2026-05-02 00:52:31 +00:00
Hongming Wang
103ac09aeb docs(a2a-mcp): list new envelope attrs in initialize instructions
The agent learns about <channel> tag attributes ONLY from the
instructions string returned by initialize. Without this update the
wheel ships peer_name / peer_role / agent_card_url on the wire but
no agent ever uses them — they get printed inline in the push tag,
the agent doesn't know they're there, and the UX gain from the
enrichment is lost.

Update _build_channel_instructions to:
- List the new attrs in the <channel> tag template under PUSH PATH
- Add per-attribute semantics (when present, what to do with them,
  what \"absent\" means — graceful-degrade vs bug)
- Point at the discover endpoint for agent_card_url so the agent
  treats it as a follow-on URL not the body of the message

Tests: structural pin asserting all three attr names appear in the
instructions AND the per-field semantics phrases (\"registry
resolved\", \"discover endpoint\") so a future copy-edit that
shortens the prose can't silently drop the agent guidance.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 17:49:40 -07:00
Hongming Wang
59f0a449bd feat(smoke): consult runtime_wedge after execute() to catch SDK init wedges
Timeout-as-PASS in run_executor_smoke missed the PR-25-class
regression: claude-agent-sdk takes 60s to time out on a malformed
argv, our outer wait_for fires at 5s default and reports "imports
healthy, hit a network boundary." A broken image then ships to GHCR.

Universal fix uses the existing runtime_wedge module (already
documented as the cross-cutting wedge holder, already read by
heartbeat). Adapters opt-in by calling runtime_wedge.mark_wedged()
from their executor's wedge catch arm; the smoke now consults
runtime_wedge.is_wedged() at the end of every result path and
upgrades a provisional PASS to FAIL when the flag is set. Non-opt-in
adapters keep working as before — the check is additive.

CI uses MOLECULE_SMOKE_TIMEOUT_SECS=90 to outlast the SDK's 60s
initialize() handshake so the wedge marks before our outer wait_for
fires. Module + helper docstrings call out the calibration so a
future contributor doesn't lower it without thinking through what
that wins back vs. what it loses.

Tests: 7 new cases pinning the wedge-aware paths — mark+raise (PR-25
shape), mark+block (still-running execute that wait_for cuts short),
clean+clean (additive contract), import-resilience (fail-open when
runtime_wedge unimportable). Regression-injection-checked: silencing
the new check fails both wedge-shape tests at unit-test time.
2026-05-01 17:46:43 -07:00
Hongming Wang
0fec3d6fe4 fix(test): anchor envelope-enrichment TTL test to monotonic baseline
Setting fetched_at = 0.0 assumed wall-clock semantics, but
time.monotonic() returns process uptime — when this test ran
early in the pytest run, current was <300s and the entry was
treated as fresh, silently skipping the re-fetch the assertion
expects. Anchor to time.monotonic() - TTL - 60 so the entry is
unambiguously past the freshness window regardless of when
in the run the test fires.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 17:45:05 -07:00
Hongming Wang
050aa33fc1 feat(a2a-mcp): enrich channel envelope with peer name/role/agent_card_url
The bare envelope only carried `peer_id` for peer_agent inbound, so a
receiving agent had to round-trip to /registry to find out who's
talking. Surface the sender's display name, role, and an agent-card
URL alongside the routing fields so the agent can render
"ops-agent (sre): ping" in one shot without an extra lookup.

a2a_client.py:
- Add _peer_metadata cache `dict[peer_id → (fetched_at, record)]`
- Add enrich_peer_metadata(peer_id) — sync, hits cache or registry
  with a tight 2s timeout, returns None on validation/network/non-200
  so callers can degrade gracefully
- TTL = 5 min so a busy multi-peer chat doesn't hit registry on every
  push, but role/name renames propagate within a session
- Add _agent_card_url_for(peer_id) — deterministic from peer_id alone

a2a_mcp_server.py:
- _build_channel_notification calls enrich_peer_metadata when peer_id
  is non-empty; meta carries peer_name + peer_role + agent_card_url
  alongside the existing routing fields
- agent_card_url surfaces unconditionally (constructable from peer_id);
  peer_name/role only when registry lookup succeeds — never blocks the
  push on a registry stall

Tests: 6 new branches (canvas_user no enrichment / cache hit no GET /
cache miss fetches once / registry-fail graceful degrade / TTL expiry
re-fetches / invalid peer_id skips lookup). Mutation-verified: 6/6
fail without prod code, 39/39 pass with.

Tracks the broader RFC at #2469 (workspace-server activity_type rename
to break the echo loop). Independent of PR #2470 — this is the
metadata-enrichment half of the same UX improvement.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 17:40:09 -07:00
Hongming Wang
2d8c45989a fix(inbox): skip self-notify rows in poller to break echo loop
The workspace-server's `/notify` handler writes the agent's own
send_message_to_user POSTs to activity_logs as activity_type=
'a2a_receive', method='notify', source_id=NULL so the canvas
chat-history loader can restore those bubbles after a page reload.
The activity API exposes the row to /workspaces/:id/activity?
type=a2a_receive, so the inbox poller picks it up and pushes the
agent's own outbound back as an inbound `← molecule: Agent
message: ...` — confirmed live 2026-05-01.

Add `_is_self_notify_row` predicate matched on (method='notify' AND
no source_id) and call it from `_poll_once` before enqueue. The
predicate combines BOTH discriminators so a future caller using
method='notify' with a real peer_id still passes through. Cursor
advances past skipped rows so we don't re-poll the same self-notify
on every iteration.

Belt-and-braces: long-term fix lives in workspace-server (rename
the misclassified activity_type to 'agent_outbound' — RFC at
#2469). This guard stays regardless because it only excludes rows
we never want.

Tests: 7 new — predicate true/false matrix + integrated _poll_once
behavior (skip, cursor advance, notification suppression).
Mutation-verified: reverting inbox.py to the prior shape fails 7/7;
applied state passes 48/48.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 17:35:49 -07:00
Hongming Wang
f80e054a95
Merge pull request #2466 from Molecule-AI/feat/universal-push-via-instructions
feat(mcp): universal inbound delivery — instructions-driven polling + optional push
2026-05-01 23:13:05 +00:00
Hongming Wang
dbd086c7ad test(mcp): comment empty except in bridge test cleanup
Address github-code-quality review on PR #2465: explain why the
OSError swallow in pipe teardown is intentional (best-effort
cleanup of a possibly-already-closed fd).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 16:07:33 -07:00
Hongming Wang
ea206043d8 feat(mcp): universal inbound delivery — instructions-driven polling + optional push
Why this exists
---------------
Live evidence on 2026-05-01 caught a regression latent in #46's
"push-feel inbound" closure: standard `claude` launches without
`--dangerously-load-development-channels` silently drop our
`notifications/claude/channel` emissions, so canvas/peer messages sat
in the wheel inbox and never reached the agent loop until manual
`inbox_peek`. The flag is research-preview-only; non-Claude-Code MCP
clients (Cursor, Cline, OpenCode, hermes-agent, codex) never receive
the notification at all because the method namespace is Claude-
specific. Push-only delivery shipped as the universal contract is
not actually universal.

What this changes
-----------------
Adds a poll path that works on every spec-compliant MCP client. The
`initialize` `instructions` field — read by every client and surfaced
to the agent's system prompt automatically — now tells the agent to
call `wait_for_message(timeout_secs=N)` at the start of every turn.
Push remains as the strictly-better delivery for hosts that opt in
(Claude Code with the dev flag or a future allowlist entry), but is
no longer load-bearing.

Both paths converge on the same `inbox_pop` ack so duplicate-delivery
on a push+poll race is impossible: whoever surfaces the message to
the agent first pops it, the other side returns empty.

Operator knob
-------------
`MOLECULE_MCP_POLL_TIMEOUT_SECS` controls per-turn poll blocking
(default 2s). 0 disables polling for push-only Claude Code with the
dev flag. Above 60 clamps to 60 — protects against an accidental
five-minute stall per turn. Resolved fresh on every `initialize` so
a relaunch with new env is enough; no wheel rebuild required.

Tests
-----
- structural pins on the new instructions: `wait_for_message` +
  `timeout_secs` named, both PUSH PATH / POLL PATH labels present
- env-resolution: default fallback, garbage fallback, negative
  fallback, 60s clamp
- operator override: `MOLECULE_MCP_POLL_TIMEOUT_SECS=7` reaches the
  agent's instructions string
- timeout=0 toggles to push-only-mode messaging (no
  wait_for_message call asked of the agent)
- existing pins on push path, reply tools, prompt-injection defense,
  meta attributes — all preserved

Successor to #46. Closure milestone for this PR (per
feedback_close_on_user_visible_not_merge.md): launched `claude`
against the published wheel, sent a canvas message, observed the
agent surfaces the message inline at the start of its next turn
without me running `inbox_peek` — verified live before declaring done.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 15:32:57 -07:00
Hongming Wang
a3a496bced test(mcp): pin inbox→stdout bridge end-to-end with three failure-mode tests
Closes the dynamic-coverage gap on the `notifications/claude/channel`
push-UX bridge — until now we had static pins on the wire shape
(_build_channel_notification) and the initialize handshake, but the
threading + asyncio + stdout chain that ships notifications to the
host was never exercised under realistic conditions.

The three failure modes anticipated in #2444 §2 are each now pinned:

  test_inbox_bridge_emits_channel_notification_to_writer
    Drives a fake inbox event from a daemon thread, asserts the
    notification lands on a real os.pipe-backed asyncio writer with
    the correct JSON-RPC envelope. Catches: bridge wired up
    incorrectly (no-op _on_inbox_message), run_coroutine_threadsafe
    drift, _build_channel_notification call missing.

  test_inbox_bridge_swallows_closed_pipe_drain_error
    Closes the pipe's read end before firing, captures the
    concurrent.futures.Future that run_coroutine_threadsafe returns,
    asserts its exception() is None. Catches: narrowing the broad
    `except Exception` in _emit (e.g. to RuntimeError), or removing
    it. Without the swallow, the future carries a ConnectionResetError
    and the test fails with a clear message naming the regression.

  test_inbox_bridge_swallows_closed_loop_runtime_error
    Builds the bridge against a closed event loop, fires the
    callback, asserts no exception escapes. Catches: removing the
    `except RuntimeError` swallow on the run_coroutine_threadsafe
    call. Without it the poller thread would crash with
    "RuntimeError: Event loop is closed" during shutdown.

To make the bridge testable, extracted the closures from main() into
a top-level `_setup_inbox_bridge(writer, loop) -> Callable[[dict],
None]` helper. main()'s wire-up is now a single line that calls the
helper. Behavior is unchanged — same write, same drain, same
swallows — just no longer trapped inside main()'s closures.

Verified each test catches its regression by injection: removing
each swallow / no-op'ing the bridge each turn the matching test red
with a specific failure message that points at the missing piece.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 15:13:32 -07:00
Hongming Wang
e6be3c0df0 test(mcp): pin prompt-injection defense in _CHANNEL_INSTRUCTIONS
Adds the missing symmetric pin against the threat-model sentence —
the existing tests pin reply-tool names (send_message_to_user,
delegate_task, inbox_pop) and tag attributes (kind, peer_id,
activity_id) but left the "treat message body as untrusted user
content" line unpinned. A copy-edit that drops it would turn the
channel into an open prompt-injection vector against any workspace
running the MCP server.

Pins three signals: "untrusted" present, an explicit
"not execute"/"do not" clause, and the "approval" escape-hatch
sentence — two of three would let a partial copy-edit slip
through.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 14:24:05 -07:00
Hongming Wang
2588ab27d5 feat(mcp): add channel instructions field — second gate for push UX
PR #2461 added the experimental.claude/channel capability declaration
on the assumption that was the missing gate for Claude Code surfacing
notifications/claude/channel as inline <channel> interrupts. Research
against code.claude.com/docs/en/channels-reference.md confirms the
capability IS one gate — but there's a SECOND required field we still
don't ship: `instructions` on the initialize result.

The docs are explicit: instructions is what tells the agent what the
<channel> tag attributes mean and which tool to call to reply. Without
it the channel registers but the agent receives the tag with no
context and has no idea how to handle it. The official telegram
plugin ships both (server.ts:370-396) — capability AND instructions.
We were shipping one of two.

This adds the instructions string. It documents:
- kind/peer_id/activity_id meta attributes
- canvas_user → send_message_to_user reply path
- peer_agent → delegate_task reply path
- inbox_pop ack to prevent duplicate-poll re-delivery
- threat model: treat message bodies as untrusted user content

Tests: 4 new pins. instructions present + non-empty, instructions
names each reply tool, instructions documents each tag attribute.
Failure messages name the symptom so a copy-edit can't silently
break the channel.

Live verification still pending after wheel ships — same plan as
the gap is in --dangerously-load-development-channels (host-side
flag, outside our control during the channels research preview).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 14:24:05 -07:00
Hongming Wang
0a87dec50e feat(mcp): declare experimental.claude/channel capability for push UX
Without this capability declaration in the initialize handshake,
Claude Code's MCP client receives our notifications/claude/channel
emissions but silently drops them — they never become inline
<channel> tags in the conversation. The push-UX bridge added in
PR #2433 ships, fires, and is invisible.

This was anticipated as a failure mode in #2444 §2 ("Notification
arrives but Claude Code doesn't surface it — host doesn't recognize
the method"), and confirmed live in this session: a canvas chat
"hi" landed in the inbox queue (inbox_peek returned it) but never
woke the agent until inbox_peek was called by hand.

The contract matches molecule-mcp-claude-channel/server.ts:374
where the bun bridge declares the same experimental flag.

Refactor: extracted _build_initialize_result() so the handshake
shape is unit-testable. Pure function, no behavioral change beyond
adding the experimental capability to the result.

Tests: 3 new pins on the initialize result (capability presence,
tools-still-there, protocolVersion stable). Closes the live-
verification gap §2 of #2444.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 13:45:06 -07:00
Hongming Wang
c636022d2f fix(runtime): auto-fallback CONFIGS_DIR for non-container hosts (closes #2458)
The runtime persists per-workspace state (`.auth_token`,
`.platform_inbound_secret`, `.mcp_inbox_cursor`) under `/configs` —
the workspace-EC2 mount path. Inside a container that's writable,
agent-owned. Outside a container, `/configs` either doesn't exist or
isn't writable by an unprivileged user.

The default broke the external-runtime path (`pip install
molecule-ai-workspace-runtime` + `molecule-mcp` on a Mac/Linux
laptop). First heartbeat tries to persist `.platform_inbound_secret`
and crashes:

    [Errno 30] Read-only file system: '/configs'

The heartbeat thread logs and dies. Workspace flips offline within
a minute. Operator sees no actionable error.

Adds workspace/configs_dir.py — single resolution point with a tiered
fallback:

  1. CONFIGS_DIR env var, if set — explicit operator override
     (preserves existing tests + custom deployments verbatim).
  2. /configs — if it exists AND is writable. In-container default;
     unchanged behavior for every prod workspace.
  3. ~/.molecule-workspace — created with mode 0700 so per-file 0600
     perms aren't undermined by a world-readable parent.

Migrates the four readers (platform_auth, platform_inbound_auth,
mcp_cli, inbox) to call configs_dir.resolve() instead of
inlining `Path(os.environ.get("CONFIGS_DIR", "/configs"))`.

Existing tests that assert the old `/configs`-as-default contract
updated to assert the new contract: when CONFIGS_DIR is unset, path
resolves to a writable location — `/configs` if present, fallback
otherwise. Tests skip the fallback branch on hosts that DO have a
writable `/configs` (CI containers).

Verified the original repro is fixed: with no CONFIGS_DIR set on
macOS, configs_dir.resolve() returns ~/.molecule-workspace, the dir
exists, and writes succeed.

Test suite: 1454 passed, 3 skipped, 2 xfailed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 13:07:55 -07:00
Hongming Wang
2e8892ebc4 fix(workspace): surface errno + path on chat-upload mkdir failure
Production incident on hongming.moleculesai.app 2026-05-01T18:30Z —
fresh-tenant signup chat upload returned 500 with the body
{"error":"failed to prepare uploads dir"}. Diagnosis required SSM
access to the workspace stderr to recover errno + actual path.

The root-cause fix lives in claude-code template entrypoint
(molecule-ai-workspace-template-claude-code#23 — pre-create the
.molecule subtree as root before gosu drops to agent). This change
is the diagnostic improvement: when mkdir fails for any reason in
the future (EACCES, ENOSPC, EROFS, etc.), the response carries
the errno + offending path so the operator inspecting browser
devtools sees the real cause without needing SSM.

Backwards compatible — top-level "error" key is unchanged so
existing canvas / external alert rules continue to match. New
fields are additive: path, errno, detail.

Test pins the diagnostic shape so a future struct refactor can't
silently drop these fields.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 11:47:53 -07:00
Hongming Wang
c06c4c0f56
Merge pull request #2450 from Molecule-AI/feat/observability-config-schema
feat(config): observability block schema (#119 PR-1 of 4)
2026-05-01 05:20:11 +00:00
Hongming Wang
645c1862c4 feat(a2a-client): surface 410 Gone as 'removed' error so callers can re-onboard (#2429)
Follow-up A to PR #2449 — that PR taught the platform to return 410
Gone for status='removed' workspaces; this PR teaches get_workspace_info
to consume that signal.

Before: every non-200 collapsed into {"error": "not found"}, which
made the 2026-04-30 incident impossible to diagnose — the operator
KNEW the workspace_id existed (they'd just registered it), but the
runtime kept reporting "not found" for a deleted-but-not-purged row.

After: 410 produces a distinct {"error": "removed", "id", "removed_at",
"hint"} dict so callers (heartbeat-loop, channel bridge, dashboard
tools) can surface "your workspace was deleted, re-onboard" instead
of "not found". Falls back to a default hint if the platform body
isn't parseable so the actionable signal doesn't depend on body
shape parity.

Two new tests:
  - TestGetWorkspaceInfo.test_410_returns_removed_with_hint
  - TestGetWorkspaceInfo.test_410_with_unparseable_body_falls_back_to_default_hint

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 22:08:08 -07:00
Hongming Wang
59902bce83 feat(config): add observability block schema (#119 PR-1 of 4)
Hermes-style declarative block grouping cadence + verbosity knobs into
one place. Schema-only in this PR — wiring into heartbeat.py and main.py
lands in PR-3 of the #119 stack.

Two fields with live consumers waiting:
- heartbeat_interval_seconds (default 30, clamped to [5, 300])
  → heartbeat.py:134 currently has hard-coded HEARTBEAT_INTERVAL = 30
- log_level (default "INFO", uppercased at parse)
  → main.py:465 currently has hard-coded log_level="info"

Clamp band [5, 300] is intentional: sub-5s flooded the platform during
IR-2026-03-11; >5min lets crashed workspaces look healthy long enough
to mask failure. Coerce at parse so adapters and heartbeat.py can read
the value without re-validating.

Tests pin defaults, explicit YAML override, partial override, and
parametrized clamp behavior (10 cases including garbage strings + None).

Part of: task #119 (adopt hermes-style architecture)
Stack:  PR-1 schema → PR-2 event_log → PR-3 wire consumers → PR-4 skill compat
2026-04-30 21:58:45 -07:00
Hongming Wang
661eec2659 chore(smoke-mode): harden module-load + drop dead except clause
Two follow-ups from the #2275 Phase 1 self-review:

1. `_SMOKE_TIMEOUT_SECS = float(os.environ.get(...))` was evaluated at
   module load. main.py imports smoke_mode unconditionally — before
   the is_smoke_mode() check — so a malformed
   MOLECULE_SMOKE_TIMEOUT_SECS env value would SystemExit every
   workspace boot, not just smoke runs. Wrapped in try/except with a
   5.0 fallback. Probability of a typo'd env var hitting production
   is low (it's a CI-only knob), but the footgun is removed entirely.
   Regression test reloads the module under a malformed env value.

2. `_real_a2a_sdk_available()` caught (ImportError, AttributeError).
   `from X import Y` raises ImportError when Y is missing on X — never
   AttributeError. Dropped the unreachable branch.

No behavior change for the happy path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 21:31:08 -07:00
Hongming Wang
aacaba024c feat(wheel-smoke): exercise executor.execute() to catch lazy imports (#2275)
The existing wheel-publish smoke (`wheel_smoke.py`) only IMPORTS
`molecule_runtime.main` at module scope. Lazy imports buried inside
`async def execute(...)` bodies (e.g. `from a2a.types import FilePart`)
NEVER evaluate at static-import time — they crash at first message
delivery in production.

The 2026-04-2x v0→v1 a2a-sdk migration shipped 5 such regressions in
templates that all looked fine at module-load smoke. This change adds
`smoke_mode.py` plus a `MOLECULE_SMOKE_MODE=1` short-circuit in
`main.py`: after `adapter.create_executor(...)`, the boot path invokes
`executor.execute(stub_ctx, stub_queue)` once with a 5s timeout
(`MOLECULE_SMOKE_TIMEOUT_SECS`). Healthy import tree → execution
proceeds far enough to hit a network boundary and times out (exit 0).
Broken lazy import → `ImportError` / `ModuleNotFoundError` from inside
the executor body (exit 1). Other downstream errors (auth, validation)
pass — those are caught by adapter-level tests, not this gate.

Stub `(RequestContext, EventQueue)` is built from the real a2a-sdk so
SendMessageRequest/RequestContext constructor changes also surface as
import-tree failures (the regression class also includes "SDK
refactored mid-publish"). The stub-build itself is wrapped — if it
raises, that's a smoke fail too.

Phase 2 (separate PR, molecule-ci) wires this into
publish-template-image.yml so the publish gate runs the boot smoke
against every template image before pushing the tag.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 21:21:18 -07:00
Hongming Wang
067ad83ce5 feat(config): add explicit provider: field alongside model:
Adds a top-level `provider` slug to WorkspaceConfig and RuntimeConfig so
adapters can route to a specific gateway without re-implementing
slug-prefix parsing across hermes / claude-code / codex.

Resolution chain in load_config (mirrors how `model` resolves):

  1. ``LLM_PROVIDER`` env var — what canvas Save+Restart sets so the
     operator's Provider dropdown choice survives a CP-driven restart
     (the regenerated /configs/config.yaml drops most user fields).
  2. Explicit YAML ``provider:`` — operator pinned it in the file.
  3. Derive from the model slug prefix for backward compat:
       ``anthropic:claude-opus-4-7`` → ``anthropic``
       ``minimax/abab7-chat-preview`` → ``minimax``
       bare model names → ``""`` (let the adapter decide).

`runtime_config.provider` falls back to the top-level resolved
provider, the same shape PR #2438 added for `runtime_config.model`.

Why a separate field at all (we already parse the slug):
  - Custom model aliases without a recognizable prefix need an
    explicit signal — the canvas Provider dropdown writes it.
  - Adapters were each rolling their own slug-parse (hermes's
    derive-provider.sh, claude-code's adapter-default branch, etc.);
    one resolution point in load_config kills that drift class.
  - Canvas needs a stable storage field that doesn't get clobbered
    every time the user picks a new model.

Backward-compatible: when `provider:` is absent, slug derivation
keeps every existing config.yaml working without a migration.

PR-1 of a multi-PR stack (Option B from RFC discussion). Subsequent
PRs plumb the field through workspace-server env, CP user-data,
adapters (hermes prefers explicit over derive-provider.sh), and
canvas Provider dropdown UI.

Tests cover all four resolution paths + runtime_config inheritance:
  - test_provider_default_empty_when_bare_model
  - test_provider_derived_from_colon_slug
  - test_provider_derived_from_slash_slug
  - test_provider_yaml_explicit_wins_over_derived
  - test_provider_env_override_beats_yaml_and_derived
  - test_runtime_config_provider_yaml_wins_over_top_level
  - test_provider_default_from_default_model

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 20:47:09 -07:00
Hongming Wang
5ad41f63ce
Merge pull request #2438 from Molecule-AI/fix/runtime-config-model-fallback-clean
fix(config): runtime_config.model falls back to top-level model
2026-05-01 03:31:20 +00:00
Hongming Wang
0070d0bd59 fix(config): runtime_config.model falls back to top-level model
External feedback (2026-04-30): "Provisioner doesn't read model from
config.yaml and doesn't set MODEL env var. Without MODEL, the adapter
defaults to sonnet and bypasses the mimo routing." Confirmed accurate
for SaaS workspaces.

Trace: claude-code-default/adapter.py reads `runtime_config.model or
"sonnet"` (and hermes reads HERMES_DEFAULT_MODEL via install.sh, which
IS plumbed). For claude-code there's nothing — workspace/config.py
loaded `runtime_config.model` only from YAML, ignoring MODEL_PROVIDER
env. The CP user-data script regenerates /configs/config.yaml at every
boot with only `name`, `runtime`, `a2a` keys (intentionally minimal so
it doesn't carry stale state) — so any user-set runtime_config.model
is wiped on every restart, and the adapter falls back to "sonnet" even
when the user picked Opus in the canvas Config tab.

Fix: when YAML omits runtime_config.model, fall back to the top-level
resolved `model`, which already honors MODEL_PROVIDER env override.
One-line in workspace/config.py. Now MODEL_PROVIDER → top-level model
→ runtime_config.model → adapter sees the user's selection. Sticky
across CP-driven restarts; the canvas Save+Restart loop works as
intended for every runtime, not just hermes.

Tests:
  test_runtime_config_model_falls_back_to_top_level — top-level set, runtime_config empty → fallback wins
  test_runtime_config_model_yaml_wins_over_top_level — YAML explicit → fallback skipped (precedence)
  test_runtime_config_model_picks_up_env_via_top_level — full canvas Save+Restart simulation: env → top-level → runtime_config.model

Negative-control verified: removing the `or model` flips both fallback
tests red with the expected "" vs expected-model mismatch; restoring
flips them green. The yaml-wins test passes either way (correctly,
because precedence is preserved).

Replaces closed PR #2435 — that PR's commit was on a contaminated
branch and accidentally captured unrelated WIP changes (build script
+ a2a_mcp_server refactor) instead of this fix. Self-review caught it
and closed the PR. This branch is clean off main + diff verified
before push.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 20:28:50 -07:00
Hongming Wang
0a3ec53f34 feat(mcp): notifications/claude/channel for push-feel inbox UX
Adds a notification seam to the universal molecule-mcp wheel so push-
notification-capable MCP hosts (Claude Code today; any compliant
client tomorrow) get inbound A2A messages as conversation interrupts
instead of having to poll wait_for_message / inbox_peek.

Wire-up:
- inbox.py: module-level _NOTIFICATION_CALLBACK + set_notification_callback()
  Fires from InboxState.record() AFTER lock release, with same dict
  shape inbox_peek returns. Best-effort — a raising callback never
  prevents the message from landing in the queue.
- a2a_mcp_server.py: _build_channel_notification() pure helper +
  bridge wiring in main() that schedules notifications via
  asyncio.run_coroutine_threadsafe (poller is a daemon thread, MCP
  loop is asyncio).
- Method name 'notifications/claude/channel' matches the contract
  documented in molecule-mcp-claude-channel/server.ts:509.
- wheel_smoke.py: pin set_notification_callback as a published name,
  same regression class as the 0.1.16 main_sync incident.

Pollers (wait_for_message / inbox_peek) keep working unchanged for
runtimes without notification support.

Tests: 6 new in test_inbox.py (callback fires once on record, dedupe
short-circuits before fire, raising cb doesn't break inbox, set/clear
semantics), 5 new in test_a2a_mcp_server.py (method name pin, content
mapping, meta routing, no-id JSON-RPC notification spec, missing-
field tolerance). All 59 combined tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 20:10:01 -07:00
Hongming Wang
c4bb803329 feat(mcp_cli): agent_card from env vars (capability discovery)
External molecule-mcp runtimes register with hardcoded agent_card.name
= molecule-mcp-{id[:8]} and skills=[]. That made every external
workspace look identical on the canvas and gave peer agents calling
list_peers no signal beyond name — they had to guess capabilities.

Three new env vars let the operator declare identity + capabilities
without code changes:

  * MOLECULE_AGENT_NAME — display name on canvas (default unchanged)
  * MOLECULE_AGENT_DESCRIPTION — one-line description (default empty)
  * MOLECULE_AGENT_SKILLS — comma-separated skill names

Comma-separated skills get expanded to {"name": "..."} objects — the
minimum shape that satisfies both shared_runtime.summarize_peers
(reads s["name"]) AND canvas SkillsTab.tsx (id falls back to name).

Strict-superset behaviour: when no env vars are set, agent_card
matches the previous hardcoded value exactly. No regression for
operators who haven't migrated.

Why this matters end-to-end:
  * Canvas Skills tab now shows each declared skill as a chip
  * Peer agents calling list_peers see {name, skills} per peer and
    can route delegations to the right specialist
  * Same applies to the canvas Details tab + workspace card hover

Tests cover: defaults match prior behaviour; name override; CSV →
skill objects; whitespace stripping + empty entries dropped;
description omitted when unset (keeps wire payload minimal);
whitespace-only name falls back to default; end-to-end through
_platform_register's payload.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 18:57:39 -07:00
Hongming Wang
210f6e066a
Merge pull request #2424 from Molecule-AI/fix/in-container-heartbeat-persists-inbound-secret
fix(workspace): in-container heartbeat persists platform_inbound_secret
2026-05-01 01:36:52 +00:00
Hongming Wang
d887ce8e96 fix(mcp_cli): escalate consecutive heartbeat 401s with re-onboard guidance
The universal molecule-mcp wheel runs in a daemon thread, posting
/registry/heartbeat every 20s. When the workspace gets deleted
server-side (DELETE /workspaces/:id), the platform revokes all tokens
for that workspace. Previous behaviour: heartbeat would 401 forever,
log at WARNING per tick, no actionable signal anywhere.

Failure mode hit on hongmingwang tenant 2026-04-30: workspace
a1771dba was deleted at some prior time, the channel-bridge .env
still pointed at it, MCP tools 401-ed silently with the operator
having no idea why. The register-time path at mcp_cli.py:104-111
already does loud + actionable for 401 (sys.exit(3) with regenerate-
from-canvas-Tokens text) — extend the same pattern to the heartbeat.

Behaviour:
  * count < 3: WARNING per tick (could be transient blip)
  * count == 3: ERROR with re-onboard instructions, names the dead
    workspace_id, points at the canvas Tokens tab
  * count > 3 and every 20 ticks (~7 min): re-log ERROR so a session
    that started after the first ERROR still catches it

5xx and other non-auth HTTP errors do NOT increment the auth-failure
counter — that would mislead the operator (e.g. a server blip would
trigger "token revoked" when the token is fine).

Tests cover: single 401 stays at WARNING; 3 consecutive 401s escalate
to ERROR with the right keywords; 403 treated identically; recovery
via 200 resets the counter; 5xx never triggers the auth path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 18:26:35 -07:00
Hongming Wang
98845c8f42 fix(workspace): in-container heartbeat persists platform_inbound_secret
Follow-up to PR #2421. The standalone wrapper (mcp_cli.py) got
heartbeat-time secret persistence in #2421, but the in-container
heartbeat (workspace/heartbeat.py) was missed — and that's the path
every workspace EC2 actually runs. Result: hongmingwang Claude Code
agent stayed 401-forever on chat upload after this morning's deploy
because the workspace's runtime never picked up the lazy-healed
secret.

The in-container _loop now captures the heartbeat response and calls
the same _persist_inbound_secret_from_heartbeat helper used by the
standalone path, on both the first POST and the 401-retry POST.
Defensive on every error (non-JSON, non-dict, empty, save failure) —
liveness contract trumps secret persistence.

Tests pin: happy path, absent secret, empty string, non-JSON body,
non-dict body, save_inbound_secret OSError, end-to-end loop.
2026-04-30 18:18:10 -07:00
Hongming Wang
c733454a56
Merge pull request #2421 from Molecule-AI/fix/heartbeat-delivers-inbound-secret
fix(workspace): deliver platform_inbound_secret on every heartbeat
2026-05-01 00:54:00 +00:00
Hongming Wang
993f8c494e refactor(workspace-runtime): send_a2a_message takes peer_id, validates UUID
Two cleanups stacked on PR #2418:

1. Refactor `send_a2a_message(target_url, msg)` →
   `send_a2a_message(peer_id, msg)`. After #2418 every caller passes
   `${PLATFORM_URL}/workspaces/{peer_id}/a2a` — the function's
   parameter pretended to accept arbitrary URLs but in practice only
   one shape is meaningful. Owning URL construction inside the
   function makes the contract honest and centralises the peer-id
   validation introduced below.

2. Add `_validate_peer_id` UUID-shape check at the trust boundary.
   `discover_peer` and `send_a2a_message` are the entry points where
   agent-controlled strings flow into URL paths; rejecting non-UUID
   input at this layer eliminates the URL-interpolation class of
   bug (`workspace_id="../admin"` etc.) regardless of how the rest
   of the codebase interpolates ids elsewhere. Auth was already
   gating malicious access — this is consistency + clear failure
   over silent platform 4xx.

In-container tests cover positive UUIDs, malformed input
(``"ws-abc"``, ``"../admin"``, empty), and the contract that
``tool_delegate_task`` hands the peer_id to ``send_a2a_message``
without building URLs itself.

Live-verified: external delegation 8dad3e29 → 97ac32e9 returned
"refactor verified" from Claude Code Agent through the refactored
code; ``_validate_peer_id`` rejects ``"ws-abc"`` and ``"../admin"``
and accepts canonical UUIDs.

Stacked on PR #2418 (proxy-routing fix). Will rebase onto staging
once #2418 merges.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 17:43:01 -07:00
Hongming Wang
a5c5139e3a fix(workspace): deliver platform_inbound_secret on every heartbeat
Heartbeat now echoes the workspace's platform_inbound_secret on every
beat (mirroring /registry/register), and the molecule-mcp client
persists it to /configs/.platform_inbound_secret on receipt.

Symptom (2026-04-30, hongmingwang tenant): chat upload returned 503
"workspace will pick it up on its next heartbeat" and then 401 on
retry — permanent until workspace restart. The 503 message was a lie:
heartbeat used to discard the platform_inbound_secret entirely; only
register delivered it, and register fires once at startup.

Server (Go):
  - Heartbeat handler reuses readOrLazyHealInboundSecret (the same
    helper chat_files + register use), so heartbeat-time recovery
    covers the rotate / mid-life NULL-column case the existing
    register-time heal can't reach.
  - Failure is non-fatal: liveness contract trumps secret delivery,
    chat_files retries lazy-heal on its own next request.

Client (Python):
  - _persist_inbound_secret_from_heartbeat parses the heartbeat 200
    response and persists via platform_inbound_auth.save_inbound_secret.
  - All exceptions swallowed — heartbeat liveness > secret persistence;
    next tick (≤20s) retries.

Tests:
  - Server: pin secret-present, lazy-heal-mint-on-NULL, and heal-
    failure-omits-field branches.
  - Client: pin persist-on-200, skip-on-empty, skip-on-non-dict-body,
    skip-on-401, swallow-save-OSError.
2026-04-30 17:36:33 -07:00
Hongming Wang
aefb44aff2 fix(workspace-runtime): route delegate_task through platform A2A proxy
tool_delegate_task was POSTing directly to peer["url"], which is
the Docker-internal hostname (e.g. http://ws-X-Y:8000) for in-
container peers. External callers — the standalone molecule-mcp
wrapper running on an operator's laptop — get [Errno 8] nodename
nor servname every single delegation, breaking the universal-MCP
path's last "ride the same code as in-container" claim.

The platform's /workspaces/:peer-id/a2a proxy endpoint already
handles internal forwarding for in-container peers AND is the only
path external runtimes can use. Unify on it: in-container callers
pay one extra HTTP hop on the same Docker bridge (microseconds);
external callers get a working delegation path for the first time.

discover_peer is still called for access-control + online-status
detection — only the routing target changes. Verified live on
2026-04-30 against workspace 8dad3e29 (external mac runtime) →
97ac32e9 (Claude Code Agent in-container): direct POST returned
ConnectError, proxy POST returned "acknowledged from claude code
agent" as requested.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 17:13:50 -07:00
Hongming Wang
d061642cfc test(inbox): bind side-effecting pop() before assert
CodeQL flagged the bare `assert state.pop(...) is None` — under
`python -O` asserts are stripped, which would skip the call entirely
and the test would silently pass without exercising the code. Bind
the result first so the call always runs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 16:39:45 -07:00
Hongming Wang
b47d4ceb00 feat(workspace-runtime): add inbox polling for standalone molecule-mcp path
The universal MCP server (a2a_mcp_server.py) was outbound-only — agents
in standalone runtimes (Claude Code, hermes, codex, etc.) could
delegate, list peers, and write memories, but never observed the
canvas-user or peer-agent messages addressed to them. This blocked
"constantly responding" loops without forcing operators back onto a
runtime-specific channel plugin.

This PR closes the inbound gap with a poller-fed in-memory queue and
three new MCP tools:

  - wait_for_message(timeout_secs?) — block until next message arrives
  - inbox_peek(limit?)              — list pending messages (non-destructive)
  - inbox_pop(activity_id)          — drop a handled message

A daemon thread polls /workspaces/:id/activity?type=a2a_receive every
5s, fills the queue from the cursor (since_id), and persists the cursor
to ${CONFIGS_DIR}/.mcp_inbox_cursor so a restart doesn't replay backlog.
On 410 (cursor pruned) we fall back to since_secs=600 for a bounded
recovery window. Activity-row → InboxMessage extraction mirrors the
molecule-mcp-claude-channel plugin's extractText (envelope shapes #1-3
+ summary fallback).

mcp_cli.main starts the poller alongside the existing register +
heartbeat threads. In-container runtimes (which have push delivery via
canvas WebSocket) skip activation, so inbox tools return an
informational "(inbox not enabled)" message instead of double-delivery.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 16:32:48 -07:00
Hongming Wang
b54ceb799f fix: address 5-axis review findings on PR #2413
Critical:
- ExternalConnectModal.tsx: filledUniversalMcp substitution searched
  for WORKSPACE_AUTH_TOKEN but the snippet's placeholder is now
  MOLECULE_WORKSPACE_TOKEN (changed in the previous polish commit
  876c0bfc). Operators copy-pasting the MCP tab would have gotten a
  literal "<paste from create response>" instead of the token. Fix
  the substitution to match the new placeholder name.

Important:
- mcp_cli._platform_register: 401/403 from initial register now hard-
  exits with code 3 + an actionable stderr message pointing the
  operator at the canvas Tokens tab. Pre-fix: warning log + continue,
  which made a bad-token startup silently fail (heartbeat 401's
  forever, every tool call also 401's, no clear surfacing in the
  operator's MCP client). 500/503 still log + continue (transient
  platform blips shouldn't abort the MCP loop).
- a2a_mcp_server.cli_main docstring: removed stale claim that this is
  the wheel's console-script entry-point target. The actual target is
  mcp_cli.main since 2026-04-30. Wheel-smoke pins both names so the
  functionality was correct, but the doc was lying.

Test coverage: 3 new mcp_cli tests:
  - register 401 exits code=3 + stderr mentions canvas Tokens tab
  - register 403 (C18 hijack rejection) takes same path
  - register 500/503 does NOT exit — only auth errors hard-fail

Findings deferred to follow-up (acceptable per review rubric):
  - Code dedup across mcp_cli / heartbeat.py / molecule_agent SDK
  - Pooled httpx.Client for connection reuse
  - Heartbeat exponential backoff
  - Token-resolution ordering parity (env-first vs file-first)
    between mcp_cli.main and platform_auth.get_token

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 16:06:59 -07:00
Hongming Wang
427300f3a4 feat: make molecule-mcp standalone (built-in register + heartbeat) + recover awaiting_agent on heartbeat
Two paired fixes that together let an external operator run a single
process (molecule-mcp) and see their workspace come up online in the
canvas — the bug surfaced live when status stuck at "awaiting_agent /
OFFLINE" despite an active MCP server.

Platform side (workspace-server/internal/handlers/registry.go):
  Heartbeat handler already auto-recovers offline → online and
  provisioning → online, but NOT awaiting_agent → online. Healthsweep
  flips stale-heartbeat external workspaces TO awaiting_agent, and
  with no recovery path the workspace stays "OFFLINE — Restart" in the
  canvas forever. Add the symmetric branch: if currentStatus ==
  "awaiting_agent" and a heartbeat arrives, flip to online + broadcast
  WORKSPACE_ONLINE. Mirrors the existing offline/provisioning patterns
  exactly. Test: TestHeartbeatHandler_AwaitingAgentToOnline asserts
  the SQL UPDATE fires with the awaiting_agent guard clause.

Wheel side (workspace/mcp_cli.py):
  molecule-mcp was outbound-only — operators had to run a separate
  SDK process to register + heartbeat. Now mcp_cli.main():
    1. Calls /registry/register at startup (idempotent upsert flips
       status awaiting_agent → online via the existing register path).
    2. Spawns a daemon thread that POSTs /registry/heartbeat every
       20s. 20s is comfortably under the healthsweep stale window so
       a single missed beat doesn't cause status churn.
    3. Runs the MCP stdio loop in the foreground.

  Both calls set Origin: ${PLATFORM_URL} so the SaaS edge WAF accepts
  them. Threaded heartbeat (not asyncio) chosen because it doesn't
  need to share an event loop with the MCP stdio server — daemon=True
  cleanly dies when the operator's runtime exits.

  MOLECULE_MCP_DISABLE_HEARTBEAT=1 escape hatch lets in-container
  callers (which have heartbeat.py running already) reuse the entry
  point without double-heartbeating. Default is enabled.

End-to-end verification (live, against
hongmingwang.moleculesai.app, workspace 8dad3e29-...):
  pre-fix:  status=awaiting_agent → canvas shows OFFLINE forever
  post-fix: ran `molecule-mcp` for 5s standalone → canvas state:
            status=online runtime=external agent=molecule-mcp-8dad3e29

Test coverage: 7 new mcp_cli tests (register-at-startup, heartbeat-
thread-spawned, disable-env-skips-both, env-and-file token resolution,
register payload shape, heartbeat endpoint + headers); 1 new platform
test (awaiting_agent → online recovery). Full workspace + handlers
suites green: 1355 Python, full Go handlers passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:42:44 -07:00
Hongming Wang
74c5e0d7a8 fix(workspace-runtime): add Origin header so SaaS edge WAF accepts MCP tool calls
Discovered while smoke-testing the molecule-mcp external-runtime path
against a live tenant (hongmingwang.moleculesai.app). Every tool call
that hit /workspaces/* or /registry/*/peers returned 404 — but
/registry/register and /registry/heartbeat returned 200. Diagnosis:
the tenant's edge WAF requires a same-origin header. Without it,
unhandled paths get silently rewritten to the canvas Next.js app,
which has no /workspaces or /registry/:id/peers route and returns an
empty 404. The molecule-mcp-claude-channel plugin already sets this
header (server.ts:271-276); the workspace runtime never did because
in-container PLATFORM_URLs (Docker network) aren't behind the WAF.

Fix: extend platform_auth.auth_headers() to include
Origin: ${PLATFORM_URL} whenever PLATFORM_URL is set. Inside-container
behavior is unchanged (the WAF is path-irrelevant for the internal
hostnames). External-runtime calls now thread the WAF correctly.

Verification (live, against a freshly-registered external workspace):
  pre-fix:  get_workspace_info → "not found", list_peers → 404
  post-fix: get_workspace_info → full workspace JSON,
            list_peers → "Claude Code Agent (ID: 97ac32e9..., status: online)"

This is the kind of bug unit tests can never catch — caught only by
running the wheel against the real tenant. Memory:
feedback_always_run_e2e.md.

Test coverage: 4 new tests in test_platform_auth.py — Origin alone
when no token + Origin + Authorization both, no-PLATFORM_URL falls
through to original empty-dict behavior, env-token path with Origin.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:30:15 -07:00
Hongming Wang
169e284d57 feat(workspace-runtime): expose universal MCP server to runtime=external operators
Ship the baseline universal MCP path that any external runtime (Claude
Code, hermes, codex, anything that speaks MCP stdio) can use, before
optimizing per-runtime channels. Today the workspace MCP server only
spins up inside the container; external operators have no way to call
the 8 platform tools (delegate_task, list_peers, send_message_to_user,
commit_memory, etc.) from outside.

Three additive changes:

1. **`platform_auth.get_token()` env-var fallback** — adds
   `MOLECULE_WORKSPACE_TOKEN` as a fallback when no
   `${CONFIGS_DIR}/.auth_token` file exists. File-first preserves
   in-container behavior unchanged. External operators (no /configs
   volume) now have a way to supply the token without faking the
   filesystem layout.

2. **`molecule-mcp` console script** — adds a new entry point in the
   published `molecule-ai-workspace-runtime` PyPI wheel. Operators run
   `pip install molecule-ai-workspace-runtime`, set 3 env vars
   (WORKSPACE_ID, PLATFORM_URL, MOLECULE_WORKSPACE_TOKEN), and register
   the binary in their agent's MCP config. `mcp_cli.main` is a thin
   validator wrapper — it checks env BEFORE importing the heavy
   `a2a_mcp_server` module so a misconfigured first-run gets a friendly
   3-line error instead of a 20-line module-level RuntimeError
   traceback.

3. **Wheel smoke gate** — extends `scripts/wheel_smoke.py` to assert
   `cli_main` and `mcp_cli.main` are importable. Same regression class
   as the 0.1.16 main_sync incident: a silent rename or unrewritten
   import here would break every external operator on the next wheel
   publish (memory: feedback_runtime_publish_pipeline_gates.md).

Test coverage:
- `tests/test_platform_auth.py` — 8 new tests for the env-var fallback:
  file-priority, env-fallback, whitespace handling, cache, header
  construction, empty-env-as-unset.
- `tests/test_mcp_cli.py` — 8 new tests for the validator: each
  required var separately, file-or-env satisfies token requirement,
  whitespace-only env treated as missing, help mentions canvas Tokens
  tab.
- Full `workspace/tests/` suite green: 1346 passed, 1 skipped.
- Local end-to-end: built wheel, installed in venv, ran `molecule-mcp`
  with no env → friendly error; with env → MCP server starts.

Why now / why this shape: user redirect was "support the baseline
first so all runtimes can use, then optimize". A claude-only MCP
channel leaves hermes/codex/third-party operators broken on
runtime=external. This PR ships the runtime-agnostic baseline; per-
runtime polish (claude-channel push delivery, hermes-native
bindings) is a follow-up PR. PR #2412 fixed the partner bug where
canvas Restart silently revoked the operator's token — the two
together unblock the external-runtime story end-to-end.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:20:19 -07:00
Hongming Wang
3b34dfefbc feat(workspace): surface peer-discovery failure reason instead of "may be isolated"
Closes #2397. Today, every empty-peer condition (true empty, 401/403, 404,
5xx, network) collapses to a single message: "No peers available (this
workspace may be isolated)". The user has no way to tell whether they need
to provision more workspaces (true isolation), restart the workspace
(auth), re-register (404), page on-call (5xx), or check network (timeout) —
five different operator actions, one ambiguous string.

Wire:
  - new helper get_peers_with_diagnostic() in a2a_client.py returns
    (peers, error_summary). error_summary is None on 200; a short
    actionable string on every other branch.
  - get_peers() now shims through it so non-tool callers (system-prompt
    formatters) keep the bare-list contract.
  - tool_list_peers() switches to the diagnostic helper and surfaces the
    actual reason. The "may be isolated" string is removed; true empty
    now reads "no peers in the platform registry."

Tests:
  - TestGetPeersWithDiagnostic: 200, 200-empty, 401, 403, 404, 5xx,
    network exception, 200-but-non-list-body, and the bare-list-shim
    regression guard.
  - TestToolListPeers: each diagnostic branch surfaces its reason +
    explicit assertion that "may be isolated" is gone.

Coverage 91.53% (floor 86%). 122 a2a tests pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 11:09:26 -07:00
Hongming Wang
899a2231d6 test(platform_auth): module-functions signature snapshot drift gate
Pin the 5 public functions adapters and the runtime hot-path import
through ``from platform_auth import``:

  - ``auth_headers`` — every outbound httpx call merges this in
  - ``self_source_headers`` — A2A peer + self-message header builder
  - ``get_token`` — main.py reads on boot to decide register-vs-resume
  - ``save_token`` — main.py persists the platform-issued token
  - ``refresh_cache`` — 401-retry path drops in-process cache (#1877)

A grep across workspace/ shows 14+ runtime modules import these:
main.py, heartbeat.py, a2a_client.py, a2a_tools.py, consolidation.py,
events.py, executor_helpers.py (3 sites), molecule_ai_status.py,
builtin_tools/memory.py (3 sites), builtin_tools/temporal_workflow.py
(2 sites). Renaming any of the five (e.g. ``auth_headers`` →
``bearer_headers``) makes every one of those imports raise ImportError
at workspace boot — the failure surface is deep in heartbeat init,
nowhere near the rename site.

Same drift class as the BaseAdapter signature snapshot (#2378, #2380),
skill_loader gate (#2381), runtime_wedge gate (#2383). Reuses the
``_signature_snapshot.py`` helpers shipped in #2381.

Defense-in-depth: ``test_snapshot_has_required_functions`` asserts
the five names are still present, so removing one even with a
synchronized snapshot edit forces an explicit edit here with a
justification.

``clear_cache`` is intentionally NOT in the snapshot — it's a
test-only helper. Production code MUST NOT depend on it.

Verified red on deliberate rename: ``auth_headers`` →
``bearer_headers`` produces a clean diff of the missing function in
the failure message. Restored before commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 08:41:42 -07:00
Hongming Wang
70176e6c8f test(runtime_wedge): module-functions signature snapshot drift gate
BaseAdapter docstring tells adapter authors:

> ``runtime_wedge.mark_wedged()`` / ``clear_wedge()`` — flip the
> workspace to ``degraded`` + auto-recover when your SDK hits a
> non-recoverable error class. Import directly from ``runtime_wedge``;
> the heartbeat forwards the state to the platform automatically.

That's a contract — adapter templates depend on the four module-level
functions (``is_wedged``, ``wedge_reason``, ``mark_wedged``,
``clear_wedge``) being importable by those exact names with those
exact signatures. Renaming any silently breaks every adapter that
calls them: the import resolves the module fine, the
``AttributeError`` only surfaces when the adapter actually hits its
first SDK error — long after the rename merges.

Same drift class as #2378 / #2380 / #2381 (BaseAdapter, skill_loader)
applied to the module-level function surface.

Changes:

  - tests/_signature_snapshot.py gains build_module_functions_record.
    Walks a module's public top-level functions, optionally filtered
    to a specific name list (used here — runtime_wedge has internal
    helpers like reset_for_test that intentionally aren't part of
    the contract). Skips re-exports via __module__ check so a
    `from foo import bar` doesn't pollute the snapshot.
  - tests/test_runtime_wedge_signature.py snapshots the four
    contract functions. Plus a defense-in-depth required-functions
    test that catches removal even when source + snapshot are
    updated together.

Verified: deliberately renaming `mark_wedged` → `mark_wedged_RENAMED`
trips the gate with full snapshot diff in the failure message.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 07:01:10 -07:00
Hongming Wang
e336688278 test: extract shared signature-snapshot helpers + skill_loader gate
Two changes in one PR (tightly coupled — the second wouldn't make
sense without the first):

1. Hoist the inspect-based snapshot helpers out of
   test_adapter_base_signature.py into tests/_signature_snapshot.py
   so future surfaces don't copy-paste introspection logic.

   - build_class_signature_record(cls): walks public methods,
     unwraps static/class/abstract methods, returns a stable
     {class, methods: [...]} dict.
   - build_dataclass_record(cls): walks dataclass fields via
     dataclasses.fields(), returns {name, frozen, fields: [...]}.
   - compare_against_snapshot(actual, path): writes-on-first-run +
     diff-on-drift, with both expected and actual JSON in failure
     message.

   test_adapter_base_signature.py is rewritten to use the helpers;
   the existing snapshot file is byte-identical (no behavior change).

2. New gate: tests/test_skill_loader_signature.py covers the
   public dataclasses exported from skill_loader/loader.py:

   - SkillMetadata: every adapter pattern-matches on .runtime for
     skill-compat filtering. Renaming this field would silently
     break per-adapter skill loading — the loader still returns
     objects, but adapters' `if "*" in skill.metadata.runtime`
     raises AttributeError at workspace boot.
   - LoadedSkill: returned in SetupResult.loaded_skills.

   Includes test_snapshot_has_required_skill_metadata_fields
   defense-in-depth: ensures the runtime / id / name / description
   fields stay even if both source and snapshot are updated together.

Verified: deliberately renaming SkillMetadata.runtime trips the
gate with full snapshot diff in the failure message.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 06:27:20 -07:00