Demo-day preparation bundle for the funding demo (~2026-05-06). Adds:
- scripts/demo-freeze.sh — captures current ghcr.io
workspace-template-* :latest digests for all 8 runtimes, then
disables both cascade vectors that could re-tag :latest mid-demo:
publish-runtime.yml in molecule-core (PATH 1 — staging push to
workspace/** auto-bumps the wheel and fans out to 8 templates) and
publish-image.yml in each of the 8 template repos (PATH 2 — direct
template repo merge re-tags :latest). Defaults to dry-run; requires
--execute to apply. Writes both digest + workflow receipts to
scripts/demo-freeze-snapshots/.
- scripts/demo-thaw.sh — re-enables every workflow demo-freeze.sh
disabled, keyed off the receipt timestamp. Defaults to executing
(the inverse safety polarity from freeze, whose destructive path
defaults to dry-run); --dry-run prints without applying.
- scripts/demo-day-runbook.md — operator runbook indexing the six
rollback levers (platform image rollback, template image rollback,
tenant redeploy, workspace delete, Railway rollback, Vercel
rollback) plus pre-warm timing and post-demo cleanup. Also covers
read-only diagnostics for "is this working?" moments and the
CP_ADMIN_API_TOKEN rotation step that must follow the demo (the token
gets copy-pasted into shells during incident response).
- scripts/demo-freeze-snapshots/.gitignore — generated freeze
receipts are operational state, not source. Tracked .gitkeep so
the directory exists when the script writes to it.
Both scripts dry-run-tested locally. Did not exercise --execute since
that would actually disable production workflows mid-development.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Production incident on hongming.moleculesai.app 2026-05-01T18:30Z —
fresh-tenant signup chat upload returned 500 with the body
{"error":"failed to prepare uploads dir"}. Diagnosis required SSM
access to the workspace stderr to recover errno + actual path.
The root-cause fix lives in the claude-code template entrypoint
(molecule-ai-workspace-template-claude-code#23 — pre-create the
.molecule subtree as root before gosu drops to agent). This change
is the diagnostic improvement: when mkdir fails for any reason in
the future (EACCES, ENOSPC, EROFS, etc.), the response carries
the errno + offending path so the operator inspecting browser
devtools sees the real cause without needing SSM.
Backwards compatible — top-level "error" key is unchanged so
existing canvas / external alert rules continue to match. New
fields are additive: path, errno, detail.
Test pins the diagnostic shape so a future struct refactor can't
silently drop these fields.
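A minimal Go sketch of that additive shape (the uploadsDirError name and
handler wiring are illustrative, not the actual code):

    import (
        "errors"
        "os"
        "syscall"
    )

    // Top-level "error" key unchanged; path/errno/detail are additive.
    func uploadsDirError(err error) map[string]any {
        resp := map[string]any{
            "error":  "failed to prepare uploads dir",
            "detail": err.Error(),
        }
        var pe *os.PathError // what os.MkdirAll returns on failure
        if errors.As(err, &pe) {
            resp["path"] = pe.Path
        }
        var en syscall.Errno
        if errors.As(err, &en) {
            resp["errno"] = en.Error() // e.g. "permission denied" for EACCES
        }
        return resp
    }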
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Option B PR-5. Canvas Config tab now exposes a Provider override input
that's adapter-driven from each runtime's template — no hardcoded
provider list in the canvas. Save PUTs /workspaces/:id/provider
when dirty, with auto-restart suppression to avoid a double restart
alongside the model handler's own restart.
The dropdown's suggestion list comes from /templates →
runtime_config.providers (the field added in
molecule-ai-workspace-template-hermes PR #31). For templates that
haven't migrated to the explicit providers list yet, suggestions
derive from model[].id slug prefixes — still adapter-driven, just
inferred. This keeps existing templates working while platform team
migrates them one at a time.
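A Go rendering of that inference, for reference (the canvas helper
deriveProvidersFromModels is TypeScript; exact edge-case handling is
assumed):

    import "strings"

    // De-duplicated provider suggestions from model IDs for templates
    // that have no explicit runtime_config.providers yet.
    func deriveProvidersFromModels(modelIDs []string) []string {
        seen := map[string]bool{}
        var out []string
        for _, id := range modelIDs {
            i := strings.IndexAny(id, ":/") // "anthropic:..." / "minimax/..."
            if i <= 0 {
                continue // bare model IDs carry no provider signal
            }
            if p := id[:i]; !seen[p] {
                seen[p] = true
                out = append(out, p)
            }
        }
        return out
    }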
workspace-server changes:
- Add Providers []string field to templateSummary JSON
- Parse runtime_config.providers in /templates handler
- 2 new tests pin the surfacing + omitempty behavior
canvas changes:
- Remove hardcoded PROVIDER_SUGGESTIONS constant
- Add provider/originalProvider state + PUT-on-save logic
- Add deriveProvidersFromModels() fallback helper
- Wire RuntimeOption.providers from /templates response
- 8 new tests pin the behavior end-to-end
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mirror of PUT /model. Stores the provider slug as the LLM_PROVIDER
workspace secret so the canvas can update model + provider
independently — a user might keep the same model alias and switch
providers (route through a different gateway), or vice versa.
Forcing both into one endpoint imposes a single Save+Restart per
change; two endpoints let canvas update each as the user picks.
Plumbs through the existing chain: secret-load → envVars → CP
req.Env → user-data env exports → /configs/config.yaml (after
controlplane PR #364 lands the heredoc append).
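A hedged sketch of the handler core (clearSecret / upsertSecret /
triggerRestart are stand-ins for the existing SetModel plumbing this
mirrors; uuid is github.com/google/uuid):

    func (h *Handlers) setProvider(ctx context.Context, workspaceID, provider string) error {
        if _, err := uuid.Parse(workspaceID); err != nil {
            return errInvalidWorkspaceID // 400, the same rejection SetModel uses
        }
        if provider == "" { // empty clears, mirroring the /model contract
            return h.clearSecret(ctx, workspaceID, "LLM_PROVIDER")
        }
        if err := h.upsertSecret(ctx, workspaceID, "LLM_PROVIDER", provider); err != nil {
            return err
        }
        return h.triggerRestart(ctx, workspaceID) // one restart per save
    }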
Tests: 5 new cases mirroring SetModel/GetModel exactly — default
empty response, DB error, upsert with restart trigger, empty-clears,
invalid-UUID rejection.
Part of: Option B PR-2 (#196) — workspace-server plumbs LLM_PROVIDER
Stack: PR-1 schema (#2441 merged)
PR-2 (this) ws-server endpoint
PR-3 (#364 open) CP user-data persistence
PR-4 (pending) hermes adapter consume
PR-5 (pending) canvas Provider dropdown
#2429 review finding. The 410-Gone path issues a follow-up
`SELECT updated_at` after detecting status='removed'. If that query
fails (workspace row deleted between the two queries, transient DB
error, etc.), `removedAt` stays as Go's zero time and the JSON body
emits `"removed_at": "0001-01-01T00:00:00Z"` — a misleading timestamp
the client has to know to ignore.
Now we branch on `removedAt.IsZero()` and emit `null` for the failed
path. The actionable signal (the 410 + hint) is unchanged; only the
timestamp shape gets cleaner.
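The branch, roughly (writeJSON and the surrounding handler are
assumptions):

    var removedAtJSON any // marshals to null when the follow-up SELECT failed
    if !removedAt.IsZero() {
        removedAtJSON = removedAt.UTC().Format(time.RFC3339)
    }
    writeJSON(w, http.StatusGone, map[string]any{
        "error":      "workspace removed",
        "removed_at": removedAtJSON, // null, never 0001-01-01T00:00:00Z
        "hint":       hint,
    })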
Pinned by `TestWorkspaceGet_RemovedReturns410WithNullRemovedAtOnTimestampFetchFailure`,
which simulates the row vanishing via `sqlmock`'s `WillReturnError(sql.ErrNoRows)`.
The original `_RemovedReturns410` test now also asserts that the
happy-path timestamp is a non-null value (was just checking the key
existed).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Follow-up A to PR #2449 — that PR taught the platform to return 410
Gone for status='removed' workspaces; this PR teaches get_workspace_info
to consume that signal.
Before: every non-200 collapsed into {"error": "not found"}, which
made the 2026-04-30 incident impossible to diagnose — the operator
KNEW the workspace_id existed (they'd just registered it), but the
runtime kept reporting "not found" for a deleted-but-not-purged row.
After: 410 produces a distinct {"error": "removed", "id", "removed_at",
"hint"} dict so callers (heartbeat-loop, channel bridge, dashboard
tools) can surface "your workspace was deleted, re-onboard" instead
of "not found". Falls back to a default hint if the platform body
isn't parseable so the actionable signal doesn't depend on body
shape parity.
Two new tests:
- TestGetWorkspaceInfo.test_410_returns_removed_with_hint
- TestGetWorkspaceInfo.test_410_with_unparseable_body_falls_back_to_default_hint
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Hermes-style declarative config block grouping the cadence + verbosity knobs into
one place. Schema-only in this PR — wiring into heartbeat.py and main.py
lands in PR-3 of the #119 stack.
Two fields with live consumers waiting:
- heartbeat_interval_seconds (default 30, clamped to [5, 300])
→ heartbeat.py:134 currently has hard-coded HEARTBEAT_INTERVAL = 30
- log_level (default "INFO", uppercased at parse)
→ main.py:465 currently has hard-coded log_level="info"
Clamp band [5, 300] is intentional: sub-5s flooded the platform during
IR-2026-03-11; >5min lets crashed workspaces look healthy long enough
to mask failure. Coerce at parse so adapters and heartbeat.py can read
the value without re-validating.
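The clamp, sketched in Go (the schema is Python; the function name is
illustrative):

    // Coerce once at parse time so adapters and heartbeat.py read a
    // trusted value.
    func clampHeartbeatInterval(v float64) float64 {
        switch {
        case v < 5:
            return 5 // sub-5s flooded the platform during IR-2026-03-11
        case v > 300:
            return 300 // longer lets crashed workspaces look healthy
        default:
            return v
        }
    }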
Tests pin defaults, explicit YAML override, partial override, and
parametrized clamp behavior (10 cases including garbage strings + None).
Part of: task #119 (adopt hermes-style architecture)
Stack: PR-1 schema → PR-2 event_log → PR-3 wire consumers → PR-4 skill compat
Defense-in-depth at the endpoint level. Previously, GET /workspaces/:id
returned 200 OK with `status:"removed"` in the body for deleted
workspaces — silent-fail UX hit on the hongmingwang tenant 2026-04-30:
the channel bridge / molecule-mcp wheel had a dead workspace_id + token
in .env, get_workspace_info returned 200 → caller assumed everything
was fine, then every subsequent /registry/* call 401'd because tokens
were revoked, and operators had no idea their workspace was gone.
#2425 fixed the steady-state heartbeat path (escalate to ERROR after
3 consecutive 401s). This change is the startup-time defense — fail
loud when the operator first probes the workspace instead of waiting
for the heartbeat to sour.
The 410 body includes:
{error: "workspace removed", id, removed_at, hint: "Regenerate ..."}
Audit-trail consumers that need the body shape of a removed workspace
(admin views, "show me deleted workspaces" tooling) opt into the
legacy 200 + body via ?include_removed=true. Without this opt-in path
the audit trail becomes invisible at the API layer.
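Handler shape, hedged (writeJSON, ws, and field wiring are assumptions;
the hint string stays elided as above):

    if ws.Status == "removed" {
        if r.URL.Query().Get("include_removed") == "true" {
            writeJSON(w, http.StatusOK, ws) // legacy 200 + body for audit consumers
            return
        }
        writeJSON(w, http.StatusGone, map[string]any{
            "error":      "workspace removed",
            "id":         ws.ID,
            "removed_at": ws.RemovedAt,
            "hint":       "Regenerate ...", // actionable re-onboard pointer
        })
        return
    }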
Two new tests pinned:
- TestWorkspaceGet_RemovedReturns410
- TestWorkspaceGet_RemovedWithIncludeQueryReturns200
Follow-ups in separate PRs:
- Update workspace/a2a_client.py get_workspace_info to surface
"removed" specifically rather than collapsing into "not found"
- Update channel bridge getWorkspaceInfo (server.ts) to detect 410
→ log clear "workspace was deleted, re-onboard" error
- Audit canvas/* + admin tooling consumers that may rely on the
legacy 200 + status:"removed" shape; switch them to the
?include_removed=true opt-in if needed
- Update docs (runtime-mcp.mdx Troubleshooting + external-agents.mdx
lifecycle table)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two follow-ups from the #2275 Phase 1 self-review:
1. `_SMOKE_TIMEOUT_SECS = float(os.environ.get(...))` was evaluated at
module load. main.py imports smoke_mode unconditionally — before
the is_smoke_mode() check — so a malformed
MOLECULE_SMOKE_TIMEOUT_SECS env value would SystemExit every
workspace boot, not just smoke runs. Wrapped in try/except with a
5.0 fallback. Probability of a typo'd env var hitting production
is low (it's a CI-only knob), but the footgun is removed entirely.
Regression test reloads the module under a malformed env value;
the fallback shape is sketched after this list.
2. `_real_a2a_sdk_available()` caught (ImportError, AttributeError).
`from X import Y` raises ImportError when Y is missing on X — never
AttributeError. Dropped the unreachable branch.
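Fix 1's shape, rendered in Go for reference (the real module is Python;
the function name is illustrative):

    import (
        "os"
        "strconv"
    )

    // Never let a malformed CI-only env knob kill every workspace boot.
    func smokeTimeoutSecs() float64 {
        v, err := strconv.ParseFloat(os.Getenv("MOLECULE_SMOKE_TIMEOUT_SECS"), 64)
        if err != nil || v <= 0 {
            return 5.0 // fallback instead of failing at module load
        }
        return v
    }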
No behavior change for the happy path.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The existing wheel-publish smoke (`wheel_smoke.py`) only IMPORTS
`molecule_runtime.main` at module scope. Lazy imports buried inside
`async def execute(...)` bodies (e.g. `from a2a.types import FilePart`)
NEVER evaluate at static-import time — they crash at first message
delivery in production.
The 2026-04-2x v0→v1 a2a-sdk migration shipped 5 such regressions in
templates that all looked fine at module-load smoke. This change adds
`smoke_mode.py` plus a `MOLECULE_SMOKE_MODE=1` short-circuit in
`main.py`: after `adapter.create_executor(...)`, the boot path invokes
`executor.execute(stub_ctx, stub_queue)` once with a 5s timeout
(`MOLECULE_SMOKE_TIMEOUT_SECS`). Healthy import tree → execution
proceeds far enough to hit a network boundary and times out (exit 0).
Broken lazy import → `ImportError` / `ModuleNotFoundError` from inside
the executor body (exit 1). Other downstream errors (auth, validation)
pass — those are caught by adapter-level tests, not this gate.
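The gate's pass/fail table, sketched in Go (the smoke itself is Python;
execute and isImportFailure are hypothetical stand-ins for the executor
call and the ImportError/ModuleNotFoundError check):

    func runSmoke(execute func(context.Context) error) int {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()
        switch err := execute(ctx); {
        case errors.Is(err, context.DeadlineExceeded):
            return 0 // healthy import tree reached a network boundary
        case isImportFailure(err):
            return 1 // broken lazy import inside the executor body
        default:
            return 0 // auth/validation failures belong to adapter-level tests
        }
    }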
Stub `(RequestContext, EventQueue)` is built from the real a2a-sdk so
SendMessageRequest/RequestContext constructor changes also surface as
import-tree failures (the regression class also includes "SDK
refactored mid-publish"). The stub-build itself is wrapped — if it
raises, that's a smoke fail too.
Phase 2 (separate PR, molecule-ci) wires this into
publish-template-image.yml so the publish gate runs the boot smoke
against every template image before pushing the tag.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two fixes plus a nit cleanup from /code-review-and-quality on PR #2445:
1. **KI-005 hierarchy check parity with /terminal**
HandleConnect runs the KI-005 cross-workspace guard before dispatch
(terminal.go:85-106): when X-Workspace-ID is set and != :id, validate
the bearer's workspace binding then call canCommunicateCheck. Without
this, an org-level token holder in tenant Foo can probe any
workspace's diagnostic state by guessing the UUID — same enumeration
vector KI-005 closed for /terminal in #1609. Per-workspace bearer
tokens are URL-bound by WorkspaceAuth, so the gap is org tokens
within the same tenant.
Fix: copy the same gate into HandleDiagnose, before the
instance_id SELECT.
Test: TestHandleDiagnose_KI005_RejectsCrossWorkspace stubs
canCommunicateCheck=false and confirms 403 fires before the DB
lookup (sqlmock's ExpectationsWereMet pins that we never reached
the SELECT COALESCE). Mirrors the existing
TestTerminalConnect_KI005_RejectsUnauthorizedCrossWorkspace.
2. **Race-free tunnel stderr capture (syncBuf)**
strings.Builder isn't goroutine-safe. os/exec spawns a background
goroutine that copies the subprocess's stderr fd to cmd.Stderr's
Write, so reading the buffer's String() from the request goroutine
on wait-for-port timeout while the tunnel may still be writing is
a data race that `go test -race` flags. Worst-case impact in
production is a garbled Detail string (not a crash), but the fix
is small.
Fix: wrap bytes.Buffer in a sync.Mutex (the syncBuf type sketched
after this list). Same io.Writer interface, no API changes elsewhere.
3. **Nit cleanup**
- read-pubkey failure now reports as its own step name instead of
a duplicated "ssh-keygen" entry — disambiguates two different
failure modes that previously shared a name.
- Replaced the hand-rolled numToString int-to-string helper with
strconv.Itoa in the test (no import-savings reason justified it).
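The syncBuf type from fix 2 (field names assumed; the io.Writer
contract is the point):

    type syncBuf struct {
        mu  sync.Mutex
        buf bytes.Buffer
    }

    // Write keeps the io.Writer contract for cmd.Stderr.
    func (b *syncBuf) Write(p []byte) (int, error) {
        b.mu.Lock()
        defer b.mu.Unlock()
        return b.buf.Write(p)
    }

    // String is now safe to call from the request goroutine on timeout.
    func (b *syncBuf) String() string {
        b.mu.Lock()
        defer b.mu.Unlock()
        return b.buf.String()
    }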
Suite: 4 diagnose tests pass with -race; full handlers suite passes
in 3.95s. go vet clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
GET /workspaces/:id/terminal/diagnose runs the same per-stage pipeline as
/terminal (ssh-keygen → EIC send-key → tunnel → ssh) but non-interactively
and returns JSON. Each stage reports {name, ok, duration_ms, error,
detail}, plus a top-level first_failure naming the broken stage.
Why: when the canvas terminal silently disconnects ("Session ended" with
no error frame — the user-reported failure mode on hongmingwang's hermes
workspace), there is no remote-readable signal of WHICH stage failed.
The ssh client's stderr lives only in the workspace-server's stdout on
the tenant CP EC2 — invisible without shell access. /terminal can't
expose stderr cleanly because it has already upgraded to WebSocket
binary frames by the time ssh runs. /terminal/diagnose stays pure
HTTP/JSON, so the same auth (WorkspaceAuth + ADMIN_TOKEN fallback) gives
operators a one-call probe that splits "IAM broke" (send-ssh-public-key
fails) from "tunnel/SG broke" (wait-for-port fails) from "sshd auth
broke" (ssh-probe gets Permission denied) from "shell broke" (probe
exits non-zero with stderr).
Stages mirrored from handleRemoteConnect in terminal.go:
1. ssh-keygen: ephemeral session keypair
2. send-ssh-public-key: AWS EIC API push, IAM-gated
3. pick-free-port: local port for the tunnel
4. open-tunnel: aws ec2-instance-connect open-tunnel start
5. wait-for-port: the tunnel actually listens (folds tunnel stderr
   into Detail when it doesn't)
6. ssh-probe: non-interactive `ssh ... 'echo MARKER'` that confirms
   auth + bash + the marker round-trip (CombinedOutput captures
   stderr verbatim — this is the whole reason the endpoint exists)
Local Docker workspaces (no instance_id) get a smaller probe:
container-found + container-running. Same response shape so callers
don't need to branch.
Tests stub sendSSHPublicKey / openTunnelCmd / sshProbeCmd via the
existing package-level vars (same pattern as TestSSHCommandCmd_*) so
the test suite stays hermetic — no AWS, no network. The three new
tests pin: (a) routing to remote on instance_id present,
(b) routing to local on empty instance_id, (c) the operationally
critical case — full success through wait-for-port then a probe
failure surfaces ssh stderr in the ssh-probe step's Error/Detail
with first_failure="ssh-probe".
Auth: rides on existing WorkspaceAuth middleware. Operators with the
tenant ADMIN_TOKEN (fetched via /cp/admin/orgs/:slug/admin-token) can
probe any workspace without per-workspace token; same admin path as
the canvas dashboard reads workspace activity.
Response always returns HTTP 200 (success or step failure are both in
the JSON body) so callers don't need to branch on status code — the
endpoint either reports a first_failure or doesn't.
Resolves task #200, supports task #193 (workspace EC2 sshd
unresponsive — without this endpoint we couldn't pin the failure
stage from outside the tenant CP EC2).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous header said `unittest discover from the scripts/ root
walks recursively`, contradicting the workflow body which runs two
passes precisely because discover does NOT recurse without
__init__.py. Addresses self-review feedback on PR #2440.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a top-level `provider` slug to WorkspaceConfig and RuntimeConfig so
adapters can route to a specific gateway without re-implementing
slug-prefix parsing across hermes / claude-code / codex.
Resolution chain in load_config (mirrors how `model` resolves):
1. ``LLM_PROVIDER`` env var — what canvas Save+Restart sets so the
operator's Provider dropdown choice survives a CP-driven restart
(the regenerated /configs/config.yaml drops most user fields).
2. Explicit YAML ``provider:`` — operator pinned it in the file.
3. Derive from the model slug prefix for backward compat:
``anthropic:claude-opus-4-7`` → ``anthropic``
``minimax/abab7-chat-preview`` → ``minimax``
bare model names → ``""`` (let the adapter decide).
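The chain, rendered in Go for reference (load_config itself is Python;
the function name is illustrative):

    import "strings"

    func resolveProvider(envProvider, yamlProvider, model string) string {
        if envProvider != "" {
            return envProvider // 1. LLM_PROVIDER survives CP-driven restarts
        }
        if yamlProvider != "" {
            return yamlProvider // 2. explicit provider: pinned in config.yaml
        }
        if i := strings.IndexAny(model, ":/"); i > 0 {
            return model[:i] // 3. "anthropic:claude-opus-4-7" -> "anthropic"
        }
        return "" // bare model name: let the adapter decide
    }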
`runtime_config.provider` falls back to the top-level resolved
provider, the same shape PR #2438 added for `runtime_config.model`.
Why a separate field at all (we already parse the slug):
- Custom model aliases without a recognizable prefix need an
explicit signal — the canvas Provider dropdown writes it.
- Adapters were each rolling their own slug-parse (hermes's
derive-provider.sh, claude-code's adapter-default branch, etc.);
one resolution point in load_config kills that drift class.
- Canvas needs a stable storage field that doesn't get clobbered
every time the user picks a new model.
Backward-compatible: when `provider:` is absent, slug derivation
keeps every existing config.yaml working without a migration.
PR-1 of a multi-PR stack (Option B from RFC discussion). Subsequent
PRs plumb the field through workspace-server env, CP user-data,
adapters (hermes prefers explicit over derive-provider.sh), and
canvas Provider dropdown UI.
Tests cover all four resolution paths + runtime_config inheritance:
- test_provider_default_empty_when_bare_model
- test_provider_derived_from_colon_slug
- test_provider_derived_from_slash_slug
- test_provider_yaml_explicit_wins_over_derived
- test_provider_env_override_beats_yaml_and_derived
- test_runtime_config_provider_yaml_wins_over_top_level
- test_provider_default_from_default_model
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three small follow-ups to the PR #2433 → #2436 → #2439 incident chain.
1) `import inbox # noqa: F401` in workspace/a2a_mcp_server.py was
misleading — `inbox` IS used (at the bridge wiring inside main()).
F401 means "imported but unused", which would mask a real future
F401 if the usage is removed. Drop the noqa, keep the explanatory
block comment about the rewriter's `import X` → `import mr.X as X`
expansion (and the `import X as Y` → `import mr.X as X as Y` trap
that the comment exists to prevent re-introducing).
2) scripts/test_build_runtime_package.py — 17 unit tests covering
`rewrite_imports()` and `build_import_rewriter()` in
scripts/build_runtime_package.py. Until now the function had zero
coverage despite the entire wheel build depending on it. Tests
pin: bare-import aliasing, dotted-import preservation, indented
imports, from-imports (simple + dotted + multi-symbol + block),
the `import X as Y` rejection added in PR #2436 (with comment-
stripping + indented + comma-not-alias edge cases), allowlist
anchoring (`a2a` ≠ `a2a_tools`), and end-to-end reproduction
of the PR #2433 failing pattern + the #2436 fix pattern.
3) Wire scripts/test_*.py into CI by adding a second discover pass
to test-ops-scripts.yml. Top-level scripts/ tests live alongside
their target file (parallels the scripts/ops/ test layout); the
existing scripts/ops/ pass keeps running because scripts/ops/
has no __init__.py so a single discover from scripts/ root
doesn't recurse. Two passes is simpler than retrofitting
namespace packages. Path filter widened from `scripts/ops/**`
to `scripts/**` so PRs touching the build script trigger the
new tests.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The `PR-built wheel + import smoke` gate caught the broken wheel from
PR #2433 (`import inbox as _inbox_module` collision) but couldn't block
the merge because it isn't a required check on staging. Promoting it to
required is the right move per the runtime publish pipeline gates note
(2026-04-27 RuntimeCapabilities ImportError outage), but the existing
`paths: [workspace/**, scripts/...]` filter blocks PRs that don't touch
those paths from ever generating the check run — branch protection
would deadlock waiting on a check that never fires.
Refactor (same shape as e2e-api.yml's e2e-api job):
- Drop top-level `paths:` filter — workflow runs on every push/PR/
merge_group event.
- Add `detect-changes` job using dorny/paths-filter to compute the
`wheel=true|false` output.
- Collapse to ONE always-running `local-build-install` job named
`PR-built wheel + import smoke`. Per-step `if:` gates on the
detect output. PRs untouched by wheel-relevant paths emit a
no-op SUCCESS step ("paths filter excluded this commit") so the
check passes without rebuilding the wheel.
- merge_group + workflow_dispatch unconditionally `wheel=true` so
the queue always validates the to-be-merged state, regardless of
which PR composed it.
Why one-job-with-step-gates instead of two-jobs-sharing-name: SKIPPED
check runs block branch protection even when SUCCESS siblings exist
(verified in the PR #2264 incident, 2026-04-29). A single always-run job emits
exactly one SUCCESS check run regardless of paths filter.
Follow-up: open a separate PR adding `PR-built wheel + import smoke`
to the staging branch protection's required_status_checks.contexts
once this lands. Doing both in one PR risks the protection update
firing before the workflow refactor merges, deadlocking unrelated PRs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
External feedback (2026-04-30): "Provisioner doesn't read model from
config.yaml and doesn't set MODEL env var. Without MODEL, the adapter
defaults to sonnet and bypasses the mimo routing." Confirmed accurate
for SaaS workspaces.
Trace: claude-code-default/adapter.py reads `runtime_config.model or
"sonnet"` (and hermes reads HERMES_DEFAULT_MODEL via install.sh, which
IS plumbed). For claude-code there's nothing — workspace/config.py
loaded `runtime_config.model` only from YAML, ignoring MODEL_PROVIDER
env. The CP user-data script regenerates /configs/config.yaml at every
boot with only `name`, `runtime`, `a2a` keys (intentionally minimal so
it doesn't carry stale state) — so any user-set runtime_config.model
is wiped on every restart, and the adapter falls back to "sonnet" even
when the user picked Opus in the canvas Config tab.
Fix: when YAML omits runtime_config.model, fall back to the top-level
resolved `model`, which already honors MODEL_PROVIDER env override.
One-line in workspace/config.py. Now MODEL_PROVIDER → top-level model
→ runtime_config.model → adapter sees the user's selection. Sticky
across CP-driven restarts; the canvas Save+Restart loop works as
intended for every runtime, not just hermes.
Tests:
test_runtime_config_model_falls_back_to_top_level — top-level set, runtime_config empty → fallback wins
test_runtime_config_model_yaml_wins_over_top_level — YAML explicit → fallback skipped (precedence)
test_runtime_config_model_picks_up_env_via_top_level — full canvas Save+Restart simulation: env → top-level → runtime_config.model
Negative-control verified: removing the `or model` flips both fallback
tests red with the expected "" vs expected-model mismatch; restoring
flips them green. The yaml-wins test passes either way (correctly,
because precedence is preserved).
Replaces closed PR #2435 — that PR's commit was on a contaminated
branch and accidentally captured unrelated WIP changes (build script
+ a2a_mcp_server refactor) instead of this fix. Self-review caught it
and closed the PR. This branch is clean off main + diff verified
before push.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The pre-existing TestSSHCommandCmd_BuildsArgv asserts the literal argv
slice. Adding `-o ConnectTimeout=10` shifted the slice — this commit
updates the snapshot to match. The new behavior-based
TestSSHCommandCmd_ConnectTimeoutPresent (added in the prior commit)
keeps the invariant pinned without depending on argv ordering, so
future tweaks land in only one place even if more options are added.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2433 (notifications/claude/channel) shipped 'import inbox as
_inbox_module' inside a2a_mcp_server.py:main(). The build script's
import rewriter expands plain 'import inbox' to
'import molecule_runtime.inbox as inbox', so the original source
became 'import molecule_runtime.inbox as inbox as _inbox_module',
which is invalid Python.
Caught at the publish-runtime + PR-built-wheel-smoke gate (the
SyntaxError trace is in run 25200422679). The wheel didn't ship to
PyPI because publish-runtime's smoke-import step refused to install
it, but staging is currently sitting on a broken-build commit until
this fix-forward lands.
Changes:
- a2a_mcp_server.py: lift `import inbox` to top of file (rewriter
produces clean `import molecule_runtime.inbox as inbox`), call
inbox.set_notification_callback directly in main()
- build_runtime_package.py: rewrite_imports() now raises ValueError
when it sees 'import X as Y' for any X in the workspace allowlist,
instead of silently producing a syntax-error wheel. Operator gets
a clear actionable error at build time pointing at the offending
line + suggested rewrites ('from X import …' or plain 'import X').
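The two rewriter rules, sketched in Go (the real script is Python and
also handles dotted and from-imports; this covers only the bare-import
case):

    import (
        "fmt"
        "regexp"
    )

    // \w+ keeps the allowlist anchored: "a2a" never matches inside "a2a_tools".
    var importRe = regexp.MustCompile(`^(\s*)import\s+(\w+)(\s+as\s+\w+)?\s*(#.*)?$`)

    func rewriteLine(line string, allow map[string]bool) (string, error) {
        m := importRe.FindStringSubmatch(line)
        if m == nil || !allow[m[2]] {
            return line, nil // not a workspace-module import: leave untouched
        }
        if m[3] != "" { // 'import X as Y' would expand to invalid syntax
            return "", fmt.Errorf(
                "cannot rewrite 'import %s as ...': use 'from %s import ...' or plain 'import %s'",
                m[2], m[2], m[2])
        }
        return m[1] + "import molecule_runtime." + m[2] + " as " + m[2], nil
    }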
The build-time gate (this PR's rewriter check) catches the regression
class earlier than the smoke-time gate (PR #2433's failure). Adding
'PR-built wheel + import smoke' to staging branch protection's
required checks is filed separately so this class doesn't merge again.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When the workspace EC2's sshd is unresponsive (mid-restart, SG drop,
AMI without ec2-instance-connect), the canvas's xterm shows the user's
typed bytes echoed back by the workspace-server's *local* PTY (cooked +
echo mode before ssh sets it raw post-handshake) and then closes
silently when Cloudflare's idle WebSocket timer fires (~100s) — with no
"Connection refused" or "Permission denied" output ever reaching the
user. This is what hongmingwang's hermes terminal looked like 2026-04-30
right after the heartbeat-fix redeploy: status="online" but the shell
appeared dead.
Caught reproducibly by holding a fresh /workspaces/<id>/terminal
WebSocket open for 60s — server sent zero frames except the local-PTY
echo of one keystroke typed at t=8s. ssh was hung at handshake; bash
never saw the byte.
Fix: add `-o ConnectTimeout=10` to ssh args. Now the failure surfaces
as a real ssh error message in the terminal within 10s, instead of
masquerading as a silently dead shell over the next ~100s. Doesn't
diagnose *why* sshd isn't responding (separate investigation), but
it does mean the user gets actionable feedback within seconds.
Behavior-based regression test asserts `-o ConnectTimeout=N` is in the
ssh argv — pins presence, not the literal value, so operators can tune
without breaking the gate. Verified to FAIL on pre-fix code (matched
the literal arg pair) and PASS on fix.
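The presence check at the heart of that test, roughly (matches the
flag pair by pattern, not by literal value):

    import "regexp"

    var connectTimeoutRe = regexp.MustCompile(`^ConnectTimeout=\d+$`)

    // True when the ssh argv carries -o ConnectTimeout=<N> anywhere.
    func hasConnectTimeout(argv []string) bool {
        for i := 0; i+1 < len(argv); i++ {
            if argv[i] == "-o" && connectTimeoutRe.MatchString(argv[i+1]) {
                return true
            }
        }
        return false
    }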
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a notification seam to the universal molecule-mcp wheel so push-
notification-capable MCP hosts (Claude Code today; any compliant
client tomorrow) get inbound A2A messages as conversation interrupts
instead of having to poll wait_for_message / inbox_peek.
Wire-up:
- inbox.py: module-level _NOTIFICATION_CALLBACK + set_notification_callback()
Fires from InboxState.record() AFTER lock release, with same dict
shape inbox_peek returns. Best-effort — a raising callback never
prevents the message from landing in the queue (contract sketched
after this list).
- a2a_mcp_server.py: _build_channel_notification() pure helper +
bridge wiring in main() that schedules notifications via
asyncio.run_coroutine_threadsafe (poller is a daemon thread, MCP
loop is asyncio).
- Method name 'notifications/claude/channel' matches the contract
documented in molecule-mcp-claude-channel/server.ts:509.
- wheel_smoke.py: pin set_notification_callback as a published name,
same regression class as the 0.1.16 main_sync incident.
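The seam's contract, rendered in Go for reference (inbox.py is Python;
the fire-after-lock-release ordering and best-effort guarantee are the
point):

    type InboxState struct {
        mu       sync.Mutex
        queue    []map[string]any
        callback func(map[string]any)
    }

    func (s *InboxState) record(msg map[string]any) {
        s.mu.Lock()
        s.queue = append(s.queue, msg)
        cb := s.callback // snapshot under the lock
        s.mu.Unlock()    // fire AFTER release so the cb can't block the inbox
        if cb == nil {
            return
        }
        func() {
            // Best-effort: a panic here never propagates; the message
            // already landed in the queue above.
            defer func() { _ = recover() }()
            cb(msg)
        }()
    }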
Pollers (wait_for_message / inbox_peek) keep working unchanged for
runtimes without notification support.
Tests: 6 new in test_inbox.py (callback fires once on record, dedupe
short-circuits before fire, raising cb doesn't break inbox, set/clear
semantics), 5 new in test_a2a_mcp_server.py (method name pin, content
mapping, meta routing, no-id JSON-RPC notification spec, missing-
field tolerance). All 59 combined tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-fix Restart called provisioner.Stop / cpProv.Stop synchronously before
returning the HTTP response. CPProvisioner.Stop is DELETE /cp/workspaces/:id
→ CP → AWS EC2 terminate, which can exceed the canvas's 15s HTTP timeout,
especially right after a platform-wide redeploy when every tenant queues a
CP request at once. The user sees a misleading "signal timed out" red banner
on Save & Restart even though the async re-provision goroutine continues
and the workspace ends up online.
Caught 2026-04-30 on hongmingwang hermes workspace 32993ee7-…cb9d75d112a5
right after the heartbeat-fix platform redeploy at 02:11Z. The workspace
came back online correctly; only the canvas response timed out.
Fix moves Stop into the same goroutine as provisionWorkspaceCP /
provisionWorkspaceOpts. The handler now responds in <500ms (DB lookup +
status UPDATE only). Stop and provision keep their existing ordering
inside the goroutine. Uses context.Background() to detach from the request
lifecycle so an aborted client connection doesn't cancel the in-flight
Stop/provision pair.
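Post-fix control flow, hedged (helper names are assumptions; the
Stop-then-provision ordering inside the goroutine is per this commit):

    func (h *Handlers) Restart(w http.ResponseWriter, r *http.Request) {
        ws := h.mustLookupWorkspace(r) // DB lookup
        h.markRestarting(ws)           // status UPDATE; both fast
        go func() {
            // context.Background(): an aborted client can't cancel the pair.
            ctx := context.Background()
            h.cpProv.Stop(ctx, ws.ID)       // slow: CP -> AWS EC2 terminate
            h.provisionWorkspaceCP(ctx, ws) // existing ordering preserved
        }()
        writeJSON(w, http.StatusAccepted, map[string]string{"status": "restarting"})
    }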
Pinned by a behavior-based AST gate (workspace_restart_async_test.go):
the test parses workspace_restart.go and walks the Restart function body,
flagging any <recv>.{provisioner,cpProv}.Stop call that isn't nested in a
*ast.FuncLit. Same family as callsProvisionStart in
workspace_provision_shared_test.go. Verified the gate fails on the
pre-fix shape (flags lines 151 and 153 — the original sync Stop calls).
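Gist of the gate (simplified; the real test also matches the
provisioner/cpProv receivers):

    import (
        "go/ast"
        "go/token"
    )

    // Flag Stop calls in fn's body that are not nested inside a func literal.
    func flagSyncStops(fn *ast.FuncDecl, fset *token.FileSet) []token.Position {
        var lits [][2]token.Pos // spans of every *ast.FuncLit in the body
        ast.Inspect(fn.Body, func(n ast.Node) bool {
            if l, ok := n.(*ast.FuncLit); ok {
                lits = append(lits, [2]token.Pos{l.Pos(), l.End()})
            }
            return true
        })
        inLit := func(p token.Pos) bool {
            for _, r := range lits {
                if p >= r[0] && p < r[1] {
                    return true
                }
            }
            return false
        }
        var bad []token.Position
        ast.Inspect(fn.Body, func(n ast.Node) bool {
            if call, ok := n.(*ast.CallExpr); ok {
                if sel, ok := call.Fun.(*ast.SelectorExpr); ok &&
                    sel.Sel.Name == "Stop" && !inLit(call.Pos()) {
                    bad = append(bad, fset.Position(call.Pos()))
                }
            }
            return true
        })
        return bad
    }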
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
External molecule-mcp runtimes register with hardcoded agent_card.name
= molecule-mcp-{id[:8]} and skills=[]. That made every external
workspace look identical on the canvas and gave peer agents calling
list_peers no signal beyond name — they had to guess capabilities.
Three new env vars let the operator declare identity + capabilities
without code changes:
* MOLECULE_AGENT_NAME — display name on canvas (default unchanged)
* MOLECULE_AGENT_DESCRIPTION — one-line description (default empty)
* MOLECULE_AGENT_SKILLS — comma-separated skill names
Comma-separated skills get expanded to {"name": "..."} objects — the
minimum shape that satisfies both shared_runtime.summarize_peers
(reads s["name"]) AND canvas SkillsTab.tsx (id falls back to name).
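The expansion, in Go for reference (the wheel is Python; the function
name is illustrative):

    import "strings"

    // MOLECULE_AGENT_SKILLS="code review, triage," ->
    // [{"name":"code review"}, {"name":"triage"}]
    func skillsFromCSV(csv string) []map[string]string {
        var skills []map[string]string
        for _, s := range strings.Split(csv, ",") {
            if s = strings.TrimSpace(s); s != "" { // strip whitespace, drop empties
                skills = append(skills, map[string]string{"name": s})
            }
        }
        return skills
    }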
Strict-superset behaviour: when no env vars are set, agent_card
matches the previous hardcoded value exactly. No regression for
operators who haven't migrated.
Why this matters end-to-end:
* Canvas Skills tab now shows each declared skill as a chip
* Peer agents calling list_peers see {name, skills} per peer and
can route delegations to the right specialist
* Same applies to the canvas Details tab + workspace card hover
Tests cover: defaults match prior behaviour; name override; CSV →
skill objects; whitespace stripping + empty entries dropped;
description omitted when unset (keeps wire payload minimal);
whitespace-only name falls back to default; end-to-end through
_platform_register's payload.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Model dropdown's onChange writes to config.runtime_config.model
whenever a runtime is set (hermes, claude-code, etc.), and only
falls back to top-level config.model when no runtime is selected.
But handleSave used to diff the new value against top-level
nextSource.model only — so for any runtime-bearing workspace, the
PUT /workspaces/:id/model never fired and MODEL_PROVIDER never
landed in workspace_secrets.
Symptom (2026-04-30, hongmingwang Hermes Agent
32993ee7-840e-4c02-8ca8-cb9d75d112a5):
- User picks minimax/MiniMax-M2.7-highspeed from the dropdown
- Hits Save & Restart
- Save reports success; restart fires
- The new EC2 boots with HERMES_DEFAULT_MODEL empty
- install.sh defaults to nousresearch/hermes-4-70b
- hermes-agent errors "No LLM provider configured" on every chat
turn because no NOUS_API_KEY / OPENROUTER_API_KEY is set
- Reload Config tab → model field reverts to whatever
GET /workspaces/:id/model returns (i.e. empty / template default)
handleSave now reads the effective model from runtime_config.model
first and falls back to top-level model for legacy no-runtime
workspaces. Same change for the old-value diff so a no-op Save
still skips the PUT.
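The selection rule, sketched in Go (the canvas is TypeScript; the
struct shape is an assumption):

    type Config struct {
        Model         string // top-level, legacy no-runtime workspaces
        RuntimeConfig struct{ Model string }
    }

    // Effective model: where the dropdown writes when a runtime is set,
    // else the top-level field.
    func effectiveModel(c Config) string {
        if c.RuntimeConfig.Model != "" {
            return c.RuntimeConfig.Model
        }
        return c.Model
    }

    // handleSave PUTs /model only when
    // effectiveModel(next) != effectiveModel(orig).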
Tests pin both branches: PUTs /model when the dropdown changed
runtime_config.model on a hermes workspace; does NOT PUT when
the value is unchanged from what GET /model returned.