Commit Graph

490 Commits

Author SHA1 Message Date
Hongming Wang
517bd0efc5 feat(canvas+workspace-server): data-driven Provider dropdown (#199)
Option B PR-5. The Canvas Config tab now exposes a Provider override
dropdown whose suggestions are adapter-driven from each runtime's
template — no hardcoded provider list in the canvas. Save issues
PUT /workspaces/:id/provider when the value is dirty, with auto-restart
suppression to avoid double-restarting alongside the model handler's
own restart.

The dropdown's suggestion list comes from /templates →
runtime_config.providers (the field added in
molecule-ai-workspace-template-hermes PR #31). For templates that
haven't migrated to the explicit providers list yet, suggestions
derive from model[].id slug prefixes — still adapter-driven, just
inferred. This keeps existing templates working while platform team
migrates them one at a time.

workspace-server changes:
- Add Providers []string field to templateSummary JSON
- Parse runtime_config.providers in /templates handler
- 2 new tests pin the surfacing + omitempty behavior

canvas changes:
- Remove hardcoded PROVIDER_SUGGESTIONS constant
- Add provider/originalProvider state + PUT-on-save logic
- Add deriveProvidersFromModels() fallback helper
- Wire RuntimeOption.providers from /templates response
- 8 new tests pin the behavior end-to-end

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 11:19:17 -07:00
Hongming Wang
1a1285171c
Merge pull request #2453 from Molecule-AI/feat/workspace-server-provider-endpoint
feat(workspace-server): PUT /provider endpoint (#196 — Option B PR-2)
2026-05-01 05:37:15 +00:00
Hongming Wang
258c6bea44 feat(workspace-server): PUT /provider endpoint for explicit LLM provider (#196)
Mirror of PUT /model. Stores the provider slug as the LLM_PROVIDER
workspace secret so the canvas can update model + provider
independently — a user might keep the same model alias and switch
providers (route through a different gateway), or vice versa.
Forcing both into one endpoint imposes a single Save+Restart per
change; two endpoints let the canvas update each as the user picks.
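
Roughly the handler's shape (a sketch only — SetProvider,
upsertWorkspaceSecret, and triggerRestart are illustrative names,
not the actual identifiers):

    // Sketch: mirrors SetModel; helper names are illustrative.
    func (h *WorkspaceHandler) SetProvider(c *gin.Context) {
        id := c.Param("id")
        if _, err := uuid.Parse(id); err != nil {
            c.JSON(http.StatusBadRequest, gin.H{"error": "invalid workspace id"})
            return
        }
        var body struct {
            Provider string `json:"provider"` // empty string clears the override
        }
        if err := c.ShouldBindJSON(&body); err != nil {
            c.JSON(http.StatusBadRequest, gin.H{"error": "invalid body"})
            return
        }
        // Store the slug as the LLM_PROVIDER workspace secret, then
        // ride the same restart trigger SetModel uses.
        if err := h.upsertWorkspaceSecret(
            c.Request.Context(), id, "LLM_PROVIDER", body.Provider,
        ); err != nil {
            c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to store provider"})
            return
        }
        h.triggerRestart(id)
        c.JSON(http.StatusOK, gin.H{"provider": body.Provider})
    }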

Plumbs through the existing chain: secret-load → envVars → CP
req.Env → user-data env exports → /configs/config.yaml (after
controlplane PR #364 lands the heredoc append).

Tests: 5 new cases mirroring SetModel/GetModel exactly — default
empty response, DB error, upsert with restart trigger, empty-clears,
invalid-UUID rejection.

Part of: Option B PR-2 (#196) — workspace-server plumbs LLM_PROVIDER
Stack:   PR-1 schema (#2441 merged)
         PR-2 (this)  ws-server endpoint
         PR-3 (#364 open) CP user-data persistence
         PR-4 (pending) hermes adapter consume
         PR-5 (pending) canvas Provider dropdown
2026-04-30 22:25:48 -07:00
Hongming Wang
364c70fc71 fix(workspace-server): emit null removed_at when timestamp fetch fails
#2429 review finding. The 410-Gone path issues a follow-up
`SELECT updated_at` after detecting status='removed'. If that query
fails (workspace row deleted between the two queries, transient DB
error, etc.), `removedAt` stays as Go's zero time and the JSON body
emits `"removed_at": "0001-01-01T00:00:00Z"` — a misleading timestamp
the client has to know to ignore.

Now we branch on `removedAt.IsZero()` and emit `null` for the failed
path. The actionable signal (the 410 + hint) is unchanged; only the
timestamp shape gets cleaner.
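
Roughly the branch (field layout from the 410 body in #2429; variable
names paraphrased):

    // Inside the 410 handler: removedAt is Go's zero time when the
    // follow-up SELECT failed, so emit JSON null instead of year 0001.
    var removedAtJSON any // nil marshals to null
    if !removedAt.IsZero() {
        removedAtJSON = removedAt.Format(time.RFC3339)
    }
    c.JSON(http.StatusGone, gin.H{
        "error":      "workspace removed",
        "id":         workspaceID,
        "removed_at": removedAtJSON,
        "hint":       hint,
    })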

Pinned by `TestWorkspaceGet_RemovedReturns410WithNullRemovedAtOnTimestampFetchFailure`,
which simulates the row vanishing via `sqlmock`'s `WillReturnError(sql.ErrNoRows)`.
The original `_RemovedReturns410` test now also asserts that the
happy-path timestamp is a non-null value (was just checking the key
existed).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 22:24:59 -07:00
Hongming Wang
72f0079c10 feat(workspace-server): GET /workspaces/:id returns 410 Gone when status='removed' (#2429)
Defense-in-depth at the endpoint level. Previously, GET /workspaces/:id
returned 200 OK with `status:"removed"` in the body for deleted
workspaces — silent-fail UX hit on the hongmingwang tenant 2026-04-30:
the channel bridge / molecule-mcp wheel had a dead workspace_id + token
in .env, get_workspace_info returned 200 → caller assumed everything
was fine, then every subsequent /registry/* call 401'd because tokens
were revoked, and operators had no idea their workspace was gone.

#2425 fixed the steady-state heartbeat path (escalate to ERROR after
3 consecutive 401s). This change is the startup-time defense — fail
loud when the operator first probes the workspace instead of waiting
for the heartbeat to sour.

The 410 body includes:
  {error: "workspace removed", id, removed_at, hint: "Regenerate ..."}

Audit-trail consumers that need the body shape of a removed workspace
(admin views, "show me deleted workspaces" tooling) opt into the
legacy 200 + body via ?include_removed=true. Without this opt-in path
the audit trail becomes invisible at the API layer.

Two new tests pinned:
  - TestWorkspaceGet_RemovedReturns410
  - TestWorkspaceGet_RemovedWithIncludeQueryReturns200

Follow-ups in separate PRs:
  - Update workspace/a2a_client.py get_workspace_info to surface
    "removed" specifically rather than collapsing into "not found"
  - Update channel bridge getWorkspaceInfo (server.ts) to detect 410
    → log clear "workspace was deleted, re-onboard" error
  - Audit canvas/* + admin tooling consumers that may rely on the
    legacy 200 + status:"removed" shape; switch them to the
    ?include_removed=true opt-in if needed
  - Update docs (runtime-mcp.mdx Troubleshooting + external-agents.mdx
    lifecycle table)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 21:55:24 -07:00
Hongming Wang
b9311134cf fix(terminal-diagnose): KI-005 hierarchy check + race-free stderr capture
Two fixes and a nit cleanup from /code-review-and-quality on PR #2445:

1. **KI-005 hierarchy check parity with /terminal**

   HandleConnect runs the KI-005 cross-workspace guard before dispatch
   (terminal.go:85-106): when X-Workspace-ID is set and != :id, validate
   the bearer's workspace binding then call canCommunicateCheck. Without
   this, an org-level token holder in tenant Foo can probe any
   workspace's diagnostic state by guessing the UUID — same enumeration
   vector KI-005 closed for /terminal in #1609. Per-workspace bearer
   tokens are URL-bound by WorkspaceAuth, so the gap is org tokens
   within the same tenant.

   Fix: copy the same gate into HandleDiagnose, before the
   instance_id SELECT.

   Test: TestHandleDiagnose_KI005_RejectsCrossWorkspace stubs
   canCommunicateCheck=false and confirms 403 fires before the DB
   lookup (sqlmock's ExpectationsWereMet pins that we never reached
   the SELECT COALESCE). Mirrors the existing
   TestTerminalConnect_KI005_RejectsUnauthorizedCrossWorkspace.

2. **Race-free tunnel stderr capture (syncBuf)**

   strings.Builder isn't goroutine-safe. os/exec spawns a background
   goroutine that copies the subprocess's stderr fd to cmd.Stderr's
   Write, so reading the buffer's String() from the request goroutine
   on wait-for-port timeout while the tunnel may still be writing is
   a data race that `go test -race` flags. Worst-case impact in
   production is a garbled Detail string (not a crash), but the fix
   is small.

   Fix: wrap bytes.Buffer in a sync.Mutex (syncBuf type). Same
   io.Writer interface, no API changes elsewhere.
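
   The whole type, roughly (the real one may differ in detail):

       package handlers

       import (
           "bytes"
           "sync"
       )

       // syncBuf: a mutex-guarded bytes.Buffer. os/exec's copier
       // goroutine calls Write; the request goroutine calls String
       // on wait-for-port timeout.
       type syncBuf struct {
           mu  sync.Mutex
           buf bytes.Buffer
       }

       func (b *syncBuf) Write(p []byte) (int, error) {
           b.mu.Lock()
           defer b.mu.Unlock()
           return b.buf.Write(p)
       }

       func (b *syncBuf) String() string {
           b.mu.Lock()
           defer b.mu.Unlock()
           return b.buf.String()
       }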

3. **Nit cleanup**

   - read-pubkey failure now reports as its own step name instead of
     a duplicated "ssh-keygen" entry — disambiguates two different
     failure modes that previously shared a name.
   - Replaced the hand-rolled numToString int-to-string helper with
     strconv.Itoa in the test (there was no import-saving reason to
     keep it).

Suite: 4 diagnose tests pass with -race; full handlers suite passes
in 3.95s. go vet clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 21:19:18 -07:00
Hongming Wang
d012a803e4 feat(terminal): add diagnose endpoint for SSH probe stages
GET /workspaces/:id/terminal/diagnose runs the same per-stage pipeline as
/terminal (ssh-keygen → EIC send-key → tunnel → ssh) but non-interactively
and returns JSON. Each stage reports {name, ok, duration_ms, error,
detail}, plus a top-level first_failure naming the broken stage.

Why: when the canvas terminal silently disconnects ("Session ended" with
no error frame — the user-reported failure mode on hongmingwang's hermes
workspace), there is no remote-readable signal of WHICH stage failed.
The ssh client's stderr lives only in the workspace-server's stdout on
the tenant CP EC2 — invisible without shell access. /terminal can't
expose stderr cleanly because it has already upgraded to WebSocket
binary frames by the time ssh runs. /terminal/diagnose stays pure
HTTP/JSON, so the same auth (WorkspaceAuth + ADMIN_TOKEN fallback) gives
operators a one-call probe that splits "IAM broke" (send-ssh-public-key
fails) from "tunnel/SG broke" (wait-for-port fails) from "sshd auth
broke" (ssh-probe gets Permission denied) from "shell broke" (probe
exits non-zero with stderr).

Stages mirrored from handleRemoteConnect in terminal.go:

  1. ssh-keygen          ephemeral session keypair
  2. send-ssh-public-key AWS EIC API push, IAM-gated
  3. pick-free-port      local port for the tunnel
  4. open-tunnel         aws ec2-instance-connect open-tunnel start
  5. wait-for-port       the tunnel actually listens (folds tunnel
                         stderr into Detail when it doesn't)
  6. ssh-probe           non-interactive `ssh ... 'echo MARKER'` that
                         confirms auth + bash + the marker round-trip
                         (CombinedOutput captures stderr verbatim —
                         this is the whole reason the endpoint exists)

Local Docker workspaces (no instance_id) get a smaller probe:
container-found + container-running. Same response shape so callers
don't need to branch.
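
In Go terms the wire contract is approximately (field and key names
inferred from the description above, so treat as a sketch):

    type diagnoseStep struct {
        Name       string `json:"name"`
        OK         bool   `json:"ok"`
        DurationMS int64  `json:"duration_ms"`
        Error      string `json:"error,omitempty"`
        Detail     string `json:"detail,omitempty"`
    }

    type diagnoseResponse struct {
        Steps        []diagnoseStep `json:"steps"` // "steps" key is assumed
        FirstFailure string         `json:"first_failure,omitempty"`
    }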

Tests stub sendSSHPublicKey / openTunnelCmd / sshProbeCmd via the
existing package-level vars (same pattern as TestSSHCommandCmd_*) so
the test suite stays hermetic — no AWS, no network. The three new
tests pin: (a) routing to remote on instance_id present,
(b) routing to local on empty instance_id, (c) the operationally
critical case — full success through wait-for-port then a probe
failure surfaces ssh stderr in the ssh-probe step's Error/Detail
with first_failure="ssh-probe".

Auth: rides on existing WorkspaceAuth middleware. Operators with the
tenant ADMIN_TOKEN (fetched via /cp/admin/orgs/:slug/admin-token) can
probe any workspace without per-workspace token; same admin path as
the canvas dashboard reads workspace activity.

The response is always HTTP 200 (success and step failure both live in
the JSON body), so callers don't need to branch on status code — the
endpoint either reports a first_failure or doesn't.

Resolves task #200, supports task #193 (workspace EC2 sshd
unresponsive — without this endpoint we couldn't pin the failure
stage from outside the tenant CP EC2).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 21:10:20 -07:00
Hongming Wang
cda93e3c52 test(terminal): update exact-argv snapshot to include ConnectTimeout
The pre-existing TestSSHCommandCmd_BuildsArgv asserts the literal argv
slice. Adding `-o ConnectTimeout=10` shifted the slice — this commit
tracks the snapshot to match. The new behavior-based
TestSSHCommandCmd_ConnectTimeoutPresent (added in the prior commit)
keeps the invariant pinned without depending on argv ordering, so
future tweaks land in only one place even if more options are added.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 20:23:48 -07:00
Hongming Wang
f30b3d4476 fix(terminal): cap ssh handshake at 10s so hung sshd surfaces fast
When the workspace EC2's sshd is unresponsive (mid-restart, SG drop,
AMI without ec2-instance-connect), the canvas's xterm shows the user's
typed bytes echoed back by the workspace-server's *local* PTY (cooked +
echo mode before ssh sets it raw post-handshake) and then closes
silently when Cloudflare's idle WebSocket timer fires (~100s) — with no
"Connection refused" or "Permission denied" output ever reaching the
user. This is what hongmingwang's hermes terminal looked like 2026-04-30
right after the heartbeat-fix redeploy: status="online" but the shell
appeared dead.

Caught reproducibly by holding a fresh /workspaces/<id>/terminal
WebSocket open for 60s — server sent zero frames except the local-PTY
echo of one keystroke typed at t=8s. ssh was hung at handshake; bash
never saw the byte.

Fix: add `-o ConnectTimeout=10` to ssh args. Now the failure surfaces
as a real ssh error message in the terminal within 10s, instead of
masquerading as a silently dead shell over the next ~100s. Doesn't
diagnose *why* sshd isn't responding (separate investigation), but
it does mean the user gets actionable feedback within seconds.

Behavior-based regression test asserts `-o ConnectTimeout=N` is in the
ssh argv — pins presence, not the literal value, so operators can tune
without breaking the gate. Verified to FAIL on pre-fix code (matched
the literal arg pair) and PASS on fix.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 20:16:41 -07:00
Hongming Wang
f6ddcf66ab Move /restart Stop into the async goroutine
Pre-fix Restart called provisioner.Stop / cpProv.Stop synchronously before
returning the HTTP response. CPProvisioner.Stop is DELETE /cp/workspaces/:id
→ CP → AWS EC2 terminate, which can exceed the canvas's 15s HTTP timeout,
especially right after a platform-wide redeploy when every tenant queues a
CP request at once. The user sees a misleading "signal timed out" red banner
on Save & Restart even though the async re-provision goroutine continues
and the workspace ends up online.

Caught 2026-04-30 on hongmingwang hermes workspace 32993ee7-…cb9d75d112a5
right after the heartbeat-fix platform redeploy at 02:11Z. The workspace
came back online correctly; only the canvas response timed out.

Fix moves Stop into the same goroutine as provisionWorkspaceCP /
provisionWorkspaceOpts. The handler now responds in <500ms (DB lookup +
status UPDATE only). Stop and provision keep their existing ordering
inside the goroutine. Uses context.Background() to detach from the request
lifecycle so an aborted client connection doesn't cancel the in-flight
Stop/provision pair.
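
Shape of the fix, paraphrased (signatures and receiver fields differ
in the real code):

    // Respond immediately; Stop + provision run detached.
    go func() {
        // context.Background(): an aborted client connection must not
        // cancel the in-flight Stop/provision pair.
        ctx := context.Background()
        if h.cpProv != nil {
            _ = h.cpProv.Stop(ctx, workspaceID) // was synchronous pre-fix
        } else if h.provisioner != nil {
            _ = h.provisioner.Stop(ctx, workspaceID)
        }
        // provisionWorkspaceOpts in Docker mode; ordering unchanged.
        h.provisionWorkspaceCP(ctx, workspaceID)
    }()
    c.JSON(http.StatusOK, gin.H{"status": "restarting"})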

Pinned by a behavior-based AST gate (workspace_restart_async_test.go):
the test parses workspace_restart.go and walks the Restart function body,
flagging any <recv>.{provisioner,cpProv}.Stop call that isn't nested in a
*ast.FuncLit. Same family as callsProvisionStart in
workspace_provision_shared_test.go. Verified the gate fails on the
pre-fix shape (flags lines 151 and 153 — the original sync Stop calls).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 19:35:29 -07:00
Hongming Wang
a5c5139e3a fix(workspace): deliver platform_inbound_secret on every heartbeat
Heartbeat now echoes the workspace's platform_inbound_secret on every
beat (mirroring /registry/register), and the molecule-mcp client
persists it to /configs/.platform_inbound_secret on receipt.

Symptom (2026-04-30, hongmingwang tenant): chat upload returned 503
"workspace will pick it up on its next heartbeat" and then 401 on
retry — permanent until workspace restart. The 503 message was a lie:
heartbeat used to discard the platform_inbound_secret entirely; only
register delivered it, and register fires once at startup.

Server (Go):
  - Heartbeat handler reuses readOrLazyHealInboundSecret (the same
    helper chat_files + register use), so heartbeat-time recovery
    covers the rotate / mid-life NULL-column case the existing
    register-time heal can't reach.
  - Failure is non-fatal: liveness contract trumps secret delivery,
    chat_files retries lazy-heal on its own next request.

Client (Python):
  - _persist_inbound_secret_from_heartbeat parses the heartbeat 200
    response and persists via platform_inbound_auth.save_inbound_secret.
  - All exceptions swallowed — heartbeat liveness > secret persistence;
    next tick (≤20s) retries.

Tests:
  - Server: pin secret-present, lazy-heal-mint-on-NULL, and heal-
    failure-omits-field branches.
  - Client: pin persist-on-200, skip-on-empty, skip-on-non-dict-body,
    skip-on-401, swallow-save-OSError.
2026-04-30 17:36:33 -07:00
Hongming Wang
876c0bfcd4 docs(canvas): update Universal MCP snippet — molecule-mcp now standalone
The canvas tab snippet for the Universal MCP path was written before
this PR added the built-in register + heartbeat thread. Earlier wording
described it as "outbound-only — pair with the Claude Code or Python SDK
tab for heartbeat + inbound messages" — that's stale. molecule-mcp now
handles register + heartbeat itself; the only thing it doesn't yet do is
inbound A2A delivery.

Updated:
- externalUniversalMcpTemplate header comment + body — describes
  standalone behavior, points operators at SDK/channel only when they
  need INBOUND (not heartbeat).
- Drops the now-redundant curl-register step from the snippet — the
  binary registers itself on startup.
- Canvas modal label likewise updated.

No runtime / behavior change; pure docs polish so a copy-pasting
operator's mental model matches what the binary actually does.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:52:15 -07:00
Hongming Wang
427300f3a4 feat: make molecule-mcp standalone (built-in register + heartbeat) + recover awaiting_agent on heartbeat
Two paired fixes that together let an external operator run a single
process (molecule-mcp) and see their workspace come up online in the
canvas — the bug surfaced live when the status got stuck at
"awaiting_agent / OFFLINE" despite an active MCP server.

Platform side (workspace-server/internal/handlers/registry.go):
  Heartbeat handler already auto-recovers offline → online and
  provisioning → online, but NOT awaiting_agent → online. Healthsweep
  flips stale-heartbeat external workspaces TO awaiting_agent, and
  with no recovery path the workspace stays "OFFLINE — Restart" in the
  canvas forever. Add the symmetric branch: if currentStatus ==
  "awaiting_agent" and a heartbeat arrives, flip to online + broadcast
  WORKSPACE_ONLINE. Mirrors the existing offline/provisioning patterns
  exactly. Test: TestHeartbeatHandler_AwaitingAgentToOnline asserts
  the SQL UPDATE fires with the awaiting_agent guard clause.
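
  The added branch, roughly (typed constants per #2396; constant names
  and exact SQL paraphrased):

      // Symmetric with the existing offline→online and
      // provisioning→online recovery branches.
      if currentStatus == string(models.StatusAwaitingAgent) {
          _, err := h.db.ExecContext(ctx,
              `UPDATE workspaces SET status = $2
                WHERE id = $1 AND status = $3`,
              workspaceID, models.StatusOnline, models.StatusAwaitingAgent)
          if err == nil {
              h.broadcast(workspaceID, "WORKSPACE_ONLINE")
          }
      }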

Wheel side (workspace/mcp_cli.py):
  molecule-mcp was outbound-only — operators had to run a separate
  SDK process to register + heartbeat. Now mcp_cli.main():
    1. Calls /registry/register at startup (idempotent upsert flips
       status awaiting_agent → online via the existing register path).
    2. Spawns a daemon thread that POSTs /registry/heartbeat every
       20s. 20s is comfortably under the healthsweep stale window so
       a single missed beat doesn't cause status churn.
    3. Runs the MCP stdio loop in the foreground.

  Both calls set Origin: ${PLATFORM_URL} so the SaaS edge WAF accepts
  them. Threaded heartbeat (not asyncio) chosen because it doesn't
  need to share an event loop with the MCP stdio server — daemon=True
  cleanly dies when the operator's runtime exits.

  MOLECULE_MCP_DISABLE_HEARTBEAT=1 escape hatch lets in-container
  callers (which have heartbeat.py running already) reuse the entry
  point without double-heartbeating. Default is enabled.

End-to-end verification (live, against
hongmingwang.moleculesai.app, workspace 8dad3e29-...):
  pre-fix:  status=awaiting_agent → canvas shows OFFLINE forever
  post-fix: ran `molecule-mcp` for 5s standalone → canvas state:
            status=online runtime=external agent=molecule-mcp-8dad3e29

Test coverage: 7 new mcp_cli tests (register-at-startup, heartbeat-
thread-spawned, disable-env-skips-both, env-and-file token resolution,
register payload shape, heartbeat endpoint + headers); 1 new platform
test (awaiting_agent → online recovery). Full workspace + handlers
suites green: 1355 Python, full Go handlers passing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:42:44 -07:00
Hongming Wang
716589742c feat(canvas): add Universal MCP tab to external-agent connect modal
The "Connect your external agent" dialog already covered Claude Code,
Python SDK, curl, and raw fields. This adds a Universal MCP tab that
documents the new `molecule-mcp` console script — the runtime-
agnostic baseline shipped by PR #2413's workspace-runtime changes.

Surface area:
- New `externalUniversalMcpTemplate` constant in workspace-server.
  Three-step snippet: pip install runtime → one-shot register via curl
  → wire molecule-mcp into agent's MCP config (Claude Code example,
  notes that hermes/codex/etc. take the same env-var contract).
- Workspace create response now includes `universal_mcp_snippet`
  alongside the existing curl/python/channel snippets.
- Canvas modal renders the tab when `universal_mcp_snippet` is
  present; backward-compatible with older platform builds (tab hides
  when empty).

Origin/WAF coverage (the user explicitly asked for this):
- The runtime wheel handles Origin automatically (this PR's earlier
  commit on platform_auth.auth_headers).
- The curl tab now sets `Origin: {{PLATFORM_URL}}` preemptively
  with an explanatory comment; `/registry/register` is currently
  WAF-allowed without it but adding now keeps the snippet working
  if WAF rules expand. The comment also explains why
  `/workspaces/*` paths return empty 404 without Origin — the
  exact failure mode I hit while smoke-testing this PR live.
- The MCP snippet's footer notes that the wheel auto-handles
  Origin so operators don't think about it.

End-to-end verification (against live tenant
hongmingwang.moleculesai.app, freshly registered workspace):
- get_workspace_info → full JSON
- list_peers → "Claude Code Agent (ID: 97ac32e9..., status: online)"
- recall_memory → "No memories found."
all returned by the molecule-mcp binary speaking MCP stdio to
this Claude Code session.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:34:27 -07:00
Hongming Wang
36e263a07d fix(workspace-server): skip provision pipeline on Restart for runtime=external
POST /workspaces/:id/restart on a runtime=external workspace ran the full
re-provision pipeline (Stop → provisionWorkspace*), which calls
issueAndInjectToken → RevokeAllForWorkspace. For external workspaces
(operator-driven, no container/EC2) that silently destroyed the operator's
local bearer token on every "Restart" click in the canvas — the local
poller would then 401-spam against /activity until the operator manually
regenerated from the Tokens tab.

The auto-restart path (runRestartCycle, line 436) already short-circuits
runtime=external. This patch mirrors that for the manual handler so the
two paths agree, and surfaces a 200 OK with a clear message so the
canvas can tell the operator the fix is on their side rather than
silently no-op'ing.

Test coverage: TestRestartHandler_ExternalRuntimeNoOps asserts the
short-circuit fires *before* any DB write or provision call. sqlmock's
"unexpected query" failure mode would catch a regression that
re-introduced the token revoke or the status=provisioning UPDATE.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:08:48 -07:00
Hongming Wang
b5c7b349d8
Merge pull request #2408 from Molecule-AI/auto/fix-healthsweep-gate-saas
fix(boot): always start health-sweep goroutine — SaaS tenants need it for external-runtime liveness
2026-04-30 20:12:28 +00:00
Hongming Wang
8516a8f9c6 fix(tenant-guard): allowlist /buildinfo so redeploy verifier can reach it
The /buildinfo route added in #2398 to verify each tenant runs the
published SHA was 404'd by TenantGuard on every production tenant —
the allowlist had /health, /metrics, /registry/register,
/registry/heartbeat, but not /buildinfo. The redeploy workflows
curl /buildinfo from a CI runner with no X-Molecule-Org-Id header,
TenantGuard 404'd them, gin's NoRoute proxied to canvas, canvas
returned its HTML 404 page, jq read an empty git_sha, and the verifier
silently soft-warned every tenant as "unreachable" — which the
workflow doesn't fail on.

Confirmed externally:
  curl https://hongmingwang.moleculesai.app/buildinfo
  → HTTP 404 + Content-Type: text/html (Next.js "404: This page
    could not be found.") even though /health on the same host
    returns {"status":"ok"} from gin.

The buildinfo package's own doc already declares /buildinfo public
by design ("Public is intentional: it's a build identifier, not
operational state. The same string is already published as
org.opencontainers.image.revision on the container image, so no new
info is exposed.") — the allowlist just missed it.

Pin the alignment in tenant_guard_test.go:
TestTenantGuard_AllowlistBypassesCheck now asserts /buildinfo
returns 200 without an org header alongside /health and /metrics,
so a future allowlist edit can't silently regress the verifier
again.

Closes the silent-success failure mode: stale tenants will now
show up as STALE (hard-fail) rather than UNREACHABLE (soft-warn).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:54:51 -07:00
Hongming Wang
235aca9908 fix(boot): always start health-sweep goroutine — SaaS tenants need it for external-runtime liveness
Pre-fix, cmd/server/main.go gated the entire health-sweep goroutine on
`prov != nil`. On SaaS tenants (`MOLECULE_ORG_ID` set) the local Docker
provisioner is never initialized — only `cpProv`. So the goroutine
never started, and `sweepStaleRemoteWorkspaces` (which transitions
runtime='external' workspaces from 'online' to 'awaiting_agent' when
their last_heartbeat_at goes stale) never ran.

Net effect on production: every external-runtime workspace on SaaS
that lost its agent stayed 'online' indefinitely instead of falling
back to 'awaiting_agent' (re-registrable). The drift gate (#2388)
caught the migration side and #2382 fixed the SQL writes, but this
orchestration-side gate slipped through both because there was no
SaaS-mode E2E coverage on the heartbeat-loss → awaiting_agent
transition.

Caught by #2392 (live staging external-runtime regression E2E)
failing at step 6 — 180s with no heartbeat, expected
status=awaiting_agent, got online.

Fix: drop the `if prov != nil` gate. `StartHealthSweep` already
handles nil checker correctly (healthsweep.go:50-71): the Docker
sweep is gated inside the loop, the remote sweep always runs. Test
coverage already exists at TestStartHealthSweep_NilCheckerRunsRemoteSweep.

After this lands and tenants redeploy, #2392 step 6 passes and the
regression coverage closes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 12:05:40 -07:00
Hongming Wang
c06e2fec5e
Merge pull request #2396 from Molecule-AI/auto/typed-workspace-status
refactor(workspace-status): typed constants + AST-based drift gate
2026-04-30 18:03:30 +00:00
Hongming Wang
998e13c4bd feat(deploy): verify each tenant /buildinfo matches published SHA after redeploy
Closes the gap that let issue #2395 ship: redeploy-fleet workflows reported
ssm_status=Success based on SSM RPC return code alone, while EC2 tenants
silently kept serving the previous :latest digest because docker compose up
without an explicit pull is a no-op when the local tag already exists.

Wire:
  - new buildinfo package exposes GitSHA, set at link time via -ldflags from
    the GIT_SHA build-arg (default "dev" so test runs without ldflags fail
    closed against an unset deploy); sketched after this list
  - router exposes GET /buildinfo returning {git_sha} — public, no auth,
    cheap enough to curl from CI for every tenant
  - both Dockerfiles thread GIT_SHA into the Go build
  - publish-workspace-server-image.yml passes GIT_SHA=github.sha for both
    images
  - redeploy-tenants-on-main.yml + redeploy-tenants-on-staging.yml curl each
    tenant's /buildinfo after the redeploy SSM RPC and fail the workflow on
    digest mismatch; staging treats both :latest and :staging-latest as
    moving tags; verification is skipped only when an operator pinned a
    specific tag via workflow_dispatch
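
The package is essentially one stamped variable plus one route —
sketched (module import path elided):

    // buildinfo/buildinfo.go
    package buildinfo

    // GitSHA is stamped at link time:
    //   go build -ldflags "-X <module>/buildinfo.GitSHA=$GIT_SHA"
    // The "dev" default makes un-stamped builds fail verification
    // closed instead of matching anything.
    var GitSHA = "dev"

    // router wiring (gin), matching the jq lookup's wire shape:
    r.GET("/buildinfo", func(c *gin.Context) {
        c.JSON(http.StatusOK, gin.H{"git_sha": buildinfo.GitSHA})
    })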

Tests:
  - TestGitSHA_DefaultDevSentinel pins the dev default
  - TestBuildInfoEndpoint_ReturnsGitSHA pins the wire shape that the
    workflow's jq lookup depends on

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 10:55:08 -07:00
Hongming Wang
188db33794 refactor(workspace-status): catch missed literal in workspace_bootstrap.go + add literal-drift gate
Two related fixes after self-review of #2396:

1. workspace_bootstrap.go:62 — `SET status = 'failed'` was missed in the
   initial sweep. Now parameterized as $3 with models.StatusFailed.
   Test fixed with the additional WithArgs sentinel.

2. Drift gate now scans production .go AST for hard-coded
   `UPDATE workspaces … SET status = '<literal>'` and fails with
   file:line. This catches the kind of miss the first commit just
   fixed — the original migration-vs-codebase axis only verified
   AllWorkspaceStatuses ⊆ enum, not "no raw literals in writes."

Verified the gate fires: dropped a synthetic 'failed' literal into
internal/handlers/_drift_sanity.go and confirmed the gate flagged
"internal/handlers/_drift_sanity.go:6 → SET status = 'failed'".

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 10:51:01 -07:00
Hongming Wang
fdf1b5d76a refactor(workspace-status): typed constants + AST-based drift gate
Eliminate raw 'awaiting_agent'/'hibernating'/'failed'/etc string literals
from production status writes. Adds models.WorkspaceStatus typed alias and
models.AllWorkspaceStatuses canonical slice; every UPDATE workspaces SET
status = ... now passes a parameterized $N typed value rather than a
hard-coded SQL literal.
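
The model-side shape, roughly (the constant set here is illustrative —
only statuses that appear in this log):

    package models

    // WorkspaceStatus: writes go through typed constants, never raw
    // SQL string literals.
    type WorkspaceStatus string

    const (
        StatusProvisioning  WorkspaceStatus = "provisioning"
        StatusOnline        WorkspaceStatus = "online"
        StatusOffline       WorkspaceStatus = "offline"
        StatusAwaitingAgent WorkspaceStatus = "awaiting_agent"
        StatusHibernating   WorkspaceStatus = "hibernating"
        StatusFailed        WorkspaceStatus = "failed"
        StatusRemoved       WorkspaceStatus = "removed"
    )

    // AllWorkspaceStatuses is the canonical slice the drift gate
    // checks against the migration enum.
    var AllWorkspaceStatuses = []WorkspaceStatus{
        StatusProvisioning, StatusOnline, StatusOffline,
        StatusAwaitingAgent, StatusHibernating, StatusFailed,
        StatusRemoved,
    }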

Defense-in-depth follow-up to migration 046 (#2388): the Postgres enum
type was missing 'awaiting_agent' + 'hibernating' for ~5 days because
sqlmock regex matching cannot enforce live enum constraints. The drift
gate is now a proper Go AST + SQL parser (no regex), asserting the
codebase ⊆ migration enum and every const appears in the canonical
slice. With status as a parameterized typed value, future enum mismatches
fail at the SQL layer in tests, not silently in prod.

Test coverage: full suite passes with -race; drift gate green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 10:41:41 -07:00
Hongming Wang
e081c8335f refactor(handlers): widen WorkspaceHandler.provisioner to LocalProvisionerAPI interface (#2369)
Symmetric with the existing CPProvisionerAPI interface. Closes the
asymmetry where the SaaS provisioner field was an interface (mockable
in tests) but the Docker provisioner field was a concrete pointer
(not).

## Changes

- New ``provisioner.LocalProvisionerAPI`` interface — the 7 methods
  WorkspaceHandler / TeamHandler call on h.provisioner today: Start,
  Stop, IsRunning, ExecRead, RemoveVolume, VolumeHasFile,
  WriteAuthTokenToVolume. Compile-time assertion confirms *Provisioner
  satisfies it. Mirror of cp_provisioner.go's CPProvisionerAPI block.
- ``WorkspaceHandler.provisioner`` and ``TeamHandler.provisioner``
  re-typed from ``*provisioner.Provisioner`` to
  ``provisioner.LocalProvisionerAPI``. Constructor parameter type is
  unchanged — the assignment widens to the interface, so the 200+
  callers of ``NewWorkspaceHandler`` / ``NewTeamHandler`` are
  unaffected.
- Constructors gain an ``if p != nil`` guard before assigning to the
  interface field. Without this, ``NewWorkspaceHandler(..., nil, ...)``
  (the test fixture pattern across 200+ tests) yields a typed-nil
  interface value where ``h.provisioner != nil`` evaluates *true*,
  and the SaaS-vs-Docker fork incorrectly routes nil-fixture tests
  into the Docker code path. Documented inline with reference to
  the Go FAQ; see the sketch after this list.
- Hardened the 5 Provisioner methods that lacked nil-receiver guards
  (Start, ExecRead, WriteAuthTokenToVolume, RemoveVolume,
  VolumeHasFile) — return ErrNoBackend on nil receiver instead of
  panicking on p.cli dereference. Symmetric with Stop/IsRunning
  (already hardened in #1813). Defensive cleanup so a future caller
  that bypasses the constructor's nil-elision still degrades
  cleanly.
- Extended TestZeroValuedBackends_NoPanic with 5 new sub-tests
  covering the newly-hardened nil-receiver paths. Defense-in-depth:
  a future refactor that drops one of the nil-checks fails red here
  before reaching production.
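
Self-contained demo of the typed-nil gotcha the constructor guard
avoids:

    package main

    import "fmt"

    type API interface{ Stop() }

    type Provisioner struct{}

    func (p *Provisioner) Stop() {}

    func main() {
        var p *Provisioner // nil concrete pointer — the test fixture case
        var api API = p    // interface holds (type=*Provisioner, value=nil)
        fmt.Println(api == nil) // false: a plain nil-check no longer guards
    }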

## Why now

- Provisioner orchestration has been touched in #2366 / #2368 — the
  interface symmetry is the natural follow-up captured in #2369.
- Future work (CP fleet redeploy endpoint, multi-backend
  provisioners) wants this in place. Memory note
  ``project_provisioner_abstraction.md`` calls out pluggable
  backends as a north-star.
- Memory note ``feedback_long_term_robust_automated.md`` —
  compile-time gates + ErrNoBackend symmetry > runtime panics.

## Verification

- ``go build ./...`` clean.
- ``go test ./...`` clean — 1300+ tests pass, including the
  previously-flaky Create-with-nil-provisioner paths that now
  exercise the constructor's nil-elision correctly.
- ``go test ./internal/provisioner/ -run TestZeroValuedBackends_NoPanic
  -v`` — all 11 nil-receiver subtests green (was 6, +5 for the
  newly-hardened methods).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 09:18:16 -07:00
Hongming Wang
c6cb82e1c0 fix(workspaces): add missing 'awaiting_agent' + 'hibernating' to workspace_status enum
Migration 043 (2026-04-25) introduced the workspace_status enum but
omitted two values application code had been writing for days, so every
UPDATE that tried to write either value failed silently in production:

  'awaiting_agent' (since 2026-04-24, commit 1e8b5e01):
    - handlers/workspace.go:333 — external workspace pre-register
    - handlers/registry.go (via PR #2382) — liveness offline transition
    - registry/healthsweep.go (via PR #2382) — heartbeat-staleness sweep

  'hibernating' (since hibernation feature shipped):
    - handlers/workspace_restart.go:271 — DB-level claim before stop

All four sites swallowed the enum-cast error. User-visible impact:
external workspaces never transition to a stale state when their agent
disconnects (canvas shows them stuck on 'online'/'degraded' indefinitely),
new external workspaces never advance past 'provisioning', and idle
workspaces never auto-hibernate (resources held forever).

PR #2382 didn't *cause* this — it inherited the gap and added two more
silent-fail paths on top. The pre-existing two had been broken for five
days and went unnoticed because:

  1. sqlmock matches SQL by regex, not against the live enum constraint.
     Every test passed despite the prod-only failure.
  2. The handlers either drop the Exec error entirely (workspace.go:333)
     or log+continue without an alert (the other three).

Fix in three pieces:

  1. migrations/046_*.up.sql — ALTER TYPE workspace_status ADD VALUE
     'awaiting_agent', 'hibernating'. IF NOT EXISTS makes it idempotent
     across re-runs (RunMigrations re-applies until schema_migrations
     records the file). ALTER TYPE ADD VALUE doesn't take a heavy lock
     and commits immediately, safe under live traffic.

  2. migrations/046_*.down.sql — full rename → recreate → cast → drop
     recipe. Postgres has no DROP VALUE so this is the only honest
     rollback. Pre-flights existing rows to compatible values
     (awaiting_agent → offline, hibernating → hibernated) before the
     type swap.

  3. internal/db/workspace_status_enum_drift_test.go — static gate that
     parses every UPDATE/INSERT against `workspaces` in workspace-server/
     internal/, extracts every status literal, and asserts each is in
     the enum union (CREATE TYPE + every ALTER TYPE ADD VALUE). The gate
     runs in unit tests, no DB required, and would have caught both
     omissions on the day they shipped. Pattern matches
     feedback_behavior_based_ast_gates and feedback_mock_at_drifting_layer.

Verification:
  - go test ./internal/db/ -count=1 -race  ✓
  - go vet ./...                           ✓
  - Drift gate flips red if I delete either ADD VALUE from the migration
    (validated via local mutation).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 08:52:05 -07:00
Hongming Wang
830e4aa548 refactor(chat_files): extract streamWorkspaceResponse helper for Upload+Download
The "do request → check err → defer close → forward headers → set
status → io.Copy → log mid-stream errors" tail was duplicated between
Upload and Download. Each handler had ~12 lines that differed only in:

  - the op label in log messages ("upload" vs "download")
  - the set of response headers to forward verbatim
    (Upload: Content-Type only; Download: Content-Type +
    Content-Length + Content-Disposition)

Hoist into ChatFilesHandler.streamWorkspaceResponse(c, op,
workspaceID, forwardURL, req, forwardHeaders). Each call site
reduces to one line. Future changes — request-id forwarding,
observability metric, response-size cap, bytes-streamed log —
go in ONE place rather than two.

Same drift-prevention rationale as resolveWorkspaceForwardCreds
(#2372) and readOrLazyHealInboundSecret (#2376), applied to the
response-streaming layer of the same handlers.

Behavior preserved: existing TestChatUpload_* and TestChatDownload_*
integration tests (8 across both handlers) all pass unchanged. The
log message format is consistent across both handlers now (single
"chat_files {op}: ..." string template) — operators can grep one
prefix for both features instead of separate prefixes per handler.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 08:27:45 -07:00
Hongming Wang
284511f02e feat(external): default external runtime to poll-mode + awaiting_agent
Paired molecule-core change for the molecule-cli `molecule connect`
RFC (https://github.com/Molecule-AI/molecule-cli/issues/10).

After this PR an `external`-runtime workspace's full lifecycle
matches the operator-driven model: it boots in awaiting_agent, the
CLI connects in poll mode without operator-side flag tuning, the
heartbeat-loss path lands back on awaiting_agent (re-registrable)
instead of the terminal-feeling 'offline'.

Two changes in workspace-server:

1) `resolveDeliveryMode` (registry.go) now reads `runtime` alongside
   `delivery_mode`. Resolution order (sketched below):
     a. payload.delivery_mode if non-empty (operator override)
     b. row's existing delivery_mode if non-empty (preserves prior
        registration)
     c. **NEW:** "poll" if row.runtime = "external" — external
        operators run on laptops without public HTTPS; push-mode
        would hard-fail at validateAgentURL anyway. (`molecule connect`
        registers without --mode and expects this default.)
     d. "push" otherwise (historical default for platform-managed
        runtimes — langgraph, hermes, claude-code, etc.)
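
   As straight-line code (a paraphrase, not the literal body):

       func resolveDeliveryMode(payloadMode, rowMode, runtime string) string {
           if payloadMode != "" { // a. operator override
               return payloadMode
           }
           if rowMode != "" { // b. preserve prior registration
               return rowMode
           }
           if runtime == "external" { // c. NEW: laptops, no public HTTPS
               return "poll"
           }
           return "push" // d. historical default, platform-managed runtimes
       }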

2) Heartbeat-loss for external workspaces lands them in
   `awaiting_agent` instead of `offline`. Two code paths:
   - `liveness.go` — Redis TTL expiration. Uses a CASE expression
     so the conditional is one UPDATE (no extra round-trip for
     non-external runtimes, no TOCTOU between runtime read and
     status write).
   - `healthsweep.go::sweepStaleRemoteWorkspaces` — DB-side
     last_heartbeat_at age scan. This sweep is already external-
     only by query filter, so the UPDATE just hard-codes the new
     status.

   The Docker-side `sweepOnlineWorkspaces` keeps `offline` —
   recovery there is "restart the container", not "re-register from
   the operator's box".

Why awaiting_agent over offline for external:
- Matches the status the workspace was created in (workspace.go:333).
- The CLI re-registers on every invocation; awaiting_agent → online
  is the natural transition. offline is a terminal-feeling status
  that implies operator intervention is needed.
- An operator who closed their laptop overnight should see
  awaiting_agent in canvas, not 'offline (something is wrong)'.

Test plan:
- Existing: 9 `resolveDeliveryMode` test sites updated to the new
  query shape. Sqlmock now reads `delivery_mode, runtime` columns.
- New: TestRegister_ExternalRuntime_DefaultsToPoll asserts the
  external→poll branch. TestRegister_NonExternalRuntime_StillDefaultsToPush
  guards against the new branch overshooting (langgraph keeps push).
- Liveness: regex updated to match the CASE expression.
- Healthsweep: `TestSweepStaleRemoteWorkspaces_MarksStaleAwaitingAgent`
  (renamed for grep-ability), Docker-side sweepOnlineWorkspaces test
  unchanged (verified to still match `'offline'`).
- Full handlers + registry suite green under -race (12.873s + 2.264s).

No migration needed — `status` is a free-form text column; both
'offline' and 'awaiting_agent' are existing values used elsewhere
(workspace.go uses awaiting_agent on initial external creation).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 06:39:57 -07:00
Hongming Wang
233a912cbe test(provision): direct unit tests for readOrLazyHealInboundSecret
The helper landed in #2376 and is exercised via chat_files + registry
integration tests. Those tests conflate the helper's behavior with the
caller's response shape — a future refactor that broke the (secret,
healed, err) contract subtly (e.g. returning healed=true on a
read-success path, or swallowing a mint error) might still pass them.

Adds 4 direct sub-tests pinning each branch of the contract:

  - secret already present → (s, false, nil)
  - secret missing, mint succeeds → (minted, true, nil)
  - secret missing, mint fails → ("", false, err)
  - read fails (non-NoInboundSecret) → ("", false, err)

Each sub-case asserts the return tuple shape AND mock.ExpectationsWereMet
(for the success path) so a future helper change that skips a DB op
trips the gate immediately.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 04:41:13 -07:00
Hongming Wang
30a569c742 refactor: extract readOrLazyHealInboundSecret to dedup chat_files + registry
The lazy-heal-on-miss pattern landed in two places this session:
PR #2372 (chat_files.go::resolveWorkspaceForwardCreds — Upload + Download)
and PR #2375 (registry.go::Register). Both implementations did the same
thing:

  read → if ErrNoInboundSecret then mint inline → return outcome

Different response-shape requirements but the same core mechanic. Three
sites' worth of drift potential: any future heal-time condition we add
(audit log, alert, secret rotation, observability) had to be applied to
each site, with partial application silently re-opening the gap.

Fix: extract readOrLazyHealInboundSecret in workspace_provision_shared.go
returning (secret, healed, err). Each caller maps the outcome to its
response shape:

  - chat_files: healed=true → 503 with retry hint; err != nil → 503 with
    RFC-#2312 reprovision hint
  - registry:   healed=true|false + err==nil → include in response;
    err != nil → omit field (workspace can retry on next register)

Net effect:

  - Single source of truth for the read+heal mechanic
  - Response-shape decisions stay in callers (they DO differ per feature)
  - Future heal-time conditions go in one place
  - Behavior preserved: existing TestRegister_NoInboundSecret_LazyHeals,
    TestRegister_NoInboundSecret_LazyHealMintFailureOmitsField,
    TestChatUpload_NoInboundSecret_LazyHeal*,
    TestChatDownload_NoInboundSecret_LazyHeal* all pass unchanged

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 04:11:43 -07:00
Hongming Wang
f3f5c4537b fix(registry): lazy-heal platform_inbound_secret on register for legacy workspaces
Pre-fix: a legacy SaaS workspace with NULL platform_inbound_secret
needed two round-trips before chat upload worked:

  1. Workspace registers → response missing platform_inbound_secret
  2. User attempts chat upload → chat_files lazy-heals platform-side
     (RFC #2312 backfill) → 503 + retry-after
  3. Workspace heartbeats → register response now includes the
     freshly-minted secret → workspace writes /configs/.platform_inbound_secret
  4. User retries chat upload → workspace bearer matches → 200

The platform-side lazy-heal in chat_files.go (#2366) closes the
existing-workspace gap, but the user-visible round-trip dance is
still ugly.

Fix: lazy-heal at register time too. When ReadPlatformInboundSecret
returns ErrNoInboundSecret, mint inline and include the freshly-
minted secret in the register response. Collapses the dance to a
single round-trip:

  1. Workspace registers → response includes lazy-healed secret
  2. User attempts chat upload → workspace bearer matches → 200

Failure model: best-effort. Mint failure logs and falls through to
omitting the field (workspace will retry on next register call).
The 200 response status is preserved — register success doesn't
hinge on the inbound-secret heal.

Tests:

  - TestRegister_NoInboundSecret_LazyHeals: pins the success branch.
    Mocks the UPDATE explicitly + asserts ExpectationsWereMet, so a
    regression that skipped the mint would fail loudly. Replaces
    the prior TestRegister_NoInboundSecret_OmitsField which
    "passed" on this branch only because sqlmock-unmatched-UPDATE
    coincidentally drove the omit-field error path.
  - TestRegister_NoInboundSecret_LazyHealMintFailureOmitsField:
    pins the failure branch — explicit UPDATE error → 200 + field
    absent.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 03:44:50 -07:00
Hongming Wang
343e164f5f
Merge pull request #2374 from Molecule-AI/auto/wsauth-token-lookup-helper
refactor(wsauth): extract lookupTokenByHash to dedup auth predicate across 3 callers
2026-04-30 10:14:40 +00:00
Hongming Wang
64822dac49 refactor(wsauth): extract lookupTokenByHash to dedup auth predicate across 3 callers
ValidateToken, WorkspaceFromToken, and ValidateAnyToken each duplicated
the same JOIN+WHERE auth predicate:

    FROM workspace_auth_tokens t
    JOIN workspaces w ON w.id = t.workspace_id
    WHERE t.token_hash = $1
      AND t.revoked_at IS NULL
      AND w.status != 'removed'

Same drift class as the SaaS provision-mint bug fixed in #2366. A
future safety addition (e.g. exclude paused workspaces from auth) would
have to be applied to all three queries; a partial application would
silently re-open one auth path while closing the others.

Fix: hoist the predicate into lookupTokenByHash, which projects
(id, workspace_id) — the union of fields any caller needs. Each
public function picks what it uses:

  - ValidateToken      — needs both (compares workspaceID, updates last_used_at by id)
  - WorkspaceFromToken — needs workspace_id
  - ValidateAnyToken   — needs id

The trivial perf cost of selecting one extra column per call is worth
the single-source-of-truth guarantee for the auth predicate.

Test mock updates: two upstream test files (a2a_proxy_test, middleware
wsauth_middleware_test{,_canvasorbearer_test}) had hand-typed regex
matchers and row shapes pinned to the per-function SELECT projection.
Updated to the unified shape; behavior is unchanged.

All wsauth + middleware + handlers + full-module tests green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 03:11:38 -07:00
Hongming Wang
17760e10d2
Merge pull request #2373 from Molecule-AI/auto/admin-test-token-mock-coverage
test(admin_test_token): pin ADMIN_TOKEN IDOR-fix (#112) gate behavior
2026-04-30 10:02:13 +00:00
Hongming Wang
e403d74a3d test(admin_test_token): pin ADMIN_TOKEN IDOR-fix (#112) gate behavior
The admin test-token endpoint has a critical security check at
admin_test_token.go:64-72 — the IDOR fix from #112 that requires an
explicit ADMIN_TOKEN bearer when the env var is set. Pre-fix, the
route accepted ANY bearer that matched a live org token, allowing
cross-org test-token minting (and therefore cross-org workspace
authentication). The current code uses subtle.ConstantTimeCompare
against ADMIN_TOKEN.
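
The gate under test, approximately (paraphrased):

    // When ADMIN_TOKEN is set, only a constant-time match admits;
    // org tokens never do.
    adminToken := os.Getenv("ADMIN_TOKEN")
    if adminToken != "" {
        got := strings.TrimPrefix(c.GetHeader("Authorization"), "Bearer ")
        if subtle.ConstantTimeCompare([]byte(got), []byte(adminToken)) != 1 {
            c.AbortWithStatusJSON(http.StatusUnauthorized, gin.H{"error": "unauthorized"})
            return
        }
    }
    // ADMIN_TOKEN unset: gate bypassed (local dev / CI path).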

Test coverage was zero. The existing tests exercised the
ADMIN_TOKEN-unset path (local dev / CI) but never set ADMIN_TOKEN.
A regression that:

  - removed the os.Getenv("ADMIN_TOKEN") check
  - inverted the comparison
  - replaced ConstantTimeCompare with bytes.Equal (timing leak)
  - re-introduced the AdminAuth fallback that allows org tokens

would not fail any test, and the breakage would re-open the IDOR
that #112 closed.

Adds four tests covering the gate matrix:

  - ADMIN_TOKEN set + no Authorization header → 401
  - ADMIN_TOKEN set + wrong Authorization → 401
  - ADMIN_TOKEN set + correct Authorization → 200
  - ADMIN_TOKEN unset + no Authorization → 200 (gate bypassed safely)

The 4-row matrix pins the gate's full truth table: any regression in
either dimension (gate enabled/disabled, header correct/wrong) trips
exactly one test.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 02:59:08 -07:00
Hongming Wang
264e726672
Merge pull request #2372 from Molecule-AI/auto/chat-files-resolve-creds-helper
refactor(chat_files): extract resolveWorkspaceForwardCreds shared by Upload+Download
2026-04-30 09:54:56 +00:00
Hongming Wang
501a42d753 refactor(chat_files): extract resolveWorkspaceForwardCreds shared by Upload+Download
The 50-line "resolve URL + read inbound secret + lazy-heal on miss"
block was duplicated nearly verbatim between Upload and Download
handlers. Drift-prone — same class of risk as the original SaaS
provision drift fixed in #2366. A future change like:

  - secret rotation (re-mint when the row's older than X)
  - per-feature audit logging
  - additional fail-closed conditions

would have to be applied to both handlers, and a partial application
that healed Upload but skipped Download would surface only at runtime.

Fix: hoist the shared logic into resolveWorkspaceForwardCreds. The
function takes an op label ("upload"/"download") used in log messages
+ the 503 RFC-#2312 detail copy so operators can still distinguish
which feature ran. Both handlers reduce to:

    wsURL, secret, ok := resolveWorkspaceForwardCreds(c, ctx, workspaceID, "upload")
    if !ok { return }

Net -20 lines (helper amortizes the 50-line block across both call
sites). Existing test coverage (TestChatUpload_NoInboundSecret_*,
TestChatDownload_NoInboundSecret_* from PR #2370) covers all four
branches of the shared helper.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 02:51:53 -07:00
Hongming Wang
29368dd749
Merge pull request #2371 from Molecule-AI/auto/team-expand-mint-fix
test(provision): pin PARENT_ID env injection contract in prepareProvisionContext
2026-04-30 09:45:19 +00:00
Hongming Wang
4ba12668f0 test(provision): pin PARENT_ID env injection contract in prepareProvisionContext
#2367 moved PARENT_ID env injection from inline TeamHandler.Expand
into the shared prepareProvisionContext (sourced from
payload.ParentID). The test was missing — a regression that:
  - dropped the injection
  - inverted the nil-check
  - leaked an empty PARENT_ID="" into env

would not fail any existing test, but workspace/coordinator.py reads
PARENT_ID on startup to track parent-child relationship, so the
breakage would surface only at runtime.

Adds TestPrepareProvisionContext_ParentIDInjection with three
sub-cases:
  - nil ParentID → no PARENT_ID env
  - empty-string ParentID → no PARENT_ID env (don't pollute)
  - set ParentID → PARENT_ID env equals value

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 02:41:41 -07:00
Hongming Wang
ab6bcc030c
Merge pull request #2370 from Molecule-AI/auto/lazy-heal-test-coverage
test(chat_files): pin lazy-heal mint contract for both Upload and Download
2026-04-30 09:41:40 +00:00
Hongming Wang
6c065a02e6 test(chat_files): pin lazy-heal mint contract for both Upload and Download
The 2026-04-30 lazy-heal fix in chat_files.go (PR #2366) ATTEMPTS to
mint platform_inbound_secret on miss so legacy workspaces self-heal
without requiring destructive reprovision. The pre-existing
TestChatUpload_NoInboundSecret + TestChatDownload_NoInboundSecret
tests asserted the 503 response shape but did NOT pin that the mint
UPDATE actually fires — they happened to exercise the mint-failure
branch (sqlmock unmatched UPDATE = error = "Failed to mint" code path
returns 503 with "RFC #2312" detail, which still passed the original
assertions).

This means a regression that:
  - skipped the lazy-heal mint entirely
  - inverted the success/failure response branches
  - moved the mint to a different code path

would not fail those tests.

Fix:

  - TestChatUpload_NoInboundSecret_LazyHeal: mock the UPDATE
    successfully; assert sqlmock.ExpectationsWereMet (mint MUST run)
    + body contains "retry" + "30" (success branch).
  - TestChatUpload_NoInboundSecret_LazyHealFailure: mock the UPDATE
    to fail; assert body contains "Reprovision" (failure branch).
  - Same pair for the Download handler — independent code path means
    independent test.

Pins both branches of both handlers (4 tests) so future drift trips
the gate.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 02:38:28 -07:00
Hongming Wang
bb52a1a365 fix(team): delegate Expand child-provisioning to shared mint pipeline (#2367)
Closes #2367.

TeamHandler.Expand provisioned child workspaces by directly calling
h.provisioner.Start, skipping mintWorkspaceSecrets and every other
preflight (secrets load, env mutators, identity injection, missing-env,
empty-config-volume auto-recover). Children shipped with NULL
platform_inbound_secret + never-issued auth_token — same drift class as
the SaaS bug just fixed in PR #2366, found while exercising a stronger
gate against this package.

Fix:

- TeamHandler now holds *WorkspaceHandler. Expand delegates each child
  provision to wh.provisionWorkspace, picking up the shared
  prepare/mint/preflight pipeline automatically. Future provision-time
  steps go in ONE place and team-expand inherits them.
- prepareProvisionContext gains PARENT_ID env injection sourced from
  payload.ParentID (which Expand now populates). This preserves the
  signal workspace/coordinator.py reads on startup, without threading
  env through provisioner.WorkspaceConfig manually.
- NewTeamHandler signature gains *WorkspaceHandler; router passes it.

Gate upgrade:

- TestProvisionFunctions_AllCallMintWorkspaceSecrets is now
  behavior-based: it walks every FuncDecl in the package and flags any
  function that calls h.provisioner.Start or h.cpProv.Start without
  also calling mintWorkspaceSecrets. Drift-resistant by construction —
  a future provision function with any name still trips the gate.
- Replaces the name-list version from PR #2366. The name list missed
  Expand precisely because Expand wasn't named provision*; the
  behavior-based detector caught it spontaneously when prototyped.
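
Core of the detector, sketched (runs per parsed file inside the test;
receiver checks and bookkeeping trimmed):

    for _, decl := range file.Decls {
        fn, ok := decl.(*ast.FuncDecl)
        if !ok || fn.Body == nil {
            continue
        }
        callsStart, callsMint := false, false
        ast.Inspect(fn.Body, func(n ast.Node) bool {
            call, ok := n.(*ast.CallExpr)
            if !ok {
                return true
            }
            switch fun := call.Fun.(type) {
            case *ast.SelectorExpr:
                // Real gate also verifies the receiver is
                // h.provisioner / h.cpProv, not any .Start.
                switch fun.Sel.Name {
                case "Start":
                    callsStart = true
                case "mintWorkspaceSecrets":
                    callsMint = true
                }
            case *ast.Ident:
                if fun.Name == "mintWorkspaceSecrets" {
                    callsMint = true
                }
            }
            return true
        })
        if callsStart && !callsMint {
            t.Errorf("%s calls a provisioner Start without mintWorkspaceSecrets", fn.Name.Name)
        }
    }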

Tests: full workspace-server module green; gate previously verified to
fire red on Expand pre-fix and on deliberate mintWorkspaceSecrets
removal.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 02:28:29 -07:00
Hongming Wang
3f8286ea47 fix(provision): share Docker+SaaS prepare path so both mint workspace secrets (RFC #2312)
Root cause of 2026-04-30 silent-503 chat-upload bug: provisionWorkspaceCP
(SaaS) skipped issueAndInjectInboundSecret while provisionWorkspaceOpts
(Docker) called it. Every prod SaaS workspace provisioned with NULL
platform_inbound_secret → upload returned 503 with the v2-enrollment
message on every attempt.

Structural fix:
- Extract prepareProvisionContext (secrets load, env mutators, preflight,
  cfg build), mintWorkspaceSecrets (auth_token + platform_inbound_secret),
  markProvisionFailed (broadcast + DB update) into workspace_provision_shared.go
- Refactor both provision modes to call the shared helpers
- Add provisionAbort struct so the missing-env failure class can carry its
  structured "missing" payload through the shared abort path
- Unify last_sample_error: previously the decrypt-fail path skipped it while
  others set it; users now see every failure class in the UI

Drift prevention:
- AST gate TestProvisionFunctions_AllCallMintWorkspaceSecrets asserts every
  function in the provisionFunctions set calls mintWorkspaceSecrets at least
  once (same shape as the audit-coverage gate from #335). New provision paths
  must either call mint or be added to provisionExemptFunctions with a
  one-line justification
- Behavioral test TestMintWorkspaceSecrets_PersistsInboundSecretInSaaSMode
  pins the contract: SaaS mode MUST persist platform_inbound_secret to the DB
  column even though it skips file injection

Existing-workspace recovery (chat_files.go lazy-heal):
- Upload + Download handlers detect NULL platform_inbound_secret and call
  IssuePlatformInboundSecret inline, returning 503 with retry_after_seconds=30
- Self-heals workspaces that were provisioned before this fix without
  requiring destructive reprovision (the check is sketched below)
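Roughly, the lazy-heal check could look like this — the deps struct,
column name, and error body are illustrative assumptions; only the
NULL-secret detection, the inline IssuePlatformInboundSecret call, and
the 503 + retry_after_seconds=30 contract come from this commit:

```go
package handlers

import (
	"context"
	"database/sql"
	"encoding/json"
	"log"
	"net/http"
)

// lazyHealDeps stands in for the handler's real fields.
type lazyHealDeps struct {
	db   *sql.DB
	heal func(ctx context.Context, wsID string) error // IssuePlatformInboundSecret
}

// ensureInboundSecret returns true when the secret exists and the
// request can proceed; otherwise it issues the secret inline and
// answers 503 with a retry hint so the client comes back post-heal.
func ensureInboundSecret(w http.ResponseWriter, r *http.Request, d lazyHealDeps, wsID string) bool {
	var secret sql.NullString
	err := d.db.QueryRowContext(r.Context(),
		`SELECT platform_inbound_secret FROM workspaces WHERE id = $1`, wsID,
	).Scan(&secret)
	if err == nil && secret.Valid && secret.String != "" {
		return true // healthy workspace: proceed with upload/download
	}
	if healErr := d.heal(r.Context(), wsID); healErr != nil {
		log.Printf("lazy-heal failed for workspace %s: %v", wsID, healErr)
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusServiceUnavailable)
	_ = json.NewEncoder(w).Encode(map[string]any{
		"error":               "workspace inbound secret being provisioned",
		"retry_after_seconds": 30,
	})
	return false
}
```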

Tests: full handlers + workspace-server module green; AST gate verified to
fire red on deliberate violation (commented-out mint call surfaces the
exact function name + actionable remediation message).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 02:18:08 -07:00
Hongming Wang
8e508a7a2f fix(a2a): cover CF 521/522/523 in dead-origin status set
Independent review on PR #2362 caught: the dead-agent classifier at
a2a_proxy.go included 502/503/504/524 but missed the rest of the CF
origin-failure family (521/522/523), which are MORE indicative of a
dead EC2 than 524:

- 521 "Web server is down" — CF can't open TCP to origin (most direct
  dead-EC2 signal; fires when the workspace EC2 has been terminated
  and CF still has the CNAME pointing at it).
- 522 "Connection timed out" — TCP didn't complete in ~15s (typical
  of SG/NACL flap or agent process hung on accept).
- 523 "Origin is unreachable" — CF can't route to origin (DNS gone,
  network path broken).

Pre-fix, any of these would propagate as-is to the canvas and the user
would see a 5xx without the reactive auto-restart firing — exactly
the SaaS-blind class of failure PR #2362 was meant to close.

Refactor: extracted isUpstreamDeadStatus(int) helper so the matrix is
in one place, with TestIsUpstreamDeadStatus locking in 18 status
codes (7 dead, 11 not-dead including 520 and 525 which look CF-shaped
but indicate different failures).
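The matrix reads roughly like this (a sketch of the helper's shape;
the 18-code test table is elided):

```go
// isUpstreamDeadStatus reports whether an upstream status indicates a
// dead origin worth the offline-flip/auto-restart cascade.
func isUpstreamDeadStatus(status int) bool {
	switch status {
	case 502, 503, 504, // generic gateway family
		521, // CF: web server is down
		522, // CF: connection timed out
		523, // CF: origin is unreachable
		524: // CF: origin timed out
		return true
	}
	// 520 (unknown origin error) and 525 (SSL handshake failed) look
	// CF-shaped but signal different failure classes — not dead.
	return false
}
```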

Also tightened TestStopForRestart_NoProvisioner_NoOp per the same
review: now uses sqlmock.ExpectationsWereMet to assert the dispatcher
doesn't touch the DB on the both-nil path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 01:39:04 -07:00
Hongming Wang
747c12e582 test(a2a): protocol-shape replay corpus gate (#2345 follow-up)
Backward-compat replay gate for the A2A JSON-RPC protocol surface.
Every PR that touches normalizeA2APayload OR bumps the a-2-a-sdk
version pin runs every shape in testdata/a2a_corpus/ through the
current code and asserts:

  valid/   — every shape MUST parse without error and produce a
             canonical v0.3 payload (params.message.parts list).

  invalid/ — every shape MUST be rejected with the documented
             status code and error substring.

What this prevents

The 2026-04-29 v0.2 → v0.3 silent-drop bug (PR #2349) shipped
because the SDK bump PR didn't replay v0.2-shaped inputs against
the new code; the shape-mismatch surfaced only in production when
the receiver's Pydantic validator silently rejected inbound
messages.

This gate would have caught it pre-merge. Hand-verified: reverting
the v0.2 string→parts shim in normalizeA2APayload fails 3 of the
v0.2 corpus entries with the exact rejection class the production
bug exhibited.

Corpus contents (14 entries)

valid/ (11):
  v0_2_string_content              — basic v0.2 (the broken case)
  v0_2_string_content_no_message_id — v0.2 + auto-fill messageId
  v0_2_list_content                — v0.2 with content as Part list
  v0_3_parts_text_only             — canonical v0.3
  v0_3_parts_multi_text            — multi-Part list
  v0_3_parts_with_file             — multimodal (text + file)
  v0_3_parts_with_context          — contextId for multi-turn
  v0_3_streaming_method            — message/stream variant
  v0_3_unicode_text                — emoji + multi-script
  v0_3_long_text                   — 10KB text Part
  no_jsonrpc_envelope              — bare params/method without
                                     outer envelope (legacy senders)

invalid/ (3):
  no_content_or_parts              — message has neither field
  content_is_integer               — wrong type for v0.2 content
  content_is_bool                  — wrong type, separate from int
                                     so the failure msg identifies
                                     which type-class regressed

Plus 4 inline malformed-JSON cases (truncated, not-JSON, empty,
whitespace) that can't be expressed as JSON corpus entries.

Coverage tests

The gate has 5 test functions (the first is sketched after this list):

1. TestA2ACorpus_ValidShapesParse        — replay valid/ corpus,
   assert no error + canonical v0.3 output (parts list non-empty,
   messageId non-empty, content field deleted).
2. TestA2ACorpus_InvalidShapesRejected   — replay invalid/ corpus,
   assert rejection matches recorded status + error substring.
3. TestA2ACorpus_MalformedJSONRejected   — inline cases for
   non-parseable bodies.
4. TestA2ACorpus_HasMinimumCoverage      — at least one v0.2 +
   one v0.3 entry exists (so neither side of the bridge is lost).
5. TestA2ACorpus_EveryEntryHasMetadata   — _comment/_added/_source
   on every entry per the README policy; _expect_error and
   _expect_status on invalid entries.

Documentation

testdata/a2a_corpus/README.md describes the corpus contract:
  - When to add entries (new SDK shape, new production-observed
    shape).
  - When NOT to add (test scaffolding, hypothetical futures).
  - Removal policy (breaking change, deprecation window required).

Verification

- All 24 corpus subtests pass on current main.
- Hand-test: revert the v0.2 compat shim → 3 v0.2 entries fail
  the gate with the exact rejection class the production bug
  exhibited. Confirmed.
- Whole-module go test ./... green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 01:26:02 -07:00
Hongming Wang
a27cf8f39f fix(restart): extract stopForRestart helper + add 524 to dead-agent list
Addresses code-review C1 (test goroutine race) and I2 (CF 524) on PR #2362.

C1: TestRunRestartCycle_SaaSPath_DispatchesViaCPProv invoked runRestartCycle
end-to-end, which spawns `go h.sendRestartContext(...)`. That goroutine
outlived the test, then read db.DB while the next test's setupTestDB wrote
to it — DATA RACE under -race, cascading 30+ failures across the handlers
suite. Refactored: extracted `stopForRestart(ctx, id)` from runRestartCycle
as a pure dispatcher, and rewrote the SaaS-path test to call it directly
(no async goroutine spawned). Added a no-provisioner no-op guard test.
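The extracted dispatcher is sketched below — the return type and Stop
signatures are assumed; the point is that it is synchronous, so the
test owns every goroutine it touches:

```go
// stopForRestart dispatches the Stop half of a restart cycle to
// whichever provisioner is wired. Pure and synchronous by design.
func (h *WorkspaceHandler) stopForRestart(ctx context.Context, id string) error {
	switch {
	case h.provisioner != nil: // local Docker
		return h.provisioner.Stop(ctx, id)
	case h.cpProv != nil: // SaaS: CP-backed EC2
		return h.cpProv.Stop(ctx, id)
	}
	return nil // both nil: no-op, pinned by the new guard test
}
```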

I2: Cloudflare 524 ("origin timed out") now triggers maybeMarkContainerDead
alongside 502/503/504. Same upstream signal — origin agent unresponsive.

Verified `go test -race -count=1 ./internal/handlers/...` green locally.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:58:22 -07:00
Hongming Wang
28b4e38002 fix(restart): branch provisionWorkspace dispatch on cpProv (PR #2362 amendment)
Independent review of #2362 caught a Critical gap: the previous commit
fixed the Stop dispatch in runRestartCycle but left the provisionWorkspace
dispatch unconditionally Docker-only. So on SaaS the auto-restart cycle
would Stop the EC2 successfully (good), then NPE inside provisionWorkspace's
`h.provisioner.VolumeHasFile` call. coalesceRestart's recover()-without-
re-raise (a deliberate platform-stability safeguard) silently swallowed
the panic, leaving the workspace permanently stuck in status='provisioning'
because the UPDATE on workspace_restart.go:450 had already run.

Net pre-amendment effect on SaaS: dead agent → structured 503 (good) →
workspace flipped to 'offline' (good) → cpProv.Stop succeeded (good) →
provisionWorkspace NPE swallowed (bad) → workspace permanently
'provisioning' until manual canvas restart. The headline claim of #2362
("SaaS auto-restart now works") was false on the path it shipped.

Fix: dispatch the reprovision call the same way every other call site
in the package does (workspace.go:431-433, workspace_restart.go:197+596) —
branch on `h.cpProv != nil` and call provisionWorkspaceCP for SaaS,
provisionWorkspace for Docker.

Tests:

- New TestRunRestartCycle_SaaSPath_DispatchesViaCPProv asserts cpProv.Stop
  is called when the SaaS path runs (would have caught the NPE if
  provisionWorkspace had been called instead).

- fakeCPProv updated: methods record calls and return nil/empty by
  default rather than panicking. The previous "panic on unexpected call"
  pattern was unsafe — the panic fires on the async restart goroutine
  spawned by maybeMarkContainerDead AFTER the test assertions ran, so
  the test passed by accident even though the production path was
  broken (which is exactly how the Critical bug landed). A sketch of
  the recording-fake shape follows this list.

- Existing tests still pass (full handlers + provisioner suites green).
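A sketch of the recording-fake shape — the method set and signatures
are assumed; what matters is that unexpected calls become assertable
data instead of a panic on a goroutine the test no longer observes:

```go
// fakeCPProvSketch records every call; tests assert on calls instead
// of relying on a panic that may fire after the test body returns.
type fakeCPProvSketch struct {
	mu    sync.Mutex // async restart goroutines may call concurrently
	calls []string
}

func (f *fakeCPProvSketch) record(call string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	f.calls = append(f.calls, call)
}

func (f *fakeCPProvSketch) Stop(ctx context.Context, id string) error {
	f.record("Stop:" + id)
	return nil // benign default, never panic
}

func (f *fakeCPProvSketch) IsRunning(ctx context.Context, id string) (bool, error) {
	f.record("IsRunning:" + id)
	return false, nil
}
```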

Branch-count audit refresh:

  runRestartCycle dispatch decisions:
    1. h.provisioner != nil → provisioner.Stop + provisionWorkspace ✓ (existing tests)
    2. h.cpProv != nil       → cpProv.Stop + provisionWorkspaceCP   ✓ (NEW test)
    3. both nil              → coalesceRestart never called (RestartByID gate) ✓

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:35:51 -07:00
Hongming Wang
9f35788aee fix(a2a): detect dead EC2 agents on upstream 5xx + reactive auto-restart for SaaS
Class-of-bugs fix surfaced by hongmingwang.moleculesai.app's canvas chat
to a dead workspace returning a generic Cloudflare 502 page on
2026-04-30. Three independent gaps in the reactive-health path that
together leak dead-agent failures to canvas with no auto-recovery.

## Bug 1 — maybeMarkContainerDead is a no-op for SaaS tenants

`maybeMarkContainerDead` only consulted `h.provisioner` (local Docker
provisioner). SaaS tenants set `h.cpProv` (CP-backed EC2 provisioner)
and leave `h.provisioner` nil — so the function early-returned false
on every call and dead EC2 agents never triggered the offline-flip /
broadcast / restart cascade.

Fix: extend `CPProvisionerAPI` interface with `IsRunning(ctx, id)
(bool, error)` (already implemented on `*CPProvisioner`; just needs
to surface on the interface). `maybeMarkContainerDead` now branches:
local-Docker path uses `h.provisioner.IsRunning`; SaaS path uses
`h.cpProv.IsRunning` which calls the CP's `/cp/workspaces/:id/status`
endpoint to read the EC2 state.
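In sketch form (handler and helper names assumed; only the
IsRunning-on-the-interface move and the two-path branch come from this
commit):

```go
// agentIsRunning consults whichever provisioner is wired for this
// deployment shape.
func (h *Handler) agentIsRunning(ctx context.Context, id string) (bool, error) {
	if h.provisioner != nil {
		return h.provisioner.IsRunning(ctx, id) // local Docker daemon
	}
	if h.cpProv != nil {
		// SaaS: the CP's /cp/workspaces/:id/status reports the EC2 state.
		return h.cpProv.IsRunning(ctx, id)
	}
	return true, nil // external runtime: nothing to consult, assume alive
}
```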

## Bug 2 — RestartByID short-circuits on `h.provisioner == nil`

Same shape as Bug 1: the auto-restart cascade triggered by
`maybeMarkContainerDead` calls `RestartByID` which short-circuited
when the local Docker provisioner was missing. So even if Bug 1 were
fixed, the workspace-offline state would never recover.

Fix: change the gate to `h.provisioner == nil && h.cpProv == nil`
and update `runRestartCycle` to branch on which provisioner is
wired for the Stop call. (The HTTP `Restart` handler already does
this branching correctly — we're just bringing the auto-restart path
to parity.)

## Bug 3 — upstream 502/503/504 propagated as-is, masked by Cloudflare

When the agent's tunnel returns 5xx (the "tunnel up but no origin"
shape — agent process dead but cloudflared connection still healthy),
`dispatchA2A` returns successfully at the HTTP layer with a 5xx body.
`handleA2ADispatchError`'s reactive-health path doesn't run because
that path is only triggered on transport-level errors. The pre-fix
code propagated the 502 status to canvas; Cloudflare in front of the
platform then masked the 502 with its own opaque "error code: 502"
page, hiding any structured response and any Retry-After hint.

Fix: in `proxyA2ARequest`, when the upstream returns 502/503/504, run
`maybeMarkContainerDead` BEFORE propagating. If IsRunning confirms
the agent is dead → return a structured 503 with restarting=true +
Retry-After (CF doesn't mask 503s the same way). If running, propagate
the original status (don't recycle a healthy agent on a transient
hiccup — it might have legitimately returned 502).

## Drive-by — a2aClient transport timeouts

a2aClient was `&http.Client{}` with no Transport timeouts. When a
workspace's EC2 black-holes TCP connection attempts (instance terminated
mid-flight, SG flipped, NACL bug), the OS default connect timeout is
75s on Linux / 21s on macOS —
long enough for Cloudflare's ~100s edge timeout to fire first and
surface a generic 502. Added DialContext (10s connect), TLSHandshake
(10s), and ResponseHeaderTimeout (60s). Client.Timeout DELIBERATELY
unset — that would pre-empt slow-cold-start flows (Claude Code OAuth
first-token, multi-minute agent synthesis). Long-tail body streaming
is still governed by per-request context deadline.
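In code, the profile above looks roughly like this (construction site
assumed):

```go
a2aClient := &http.Client{
	Transport: &http.Transport{
		DialContext: (&net.Dialer{
			Timeout: 10 * time.Second, // connect: fail fast on black-holed EC2s
		}).DialContext,
		TLSHandshakeTimeout:   10 * time.Second,
		ResponseHeaderTimeout: 60 * time.Second, // time to first response header
	},
	// Client.Timeout deliberately unset: per-request context deadlines
	// govern the long tail so slow cold-start flows aren't cut off.
}
```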

## Tests

- `TestMaybeMarkContainerDead_CPOnly_NotRunning` — IsRunning(false) →
  marks workspace offline, returns true.
- `TestMaybeMarkContainerDead_CPOnly_Running` — IsRunning(true) →
  no offline-flip, returns false (don't recycle a healthy agent).
- `TestProxyA2A_Upstream502_TriggersContainerDeadCheck` — agent server
  returns 502 + cpProv reports dead → caller gets 503 with restarting=
  true and Retry-After: 15.
- `TestProxyA2A_Upstream502_AliveAgent_PropagatesAsIs` — same upstream
  502 but cpProv reports running → propagates 502 (existing behavior;
  safety check that prevents over-eager recycling).
- Existing `TestMaybeMarkContainerDead_NilProvisioner` /
  `TestMaybeMarkContainerDead_ExternalRuntime` still pass.
- Full handlers + provisioner test suites pass.

## Impact

Pre-fix: dead EC2 agent on a SaaS tenant → CF-masked 502 to canvas, no
auto-recovery, manual restart from canvas required.

Post-fix: dead EC2 agent on a SaaS tenant → structured 503 with
restarting=true + Retry-After to canvas, workspace flipped to offline,
auto-restart cycle triggered. Canvas can show a user-actionable
"agent is restarting, please wait" message instead of a generic 502.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 00:28:22 -07:00
Hongming Wang
a81b0e1e3d feat(activity): since_id cursor on GET /activity (#2339 PR 3)
Telegram getUpdates / Slack RTM shape: poll-mode workspaces pass the id
of the last activity_logs row they consumed, and the server returns rows
strictly after it in chronological (ASC) order. Existing callers that
don't pass since_id keep DESC + most-recent-N — backwards-compatible.

Cursor lookup is scoped by workspace_id so a caller cannot enumerate or
peek at another workspace's events by passing a UUID belonging to a
different workspace. Cross-workspace and pruned cursors both return
410 Gone — no information leak (caller cannot distinguish "row never
existed" from "row exists but you can't see it").

since_id + since_secs both apply (AND). When since_id is set the order
flips to ASC because polling consumers need recorded order; the recent-
feed shape (no since_id) keeps DESC.
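A sketch of the two-step query shape, as a handler-body fragment —
activity_logs column names are assumed for illustration:

```go
// Step 1: resolve the cursor, scoped by workspace_id so a foreign
// UUID falls into the same 410 path as a pruned row.
var cursorTime time.Time
err := db.QueryRowContext(ctx,
	`SELECT created_at FROM activity_logs
	  WHERE id = $1 AND workspace_id = $2`,
	sinceID, workspaceID,
).Scan(&cursorTime)
if errors.Is(err, sql.ErrNoRows) {
	http.Error(w, "cursor gone", http.StatusGone)
	return
}
// Step 2: rows strictly after the cursor, ASC for polling consumers.
rows, err := db.QueryContext(ctx,
	`SELECT id, created_at, payload FROM activity_logs
	  WHERE workspace_id = $1 AND created_at > $2
	  ORDER BY created_at ASC
	  LIMIT $3`,
	workspaceID, cursorTime, limit)
```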

Tests:
- TestActivityHandler_SinceID_ReturnsNewerASC — cursor lookup → main
  query with cursorTime + ASC ordering.
- TestActivityHandler_SinceID_CursorNotFound_410 — pruned/unknown cursor.
- TestActivityHandler_SinceID_CrossWorkspaceCursor_410 — UUID belongs to
  another workspace, scoped lookup hides it (same 410 path, no leak).
- TestActivityHandler_SinceID_CombinedWithSinceSecs — placeholder index
  arithmetic with both filters.

Stacked on #2353 (PR 2: poll-mode short-circuit).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 22:51:52 -07:00
Hongming Wang
706a388806
Merge pull request #2353 from Molecule-AI/auto/issue-2339-pr2-poll-shortcircuit-v2
feat(a2a): poll-mode short-circuit in ProxyA2A (#2339 PR 2)
2026-04-30 05:29:03 +00:00
Hongming Wang
91a1d5377d feat(a2a): poll-mode short-circuit in ProxyA2A (#2339 PR 2)
Skip SSRF/dispatch and queue to activity_logs for delivery_mode=poll
workspaces. The polling agent (e.g. molecule-mcp-claude-channel on an
operator's laptop) consumes via GET /activity?since_id= in PR 3 — no
public URL needed.

Order: budget -> normalize -> lookupDeliveryMode short-circuit ->
resolveAgentURL. Normalizing before the short-circuit keeps the
JSON-RPC method name on the activity_logs row so the polling agent
can dispatch correctly.

Fail-closed-to-push: any DB error reading delivery_mode defaults to
push (loud + recoverable) rather than poll (silent drop).
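In sketch form — lookupDeliveryMode, normalizeA2APayload, and
resolveAgentURL are from this stack; jsonrpcMethod, recordActivityLog,
and writeJSON are placeholders for illustration:

```go
// Normalize before the short-circuit so the JSON-RPC method name
// lands on the activity_logs row.
norm, err := normalizeA2APayload(body)
if err != nil {
	http.Error(w, err.Error(), http.StatusBadRequest)
	return
}
mode, err := lookupDeliveryMode(ctx, db, workspaceID)
if err != nil {
	mode = "push" // fail closed to push: loud + recoverable, no silent drop
}
if mode == "poll" {
	method := jsonrpcMethod(norm) // read "method" off the envelope
	recordActivityLog(ctx, db, workspaceID, method, norm)
	writeJSON(w, http.StatusOK, map[string]string{
		"status": "queued", "delivery_mode": "poll", "method": method,
	})
	return // no SSRF resolution, no dispatch
}
agentURL, err := resolveAgentURL(ctx, workspaceID) // push path as before
```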

Tests:
- TestProxyA2A_PollMode_ShortCircuits_NoSSRF_NoDispatch — core invariant:
  no resolveAgentURL, no Do(), records to activity_logs, returns 200
  {status:"queued",delivery_mode:"poll",method:"message/send"}.
- TestProxyA2A_PushMode_NoShortCircuit — push path unaffected; the agent
  server actually receives the request.
- TestProxyA2A_PollMode_FailsClosedToPush — DB error on mode lookup
  must NOT silently queue; falls through to the push path.

Stacked on #2348 (PR 1: schema + register flow).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 22:22:28 -07:00
Hongming Wang
3da2392f95
Merge pull request #2348 from Molecule-AI/auto/issue-2339-pr1-delivery-mode
feat(workspaces): delivery_mode column + poll-mode register flow (#2339 PR 1)
2026-04-30 05:18:03 +00:00