The drift gate's monorepoRoot walk-up looked for workspace-configs-templates/,
which is gitignored locally and doesn't exist in this repo at all (the
canonical script lives in molecule-ai-workspace-template-hermes). The test
failed on CI from day one with "could not find monorepo root".
Two layered fixes in one PR:
1. Vendor upstream derive-provider.sh as testdata/ + drop monorepoRoot.
The vendored copy has a header pointing operators at the upstream
source and a one-line cp command for refresh. Test now reads two
files (vendored shell + workspace_provision.go) via package-relative
paths — Go test sets cwd to the package dir, so this is hermetic
without any walk-up gymnastics.
2. Update the case-statement regex to match upstream's renamed variable
(${_HERMES_MODEL} since v0.12.0, the resolved value of
HERMES_INFERENCE_MODEL with a HERMES_DEFAULT_MODEL legacy fallback).
Regex now accepts either spelling so a future rename fails loudly
on the parser-sanity check rather than silently returning empty.
Vendoring upstream surfaced real drift the gate was designed to catch:
upstream v0.12.0 added 12 provider prefixes that deriveProviderFromModelSlug
didn't handle (xai/grok, bedrock/aws, tencent/tencent-tokenhub, gmi,
qwen-oauth, lmstudio/lm-studio, minimax-oauth, alibaba-coding-plan,
google-gemini-cli, openai-codex, copilot-acp, copilot). Without these,
Save+Restart on a workspace using one of those prefixes would persist
LLM_PROVIDER="" and the next boot would fall back to derive-provider.sh's
runtime *=auto branch — losing the user's explicit choice on every restart.
Added all 12 case clauses + 16 new table-driven test cases (covering
both canonical and aliased forms). Drift gate now passes; future
upstream additions will fail loudly with a "DRIFT: ..." message
pointing the engineer at the missing case.
Task: #242
PR #2535 added a Go port of derive-provider.sh
(deriveProviderFromModelSlug) so workspace-server can persist
LLM_PROVIDER into workspace_secrets at provision time. This created
two sources of truth — if a future PR adds a provider prefix to one
without the other, the platform's persisted LLM_PROVIDER silently
disagrees with what the container's derive-provider.sh produces at
boot, with no test going red.
This adds a hermetic drift gate that:
1. Parses workspace-configs-templates/hermes/scripts/derive-provider.sh
with regex (handling both single-line `pat/*) PROVIDER="x" ;;`
clauses and multi-line conditional clauses) to build a
map[prefix]provider.
2. Walks workspace_provision.go's AST with go/ast, finds
deriveProviderFromModelSlug, and extracts every case-clause
prefix → return-string-literal pair.
3. Cross-checks both directions and accepts only the two documented
divergences (nousresearch/* and openai/* both → "openrouter" at
provision time because derive-provider.sh's runtime-env checks
aren't loaded yet) via a hardcoded acceptedDivergences map.
4. Fails with an actionable message that names both files and
suggests the exact fix (add the case OR add to divergence list
with a comment).
Pattern: behavior-based AST gate from PR #2367 / memory feedback —
pin the invariant by what the function maps, not by what it's named.
Stdlib-only (go/ast, go/parser, go/token, regexp); no network, no DB,
no docker — reads two monorepo files in-process.
A second sanity-check test pins anchor prefixes the regex must find,
so a future shell-syntax change can't silently produce an empty map
and trivially pass the main gate.
Closes task #242.
Three follow-ups from PR #2543's multi-model code review (audit #253).
1. **Log silent yaml.Unmarshal errors (#256).** When a malformed
config.yaml made `yaml.Unmarshal(data, &raw)` fail, the affected
template silently disappeared from /templates with no trace —
operator could not distinguish "excluded due to parse error" from
"never existed." That widened a real foot-gun once PR #2543 added
structured top-level `providers:` (a string-shaped top-level
`providers:` decoded into `[]providerRegistryEntry` would fail and
drop the whole entry). Now logs `templates list: skip <id>:
yaml.Unmarshal: <err>` and continues with the rest.
2. **Coexistence test (#257 part 1).** PR #2543 covered the structured
registry and slug list in isolation. claude-code-default in
production ships BOTH: top-level `providers:` (structured registry,
2 entries) AND `runtime_config.providers:` (slug list, 3 entries).
New `TestTemplatesList_BothProviderShapesCoexist` mirrors that
layout, asserts both shapes surface independently with no
cross-talk (e.g. a slug-only entry like `anthropic-api` does NOT
synthesize a stub in the structured registry), and pins the JSON
wire-shape for both fields side-by-side.
3. **`base_url: null` decoding assertion (#257 part 3).** Adds an
explicit `got[0].BaseURL == ""` check in the existing
`TestTemplatesList_SurfacesProviderRegistry` test, locking in the
`string` (not `*string`) type. A future change to `*string` would
surface as JSON `null` and break canvas's "no base_url = use
provider defaults" branch — caught loudly by this assertion.
Tests: 11 TestTemplatesList_* now green, including the new
MalformedYAMLLogsAndSkips and BothProviderShapesCoexist.
The remaining piece of #257 — renaming `Providers []string` JSON tag
to `provider_slugs` — requires coordinated canvas updates across 4
files and is intentionally deferred to a separate PR (no canvas
churn while user is mid-test).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the contract drift caught by audit #253. Task #235 ("Server:
enrich /templates payload with structured providers") was marked
completed, but `templates.go` only ever emitted the
`runtime_config.providers []string` slug list — the structured
ProviderEntry shape (auth_env, model_prefixes, model_aliases, base_url)
the description promised was never plumbed.
Templates ship the structured registry under a TOP-LEVEL `providers:`
block (claude-code carries 6+ entries today; hermes still uses the
slug list). Both shapes coexist and are independent — surface them as
two separate fields:
- `providers` → existing []string slug list (unchanged)
- `provider_registry` → new []providerRegistryEntry (structured)
The canvas's ProviderModelSelector comment block already anticipates
this ("Templates that ship explicit vendor metadata (future) should
override the heuristic."). With this field in place, the canvas can
optionally drop its prefix-inference fallback for templates that ship
an explicit registry — separate PR. Today's change is purely additive
on the server side; no canvas change required.
Tests:
- TestTemplatesList_SurfacesProviderRegistry: order preservation +
field plumbing on a claude-code-shaped fixture (oauth + minimax)
+ JSON wire-shape gate to catch struct-tag renames.
- TestTemplatesList_OmitsProviderRegistryWhenAbsent: omitempty so
legacy templates (hermes, langgraph) don't emit `null` and break
Array.isArray on the canvas side.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Bug B fix, server-side complement to molecule-runtime PR #2538.
The runtime PR taught `workspace/config.py` to honour
`MODEL_PROVIDER` over `runtime_config.model` from the template's
verbatim YAML. This PR is the upstream half: workspace-server's
`applyRuntimeModelEnv` now sets `MODEL=<picked>` for **every**
runtime, not just hermes (which got `HERMES_DEFAULT_MODEL` already).
Pre-fix: applyRuntimeModelEnv's per-runtime switch only emitted
HERMES_DEFAULT_MODEL for hermes; every other runtime got nothing,
so the adapter read its template's default model from
/configs/config.yaml. Surfaced 2026-05-02 — picking MiniMax-M2 in
canvas → workspace booted with model=sonnet (claude-code template
default) and demanded CLAUDE_CODE_OAUTH_TOKEN.
Post-fix: MODEL is set unconditionally before the per-runtime switch.
HERMES_DEFAULT_MODEL stays for backwards compat. Adapters opt in by
reading os.environ["MODEL"] in their executor (claude-code adapter
already does this since the same Bug B fix; see
workspace-configs-templates/claude-code-default/adapter.py).
Tests
=====
- `TestApplyRuntimeModelEnv_SetsUniversalMODELForAllRuntimes`:
table-driven across claude-code/hermes/langgraph/crewai + empty-model
fallback + MODEL_PROVIDER-secret-fallback path. Adding a new
runtime = adding a row, not writing a new test.
- All 6 sub-cases pass + existing
`TestWorkspaceCreate_FirstDeploy_UnknownModel_OnlyMintModelProvider`
pin still green.
Why now
=======
This was authored alongside the runtime PR but stashed (not committed)
during a session-handoff cleanup. The molecule-runtime side shipped at
SHA 16ac895a and is live on PyPI as molecule-ai-workspace-runtime
0.1.84, but until the workspace-server side ships, the canvas-picked
MODEL env never reaches non-hermes adapters.
Caught by the systematic stash audit triggered by the user's
discovery that ProviderModelSelector had been similarly stashed.
Closes the workspace-server side of #246. Builds on merged #2538.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Deletes the 5 unsupported workspace_templates from manifest.json
(langgraph, crewai, autogen, deepagents, gemini-cli). The runtime
matrix is now claude-code / hermes / openclaw / codex — the four
templates with shipping images, working A2A integration, and active
CI publish-image cascades.
Mirrors the prune in:
- workspace-server/internal/handlers/runtime_registry.go
(fallbackRuntimes for dev/test contexts that boot without the
manifest mounted)
- workspace-server/internal/handlers/workspace_provision.go
(sanitizeRuntime: empty/unknown → "claude-code", was "langgraph";
removes the langgraph/deepagents-specific runtime_config skip
branch — they're no longer supported, so the block is dead)
- tests for both: rename TestEnsureDefaultConfig_LangGraph →
_Hermes, TestEnsureDefaultConfig_EmptyRuntimeDefaultsToLangGraph
→ _ClaudeCode, drop TestEnsureDefaultConfig_DeepAgents,
update TestSanitizeRuntime_Allowlist + the two
TestResolveRestartTemplate_* cases that pinned langgraph-default
as the safe-default name
Why this is safe: production reads manifest.json at boot and uses it
as the authoritative allowlist; the 5 removed runtimes have not
shipped working images for ≥1 release cycle. Any provision request
naming one will now coerce to claude-code (with a log line) instead
of returning a runtime that has no functioning template repo.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When the canvas POSTs /workspaces with {model: "minimax/MiniMax-M2.7"},
the model slug was never written to workspace_secrets. The workspace
booted hermes once with HERMES_DEFAULT_MODEL set from payload.Model, but
on every subsequent restart applyRuntimeModelEnv's fallback chain found
nothing in envVars["MODEL_PROVIDER"] (because nothing wrote it) and
hermes silently fell through to the template default
(nousresearch/hermes-4-70b) — wrong provider keys → hermes gateway
401'd → /health poll failed → molecule-runtime never registered →
"container started but never called /registry/register".
Worse, LLM_PROVIDER was never written either (the canvas doesn't send
provider), so CP user-data wrote no provider: field to
/configs/config.yaml and derive-provider.sh fell through to PROVIDER=auto
on every custom-prefix slug.
Fix: after the workspace row commits, persist MODEL_PROVIDER (verbatim
slug) and LLM_PROVIDER (derived from slug prefix) to workspace_secrets.
LLM_PROVIDER is gating-only — derive-provider.sh remains the runtime
source of truth and can override at boot. Reuses extracted
setModelSecret / setProviderSecret helpers (refactored out of SetModel /
SetProvider gin handlers) so SQL stays in one place.
Symptom: failed-workspace 95ed3ff2 (2026-05-02).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Both restart paths (interactive Restart handler + auto-restart's
stopForRestart) used to log-and-continue on cpProv.Stop failure. After
PR #2500 made CPProvisioner.Stop surface CP non-2xx as an error, those
paths became the actual leak generator: every transient CP/AWS hiccup =
one orphan EC2 alongside the freshly provisioned one. The 13 zombie
workspace EC2s on demo-prep staging traced to this exact path.
Adds cpStopWithRetry helper with bounded exponential backoff (3 attempts,
1s/2s/4s). Different policy from workspace_crud.go's Delete handler:
Delete returns 500 to the client on Stop failure (loud-fail-and-block —
user asked to destroy, silent leak unacceptable), whereas Restart's
contract is "make the workspace alive again" — refusing to reprovision
strands the user with a dead workspace. So this helper retries to absorb
transient failures, then on exhaustion emits a structured `LEAK-SUSPECT`
log line for the (forthcoming) CP-side workspace orphan reconciler to
correlate. Caller proceeds to reprovision regardless.
ctx-cancel exits the retry early without sleeping the backoff (matters
during shutdown drain); the cancel path emits a distinct log line and
deliberately does NOT emit LEAK-SUSPECT — operator-cancel and
retry-exhaustion are different signals and conflating them would noise
up the orphan-reconciler queue with workspaces we never had a chance to
retry.
Tests: 5 behavior tests covering every branch (no-op, first-try success,
eventual success, exhaustion, ctx-cancel) + 1 AST gate that pins the
helper-only invariant (any future inline `h.cpProv.Stop(...)` in
workspace_restart.go fires the gate, mutation-tested).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
http.Client.Do only errors on transport failure — a CP 5xx (AWS
hiccup, missing IAM, transient outage) was silently treated as
success. Workspace row then flipped to status='removed' and the EC2
stayed alive forever with no DB pointer (the "orphan EC2 on a
0-customer account" scenario flagged in workspace_crud.go #1843).
Found while triaging 13 zombie workspace EC2s on demo-prep staging.
Adds a status-code check that returns an error tagged with the
workspace ID + status + bounded body excerpt, so the existing
loud-fail path in workspace_crud.go's Delete handler can populate
stop_failures and surface a 500. Body read is io.LimitReader-capped
at 512 bytes to keep error logs sane during a CP outage.
Tests: 4 new (5xx surfaces, 4xx surfaces, 2xx variants 200/202/204
all succeed, long body is truncated). Test-first verified — the
first three fail on the buggy code and all four pass on the fix.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two follow-ups from PR #2494's review:
1. Two new sweep tests exercise the lookup path through
sweepStuckProvisioning end-to-end:
- ManifestOverrideSparesRow: claude-code 11min old, manifest=20min
→ no UPDATE, no broadcast (sparing works through the sweeper)
- ManifestOverrideStillFlipsPastDeadline: claude-code 21min old,
manifest=20min → flipped + payload.timeout_secs=1200
Closes the gap that the unit test on provisioningTimeoutFor alone
left open: a future refactor could drop the lookup arg from the
sweeper's call and the unit test by itself would stay green. Verified by
regression-injecting `lookup→nil` in sweepStuckProvisioning — both
new tests fail, the old ones still pass.
2. addProvisionTimeoutMs now goes through ProvisionTimeoutSecondsForRuntime
instead of calling provisionTimeouts.get directly. Single accessor
path for the same data — the canvas response and the sweeper now
resolve identically by construction.
No production behavior change; tests + accessor cleanup only.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Real wiring gap discovered while investigating issue #2486 cluster of
prod claude-code workspaces failed at exactly 10m. The
runtimeProvisionTimeoutsCache (#2054 phase 2) reads
runtime_config.provision_timeout_seconds from each template's
config.yaml so the **canvas** spinner respects per-template timeouts —
but the **sweeper** in registry/provisiontimeout.go hardcoded 10 min
(claude-code) / 30 min (hermes) and never consulted the manifest. So a
template that declared a longer window had a UI that waited correctly
but a sweeper that killed the row at the hardcoded floor anyway.
Resolution order pinned by new TestProvisioningTimeout_ManifestOverride:
1. PROVISION_TIMEOUT_SECONDS env (ops-debug global override)
2. Template manifest lookup (per-runtime, beats hermes default too)
3. Hermes default (30 min — CP bootstrap-watcher 25 min + 5 min slack)
4. DefaultProvisioningTimeout (10 min)
Wiring:
- registry: new RuntimeTimeoutLookup function type, threaded through
StartProvisioningTimeoutSweep + sweepStuckProvisioning + the
pre-existing provisioningTimeoutFor.
- handlers: ProvisionTimeoutSecondsForRuntime exposes the cache's
lookup as a method so main.go can pass it without breaking the
handlers→registry import direction.
- cmd/server/main.go: wire wh.ProvisionTimeoutSecondsForRuntime into
the sweep boot.
Verified:
- go test -race ./... passes (every workspace-server package).
- Regression-injected the lookup arm: 3 manifest-override subcases
fail with the actual-vs-expected gap, confirming the new test is
load-bearing.
- The original two timeout tests (env-override, hermes default) keep
passing — `lookup=nil` argument preserves their semantics.
Operator action enabled: a template wanting a 15-min window can now
just set `runtime_config.provision_timeout_seconds: 900` in its
config.yaml and the sweeper honours it on the next workspace-server
restart.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Per review nit on PR #2491: the previous message ("a goroutine reached
cpProv.Start but never broadcast its failure") could mislead an
operator if Assertion 2 and 4 both fire — Assertion 4 also catches
"goroutine exited via an earlier path before reaching Start." Spell
both modes out and cross-reference Assertion 2.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Post-merge follow-up to PR #2487 review feedback:
1. guardAgainstReraise(fn) helper around every panic-test exercise. The
original RecoversAndMarksFailed had its own outer recover() to detect
re-raise; NoOpWhenNoPanic and PersistFailureLogged didn't. If a future
regression makes logProvisionPanic re-raise, those two would have
crashed the test process (taking sibling tests down) instead of
reporting a clean failure. Now all three use the shared guard.
2. Concurrent repro now asserts bcast.count == 7 — the new
concurrentSafeBroadcaster's count field was added in the race fix
but not actually consumed. Cross-checks the existing recorder-set
assertion from a different angle: a goroutine could in principle
reach cpProv.Start (recorder hits) but then lose its
WORKSPACE_PROVISION_FAILED broadcast on the failure path. Pinning
both rules out that silent-drop variant for the canvas-broadcast
contract specifically.
3. Comment on captureLog noting log.SetOutput is process-global and
incompatible with t.Parallel() — preempts a future footgun if
someone parallelizes the panic suite.
Verified: all four tests pass under -race; full handlers + db packages
green under -race.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI Platform (Go) ran with -race and the concurrent test tripped the
detector: captureBroadcaster (sequential-test stub) writes lastData
unguarded; 7 fan-out goroutines call markProvisionFailed → that stub
concurrently. Local non-race run had hidden it.
Introduce concurrentSafeBroadcaster (mutex-counted) for this single
fan-out test. Sequential tests keep using captureBroadcaster — the
fix is local to the test that creates the goroutines.
Verified ./internal/handlers passes with -race.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three fixes addressing review of the issue #2486 observability PR:
1. CI failure: original inline UPDATE in logProvisionPanic used a hard-coded
`status='failed'` literal, which trips workspace_status_enum_drift_test
(the post-PR-#2396 gate that requires every status write to flow through
models.Status* via parameterized $N). Refactor to call
h.markProvisionFailed which uses StatusFailed parameterized.
2. Canvas-broadcast gap (review finding): inline UPDATE skipped
RecordAndBroadcast, so panic recovery marked the row failed in DB but
the canvas spinner stayed on "provisioning" until the next poll.
markProvisionFailed fires WORKSPACE_PROVISION_FAILED, so canvas now
flips to a failure card immediately.
3. Critical test bug (review finding): `defer log.SetOutput(log.Writer())`
in three test sites evaluated log.Writer() when the defer statement
ran, which was AFTER the SetOutput swap, so it "restored" the captured
buffer to itself and never restored os.Stderr. Subsequent tests in the
package were running with the panic tests' captured buffer as their
writer. Extracted a captureLog(t) helper that captures `prev` BEFORE
the swap and restores it via t.Cleanup.
Plus: softened the "goroutine never started" comment in the concurrent
repro harness — the harness atomic-counts BEFORE the entry log fires, so
"never started" was misleading; the real failure mode is "entry log
renamed/removed or writer hijacked."
Verified: full handlers suite passes; drift gate passes (Platform Go CI
failure root-caused). Regression-injected the recover body again — both
panic tests still fail as expected, confirming the contract is gated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Goal: a deterministic, in-process reproduction of the prod incident
where 7 simultaneous claude-code provisions on the hongming tenant
produced ZERO log lines from any of the four documented exit paths.
Approach: stub CPProvisioner that records every Start() call,
sqlmock for the prepare flow, fire 7 goroutines concurrently against
provisionWorkspaceCP, then assert:
1. Entry log fired exactly 7 times (one per goroutine).
2. Stub Start() recorded all 7 distinct workspace IDs.
3. Each goroutine's entry log names its own workspace ID.
Result on staging head as of 2026-05-02: PASSES — meaning the
silent-drop class isn't reproducible against current head with stub
CP. Tenant hongming runs sha 76c604fb (725 commits behind staging),
so the bug is most likely already fixed upstream — hongming needs
a redeploy.
The test stays as a regression gate: any future refactor that
re-introduces silent goroutine swallow in the CP provision path
(rate-limit drop, channel-send-without-receiver, panic without
recover, etc.) trips it.
A safeWriter wraps the captured log buffer because raw
bytes.Buffer.Write isn't safe for concurrent goroutines — without
serialization the 7 entry-log lines interleave at byte boundaries
and the strings.Count assertion gets unreliable.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Issue #2486: 7 claude-code workspaces stuck in provisioning produced
NONE of the four documented exit-path log lines in
provisionWorkspaceCP — neither prepare-failed, nor start-failed, nor
persist-instance-id-failed, nor success. Operators couldn't tell
whether the goroutine ran at all.
Add an entry log at the top of provisionWorkspaceOpts +
provisionWorkspaceCP so a missing entry distinguishes "goroutine
never started" from "started but exited via an unlogged path."
Add logProvisionPanic at the same defer site so a panic inside
either provisioner doesn't (a) crash the whole workspace-server
process, taking every other tenant workspace with it, and (b)
silently leave the row in `provisioning` until the 10-min sweeper
fires. The recover persists status='failed' with a sanitized
panic-class message via a fresh 10s context (the goroutine's own
ctx may have been the one panicking).
Tests pin three contracts:
- no-op when no panic (otherwise every successful provision
emits a spurious log line)
- recovers + persists failed status on panic, with stack trace
- defense-in-depth: if the persist itself fails, log it instead
of leaving the operator with a recovered-panic log but no row
Regression-injected by neutering the recover() body — all three
tests fail until the recover + UPDATE path is restored.
This is observability + resilience only, not a root-cause fix
for #2486. The actual silent-drop class still needs reproduction
once the tenant is on a build that includes this entry log.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The wheel-side chat_history MCP tool advertises a `before_ts`
parameter for backward paging through long histories, and the docs
describe it as the canonical pagination knob — but the server
silently ignored it until now. Without this fix, an agent passing
before_ts to chat_history would always get the most-recent N rows
and pagination would be broken end-to-end.
Add `before_ts` query param parsed as RFC3339 at the trust boundary
and translated into a `created_at < $X` clause on the existing
builder. Mirrors the strict-inequality shape since_id uses for
forward paging (`created_at > cursorTime`) so paging across both
directions has consistent semantics.
Tests: 3 new branches (positive filter, composition with peer_id
into the canonical chat_history paging shape, RFC3339 rejection
across 4 malformed inputs including URL-encoded SQL injection).
Mutation-verified pre-commit; existing 9 activity tests still pass.
Reported by self-review on PR #2474.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Surfaces the conversation history with one specific peer for the
wheel-side chat_history MCP tool. The filter joins
(source_id = $X OR target_id = $X) so both inbound (peer was sender)
and outbound (peer was recipient) turns appear in the same view,
ordered by created_at, and composes with existing type/source/
since_secs/since_id/limit filters.
Validates peer_id as a UUID at the trust boundary so a malformed
caller can't smuggle SQL fragments via the parameter — the args are
bound but the explicit rejection gives the wheel a cleaner 400
signal than an empty list, and defends against any future code path
that might interpolate the value into a URL or another query.
Tests: 3 new branches (positive filter, composition with
type+source, UUID-shape rejection across 5 malformed inputs).
Mutation-verified: reverting activity.go fails all peer_id tests.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Option B PR-5. Canvas Config tab now exposes a Provider override input
that's adapter-driven from each runtime's template — no hardcoded
provider list in the canvas. PUT /workspaces/:id/provider on Save
when dirty; auto-restart suppression to avoid double-restart with
the model handler's own restart.
The dropdown's suggestion list comes from /templates →
runtime_config.providers (the field added in
molecule-ai-workspace-template-hermes PR #31). For templates that
haven't migrated to the explicit providers list yet, suggestions
derive from model[].id slug prefixes — still adapter-driven, just
inferred. This keeps existing templates working while platform team
migrates them one at a time.
workspace-server changes:
- Add Providers []string field to templateSummary JSON
- Parse runtime_config.providers in /templates handler
- 2 new tests pin the surfacing + omitempty behavior
canvas changes:
- Remove hardcoded PROVIDER_SUGGESTIONS constant
- Add provider/originalProvider state + PUT-on-save logic
- Add deriveProvidersFromModels() fallback helper
- Wire RuntimeOption.providers from /templates response
- 8 new tests pin the behavior end-to-end
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mirror of PUT /model. Stores the provider slug as the LLM_PROVIDER
workspace secret so the canvas can update model + provider
independently — a user might keep the same model alias and switch
providers (route through a different gateway), or vice versa.
Forcing both into one endpoint imposes a single Save+Restart per
change; two endpoints let canvas update each as the user picks.
Plumbs through the existing chain: secret-load → envVars → CP
req.Env → user-data env exports → /configs/config.yaml (after
controlplane PR #364 lands the heredoc append).
Tests: 5 new cases mirroring SetModel/GetModel exactly — default
empty response, DB error, upsert with restart trigger, empty-clears,
invalid-UUID rejection.
Part of: Option B PR-2 (#196) — workspace-server plumbs LLM_PROVIDER
Stack: PR-1 schema (#2441 merged)
PR-2 (this) ws-server endpoint
PR-3 (#364 open) CP user-data persistence
PR-4 (pending) hermes adapter consume
PR-5 (pending) canvas Provider dropdown
#2429 review finding. The 410-Gone path issues a follow-up
`SELECT updated_at` after detecting status='removed'. If that query
fails (workspace row deleted between the two queries, transient DB
error, etc.), `removedAt` stays as Go's zero time and the JSON body
emits `"removed_at": "0001-01-01T00:00:00Z"` — a misleading timestamp
the client has to know to ignore.
Now we branch on `removedAt.IsZero()` and emit `null` for the failed
path. The actionable signal (the 410 + hint) is unchanged; only the
timestamp shape gets cleaner.
Pinned by `TestWorkspaceGet_RemovedReturns410WithNullRemovedAtOnTimestampFetchFailure`,
which simulates the row vanishing via `sqlmock`'s `WillReturnError(sql.ErrNoRows)`.
The original `_RemovedReturns410` test now also asserts that the
happy-path timestamp is a non-null value (was just checking the key
existed).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Defense-in-depth at the endpoint level. Previously, GET /workspaces/:id
returned 200 OK with `status:"removed"` in the body for deleted
workspaces — silent-fail UX hit on the hongmingwang tenant 2026-04-30:
the channel bridge / molecule-mcp wheel had a dead workspace_id + token
in .env, get_workspace_info returned 200 → caller assumed everything
was fine, then every subsequent /registry/* call 401'd because tokens
were revoked, and operators had no idea their workspace was gone.
#2425 fixed the steady-state heartbeat path (escalate to ERROR after
3 consecutive 401s). This change is the startup-time defense — fail
loud when the operator first probes the workspace instead of waiting
for the heartbeat to sour.
The 410 body includes:
{error: "workspace removed", id, removed_at, hint: "Regenerate ..."}
Audit-trail consumers that need the body shape of a removed workspace
(admin views, "show me deleted workspaces" tooling) opt into the
legacy 200 + body via ?include_removed=true. Without this opt-in path
the audit trail becomes invisible at the API layer.
Two new tests pinned:
- TestWorkspaceGet_RemovedReturns410
- TestWorkspaceGet_RemovedWithIncludeQueryReturns200
Follow-ups in separate PRs:
- Update workspace/a2a_client.py get_workspace_info to surface
"removed" specifically rather than collapsing into "not found"
- Update channel bridge getWorkspaceInfo (server.ts) to detect 410
→ log clear "workspace was deleted, re-onboard" error
- Audit canvas/* + admin tooling consumers that may rely on the
legacy 200 + status:"removed" shape; switch them to the
?include_removed=true opt-in if needed
- Update docs (runtime-mcp.mdx Troubleshooting + external-agents.mdx
lifecycle table)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two fixes and a nit cleanup from /code-review-and-quality on PR #2445:
1. **KI-005 hierarchy check parity with /terminal**
HandleConnect runs the KI-005 cross-workspace guard before dispatch
(terminal.go:85-106): when X-Workspace-ID is set and != :id, validate
the bearer's workspace binding then call canCommunicateCheck. Without
this, an org-level token holder in tenant Foo can probe any
workspace's diagnostic state by guessing the UUID — same enumeration
vector KI-005 closed for /terminal in #1609. Per-workspace bearer
tokens are URL-bound by WorkspaceAuth, so the gap is org tokens
within the same tenant.
Fix: copy the same gate into HandleDiagnose, before the
instance_id SELECT.
Test: TestHandleDiagnose_KI005_RejectsCrossWorkspace stubs
canCommunicateCheck=false and confirms 403 fires before the DB
lookup (sqlmock's ExpectationsWereMet pins that we never reached
the SELECT COALESCE). Mirrors the existing
TestTerminalConnect_KI005_RejectsUnauthorizedCrossWorkspace.
2. **Race-free tunnel stderr capture (syncBuf)**
strings.Builder isn't goroutine-safe. os/exec spawns a background
goroutine that copies the subprocess's stderr fd to cmd.Stderr's
Write, so reading the buffer's String() from the request goroutine
on wait-for-port timeout while the tunnel may still be writing is
a data race that `go test -race` flags. Worst-case impact in
production is a garbled Detail string (not a crash), but the fix
is small.
Fix: wrap bytes.Buffer in a sync.Mutex (syncBuf type). Same
io.Writer interface, no API changes elsewhere.
3. **Nit cleanup**
- read-pubkey failure now reports as its own step name instead of
a duplicated "ssh-keygen" entry — disambiguates two different
failure modes that previously shared a name.
- Replaced the hand-rolled numToString int-to-string helper with
strconv.Itoa in the test (there was no import-saving reason to keep it).
Suite: 4 diagnose tests pass with -race; full handlers suite passes
in 3.95s. go vet clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
GET /workspaces/:id/terminal/diagnose runs the same per-stage pipeline as
/terminal (ssh-keygen → EIC send-key → tunnel → ssh) but non-interactively
and returns JSON. Each stage reports {name, ok, duration_ms, error,
detail}, plus a top-level first_failure naming the broken stage.
Why: when the canvas terminal silently disconnects ("Session ended" with
no error frame — the user-reported failure mode on hongmingwang's hermes
workspace), there is no remote-readable signal of WHICH stage failed.
The ssh client's stderr lives only in the workspace-server's stdout on
the tenant CP EC2 — invisible without shell access. /terminal can't
expose stderr cleanly because it has already upgraded to WebSocket
binary frames by the time ssh runs. /terminal/diagnose stays pure
HTTP/JSON, so the same auth (WorkspaceAuth + ADMIN_TOKEN fallback) gives
operators a one-call probe that splits "IAM broke" (send-ssh-public-key
fails) from "tunnel/SG broke" (wait-for-port fails) from "sshd auth
broke" (ssh-probe gets Permission denied) from "shell broke" (probe
exits non-zero with stderr).
Stages mirrored from handleRemoteConnect in terminal.go:
1. ssh-keygen ephemeral session keypair
2. send-ssh-public-key AWS EIC API push, IAM-gated
3. pick-free-port local port for the tunnel
4. open-tunnel aws ec2-instance-connect open-tunnel start
5. wait-for-port the tunnel actually listens (folds tunnel
stderr into Detail when it doesn't)
6. ssh-probe non-interactive `ssh ... 'echo MARKER'` that
confirms auth + bash + the marker round-trip
(CombinedOutput captures stderr verbatim —
this is the whole reason the endpoint exists)
Local Docker workspaces (no instance_id) get a smaller probe:
container-found + container-running. Same response shape so callers
don't need to branch.
Tests stub sendSSHPublicKey / openTunnelCmd / sshProbeCmd via the
existing package-level vars (same pattern as TestSSHCommandCmd_*) so
the test suite stays hermetic — no AWS, no network. The three new
tests pin: (a) routing to remote on instance_id present,
(b) routing to local on empty instance_id, (c) the operationally
critical case — full success through wait-for-port then a probe
failure surfaces ssh stderr in the ssh-probe step's Error/Detail
with first_failure="ssh-probe".
Auth: rides on existing WorkspaceAuth middleware. Operators with the
tenant ADMIN_TOKEN (fetched via /cp/admin/orgs/:slug/admin-token) can
probe any workspace without per-workspace token; same admin path as
the canvas dashboard reads workspace activity.
Response always returns HTTP 200 (success or step failure are both in
the JSON body) so callers don't need to branch on status code — the
endpoint either reports a first_failure or doesn't.
Resolves task #200, supports task #193 (workspace EC2 sshd
unresponsive — without this endpoint we couldn't pin the failure
stage from outside the tenant CP EC2).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The pre-existing TestSSHCommandCmd_BuildsArgv asserts the literal argv
slice. Adding `-o ConnectTimeout=10` shifted the slice — this commit
updates the snapshot to match. The new behavior-based
TestSSHCommandCmd_ConnectTimeoutPresent (added in the prior commit)
keeps the invariant pinned without depending on argv ordering, so
future tweaks land in only one place even if more options are added.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When the workspace EC2's sshd is unresponsive (mid-restart, SG drop,
AMI without ec2-instance-connect), the canvas's xterm shows the user's
typed bytes echoed back by the workspace-server's *local* PTY (cooked +
echo mode before ssh sets it raw post-handshake) and then closes
silently when Cloudflare's idle WebSocket timer fires (~100s) — with no
"Connection refused" or "Permission denied" output ever reaching the
user. This is what hongmingwang's hermes terminal looked like on 2026-04-30
right after the heartbeat-fix redeploy: status="online" but the shell
appeared dead.
Caught reproducibly by holding a fresh /workspaces/<id>/terminal
WebSocket open for 60s — server sent zero frames except the local-PTY
echo of one keystroke typed at t=8s. ssh was hung at handshake; bash
never saw the byte.
Fix: add `-o ConnectTimeout=10` to ssh args. Now the failure surfaces
as a real ssh error message in the terminal within 10s, instead of
masquerading as a silently dead shell over the next ~100s. Doesn't
diagnose *why* sshd isn't responding (separate investigation), but
it does mean the user gets actionable feedback within seconds.
Behavior-based regression test asserts `-o ConnectTimeout=N` is in the
ssh argv — pins presence, not the literal value, so operators can tune
without breaking the gate. Verified to FAIL on pre-fix code (matched
the literal arg pair) and PASS on fix.
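The presence check can be sketched like this — a helper of this shape (name illustrative) pins that some `-o ConnectTimeout=N` pair exists without freezing the value or the argv position:

```go
package main

import (
	"fmt"
	"strings"
)

// hasConnectTimeout reports whether the ssh argv carries any
// `-o ConnectTimeout=N` option pair, independent of N and of ordering,
// so operators can tune the value without breaking the gate.
func hasConnectTimeout(argv []string) bool {
	for i, a := range argv {
		if a == "-o" && i+1 < len(argv) &&
			strings.HasPrefix(argv[i+1], "ConnectTimeout=") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasConnectTimeout(
		[]string{"ssh", "-o", "ConnectTimeout=10", "host"})) // true
	fmt.Println(hasConnectTimeout([]string{"ssh", "host"})) // false
}
```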
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-fix Restart called provisioner.Stop / cpProv.Stop synchronously before
returning the HTTP response. CPProvisioner.Stop is DELETE /cp/workspaces/:id
→ CP → AWS EC2 terminate, which can exceed the canvas's 15s HTTP timeout,
especially right after a platform-wide redeploy when every tenant queues a
CP request at once. The user sees a misleading "signal timed out" red banner
on Save & Restart even though the async re-provision goroutine continues
and the workspace ends up online.
Caught 2026-04-30 on hongmingwang hermes workspace 32993ee7-…cb9d75d112a5
right after the heartbeat-fix platform redeploy at 02:11Z. The workspace
came back online correctly; only the canvas response timed out.
Fix moves Stop into the same goroutine as provisionWorkspaceCP /
provisionWorkspaceOpts. The handler now responds in <500ms (DB lookup +
status UPDATE only). Stop and provision keep their existing ordering
inside the goroutine. Uses context.Background() to detach from the request
lifecycle so an aborted client connection doesn't cancel the in-flight
Stop/provision pair.
Pinned by a behavior-based AST gate (workspace_restart_async_test.go):
the test parses workspace_restart.go and walks the Restart function body,
flagging any <recv>.{provisioner,cpProv}.Stop call that isn't nested in a
*ast.FuncLit. Same family as callsProvisionStart in
workspace_provision_shared_test.go. Verified the gate fails on the
pre-fix shape (flags lines 151 and 153 — the original sync Stop calls).
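The gate's core mechanic can be sketched with go/ast — flag any .Stop(...) call in Restart that is not nested inside a *ast.FuncLit. Names and the skip-the-literal trick are illustrative of the technique, not the test's literal code:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// syncStopLines returns the line of every .Stop(...) call inside func
// Restart that is NOT wrapped in a func literal — i.e. a synchronous call
// that would block the HTTP response.
func syncStopLines(src string) []int {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "x.go", src, 0)
	if err != nil {
		panic(err)
	}
	var lines []int
	for _, d := range f.Decls {
		fd, ok := d.(*ast.FuncDecl)
		if !ok || fd.Name.Name != "Restart" {
			continue
		}
		ast.Inspect(fd.Body, func(n ast.Node) bool {
			if _, isLit := n.(*ast.FuncLit); isLit {
				return false // inside a goroutine literal: async, allowed
			}
			if call, ok := n.(*ast.CallExpr); ok {
				if sel, ok := call.Fun.(*ast.SelectorExpr); ok && sel.Sel.Name == "Stop" {
					lines = append(lines, fset.Position(call.Pos()).Line)
				}
			}
			return true
		})
	}
	return lines
}

func main() {
	sync := "package p\nfunc (h *H) Restart() { h.provisioner.Stop() }"
	async := "package p\nfunc (h *H) Restart() { go func() { h.provisioner.Stop() }() }"
	fmt.Println(len(syncStopLines(sync)), len(syncStopLines(async))) // 1 0
}
```

Returning false from the ast.Inspect callback on a *ast.FuncLit skips its whole subtree, which is exactly the "nested in a goroutine literal is fine" rule.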
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Heartbeat now echoes the workspace's platform_inbound_secret on every
beat (mirroring /registry/register), and the molecule-mcp client
persists it to /configs/.platform_inbound_secret on receipt.
Symptom (2026-04-30, hongmingwang tenant): chat upload returned 503
"workspace will pick it up on its next heartbeat" and then 401 on
retry — permanent until workspace restart. The 503 message was a lie:
heartbeat used to discard the platform_inbound_secret entirely; only
register delivered it, and register fires once at startup.
Server (Go):
- Heartbeat handler reuses readOrLazyHealInboundSecret (the same
helper chat_files + register use), so heartbeat-time recovery
covers the rotate / mid-life NULL-column case the existing
register-time heal can't reach.
- Failure is non-fatal: liveness contract trumps secret delivery,
chat_files retries lazy-heal on its own next request.
Client (Python):
- _persist_inbound_secret_from_heartbeat parses the heartbeat 200
response and persists via platform_inbound_auth.save_inbound_secret.
- All exceptions swallowed — heartbeat liveness > secret persistence;
next tick (≤20s) retries.
Tests:
- Server: pin secret-present, lazy-heal-mint-on-NULL, and heal-
failure-omits-field branches.
- Client: pin persist-on-200, skip-on-empty, skip-on-non-dict-body,
skip-on-401, swallow-save-OSError.
The canvas tab snippet for the Universal MCP path was written before
this PR added the built-in register + heartbeat thread. Earlier wording
described it as "outbound-only — pair with the Claude Code or Python SDK
tab for heartbeat + inbound messages" — that's stale. molecule-mcp now
handles register + heartbeat itself; the only thing it doesn't yet do is
inbound A2A delivery.
Updated:
- externalUniversalMcpTemplate header comment + body — describes
standalone behavior, points operators at SDK/channel only when they
need INBOUND (not heartbeat).
- Drops the now-redundant curl-register step from the snippet — the
binary registers itself on startup.
- Canvas modal label likewise updated.
No runtime / behavior change; pure docs polish so a copy-pasting
operator's mental model matches what the binary actually does.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two paired fixes that together let an external operator run a single
process (molecule-mcp) and see their workspace come up online in the
canvas — the bug surfaced live when status stuck at "awaiting_agent /
OFFLINE" despite an active MCP server.
Platform side (workspace-server/internal/handlers/registry.go):
Heartbeat handler already auto-recovers offline → online and
provisioning → online, but NOT awaiting_agent → online. Healthsweep
flips stale-heartbeat external workspaces TO awaiting_agent, and
with no recovery path the workspace stays "OFFLINE — Restart" in the
canvas forever. Add the symmetric branch: if currentStatus ==
"awaiting_agent" and a heartbeat arrives, flip to online + broadcast
WORKSPACE_ONLINE. Mirrors the existing offline/provisioning patterns
exactly. Test: TestHeartbeatHandler_AwaitingAgentToOnline asserts
the SQL UPDATE fires with the awaiting_agent guard clause.
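The recovery table after the fix can be sketched as a pure function — illustrative only (the real handler does this inside a guarded SQL UPDATE), and statuses outside the three recovery sources are assumed untouched here:

```go
package main

import "fmt"

// nextStatusOnHeartbeat: offline and provisioning already recovered to
// online on heartbeat; the fix adds the symmetric awaiting_agent branch.
func nextStatusOnHeartbeat(current string) string {
	switch current {
	case "offline", "provisioning", "awaiting_agent":
		return "online"
	}
	return current
}

func main() {
	fmt.Println(nextStatusOnHeartbeat("awaiting_agent")) // online
	fmt.Println(nextStatusOnHeartbeat("online"))         // online (no-op)
}
```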
Wheel side (workspace/mcp_cli.py):
molecule-mcp was outbound-only — operators had to run a separate
SDK process to register + heartbeat. Now mcp_cli.main():
1. Calls /registry/register at startup (idempotent upsert flips
status awaiting_agent → online via the existing register path).
2. Spawns a daemon thread that POSTs /registry/heartbeat every
20s. 20s is comfortably under the healthsweep stale window so
a single missed beat doesn't cause status churn.
3. Runs the MCP stdio loop in the foreground.
Both calls set Origin: ${PLATFORM_URL} so the SaaS edge WAF accepts
them. Threaded heartbeat (not asyncio) chosen because it doesn't
need to share an event loop with the MCP stdio server — daemon=True
cleanly dies when the operator's runtime exits.
MOLECULE_MCP_DISABLE_HEARTBEAT=1 escape hatch lets in-container
callers (which have heartbeat.py running already) reuse the entry
point without double-heartbeating. Default is enabled.
End-to-end verification (live, against
hongmingwang.moleculesai.app, workspace 8dad3e29-...):
pre-fix: status=awaiting_agent → canvas shows OFFLINE forever
post-fix: ran `molecule-mcp` for 5s standalone → canvas state:
status=online runtime=external agent=molecule-mcp-8dad3e29
Test coverage: 7 new mcp_cli tests (register-at-startup, heartbeat-
thread-spawned, disable-env-skips-both, env-and-file token resolution,
register payload shape, heartbeat endpoint + headers); 1 new platform
test (awaiting_agent → online recovery). Full workspace + handlers
suites green: 1355 Python, full Go handlers passing.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The "Connect your external agent" dialog already covered Claude Code,
Python SDK, curl, and raw fields. This adds a Universal MCP tab that
documents the new `molecule-mcp` console script — the runtime-
agnostic baseline shipped by PR #2413's workspace-runtime changes.
Surface area:
- New `externalUniversalMcpTemplate` constant in workspace-server.
Three-step snippet: pip install runtime → one-shot register via curl
→ wire molecule-mcp into agent's MCP config (Claude Code example,
notes that hermes/codex/etc. take the same env-var contract).
- Workspace create response now includes `universal_mcp_snippet`
alongside the existing curl/python/channel snippets.
- Canvas modal renders the tab when `universal_mcp_snippet` is
present; backward-compatible with older platform builds (tab hides
when empty).
Origin/WAF coverage (the user explicitly asked for this):
- The runtime wheel handles Origin automatically (this PR's earlier
commit on platform_auth.auth_headers).
- The curl tab now sets `Origin: {{PLATFORM_URL}}` preemptively
with an explanatory comment; `/registry/register` is currently
WAF-allowed without it, but adding it now keeps the snippet working
if WAF rules expand. The comment also explains why
`/workspaces/*` paths return empty 404 without Origin — the
exact failure mode I hit while smoke-testing this PR live.
- The MCP snippet's footer notes that the wheel auto-handles
Origin so operators don't think about it.
End-to-end verification (against live tenant
hongmingwang.moleculesai.app, freshly registered workspace):
- get_workspace_info → full JSON
- list_peers → "Claude Code Agent (ID: 97ac32e9..., status: online)"
- recall_memory → "No memories found."
all returned by the molecule-mcp binary speaking MCP stdio to
this Claude Code session.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
POST /workspaces/:id/restart on a runtime=external workspace ran the full
re-provision pipeline (Stop → provisionWorkspace*), which calls
issueAndInjectToken → RevokeAllForWorkspace. For external workspaces
(operator-driven, no container/EC2) that silently destroyed the operator's
local bearer token on every "Restart" click in the canvas — the local
poller would then 401-spam against /activity until the operator manually
regenerated from the Tokens tab.
The auto-restart path (runRestartCycle, line 436) already short-circuits
runtime=external. This patch mirrors that for the manual handler so the
two paths agree, and surfaces a 200 OK with a clear message so the
canvas can tell the operator the fix is on their side rather than
silently no-op'ing.
Test coverage: TestRestartHandler_ExternalRuntimeNoOps asserts the
short-circuit fires *before* any DB write or provision call. sqlmock's
"unexpected query" failure mode would catch a regression that
re-introduced the token revoke or the status=provisioning UPDATE.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The /buildinfo route added in #2398 to verify each tenant runs the
published SHA was 404'd by TenantGuard on every production tenant —
the allowlist had /health, /metrics, /registry/register,
/registry/heartbeat, but not /buildinfo. The redeploy workflows
curl /buildinfo from a CI runner with no X-Molecule-Org-Id header,
TenantGuard 404'd them, gin's NoRoute proxied to canvas, canvas
returned its HTML 404 page, jq read empty git_sha, and the verifier
silently soft-warned every tenant as "unreachable" — which the
workflow doesn't fail on.
Confirmed externally:
curl https://hongmingwang.moleculesai.app/buildinfo
→ HTTP 404 + Content-Type: text/html (Next.js "404: This page
could not be found.") even though /health on the same host
returns {"status":"ok"} from gin.
The buildinfo package's own doc already declares /buildinfo public
by design ("Public is intentional: it's a build identifier, not
operational state. The same string is already published as
org.opencontainers.image.revision on the container image, so no new
info is exposed.") — the allowlist just missed it.
Pin the alignment in tenant_guard_test.go:
TestTenantGuard_AllowlistBypassesCheck now asserts /buildinfo
returns 200 without an org header alongside /health and /metrics,
so a future allowlist edit can't silently regress the verifier
again.
Closes the silent-success failure mode: stale tenants will now
show up as STALE (hard-fail) rather than UNREACHABLE (soft-warn).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-fix, cmd/server/main.go gated the entire health-sweep goroutine on
`prov != nil`. On SaaS tenants (`MOLECULE_ORG_ID` set) the local Docker
provisioner is never initialized — only `cpProv`. So the goroutine
never started, and `sweepStaleRemoteWorkspaces` (which transitions
runtime='external' workspaces from 'online' to 'awaiting_agent' when
their last_heartbeat_at goes stale) never ran.
Net effect on production: every external-runtime workspace on SaaS
that lost its agent stayed 'online' indefinitely instead of falling
back to 'awaiting_agent' (re-registrable). The drift gate (#2388)
caught the migration side and #2382 fixed the SQL writes, but this
orchestration-side gate slipped through both because there was no
SaaS-mode E2E coverage on the heartbeat-loss → awaiting_agent
transition.
Caught by #2392 (live staging external-runtime regression E2E)
failing at step 6 — 180s with no heartbeat, expected
status=awaiting_agent, got online.
Fix: drop the `if prov != nil` gate. `StartHealthSweep` already
handles nil checker correctly (healthsweep.go:50-71): the Docker
sweep is gated inside the loop, the remote sweep always runs. Test
coverage already exists at TestStartHealthSweep_NilCheckerRunsRemoteSweep.
After this lands and tenants redeploy, #2392 step 6 passes and the
regression coverage closes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the gap that let issue #2395 ship: redeploy-fleet workflows reported
ssm_status=Success based on SSM RPC return code alone, while EC2 tenants
silently kept serving the previous :latest digest because docker compose up
without an explicit pull is a no-op when the local tag already exists.
Wire:
- new buildinfo package exposes GitSHA, set at link time via -ldflags from
the GIT_SHA build-arg (default "dev" so test runs without ldflags fail
closed against an unset deploy)
- router exposes GET /buildinfo returning {git_sha} — public, no auth,
cheap enough to curl from CI for every tenant
- both Dockerfiles thread GIT_SHA into the Go build
- publish-workspace-server-image.yml passes GIT_SHA=github.sha for both
images
- redeploy-tenants-on-main.yml + redeploy-tenants-on-staging.yml curl each
tenant's /buildinfo after the redeploy SSM RPC and fail the workflow on
digest mismatch; staging treats both :latest and :staging-latest as
moving tags; verification is skipped only when an operator pinned a
specific tag via workflow_dispatch
Tests:
- TestGitSHA_DefaultDevSentinel pins the dev default
- TestBuildInfoEndpoint_ReturnsGitSHA pins the wire shape that the
workflow's jq lookup depends on
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two related fixes after self-review of #2396:
1. workspace_bootstrap.go:62 — `SET status = 'failed'` was missed in the
initial sweep. Now parameterized as $3 with models.StatusFailed.
Test fixed with the additional WithArgs sentinel.
2. Drift gate now scans production .go AST for hard-coded
`UPDATE workspaces … SET status = '<literal>'` and fails with
file:line. This catches the kind of miss the first commit just
fixed — the original migration-vs-codebase axis only verified
AllWorkspaceStatuses ⊆ enum, not "no raw literals in writes."
Verified the gate fires: dropped a synthetic 'failed' literal into
internal/handlers/_drift_sanity.go and confirmed the gate flagged
"internal/handlers/_drift_sanity.go:6 → SET status = 'failed'".
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Eliminate raw 'awaiting_agent'/'hibernating'/'failed'/etc string literals
from production status writes. Adds models.WorkspaceStatus typed alias and
models.AllWorkspaceStatuses canonical slice; every UPDATE workspaces SET
status = ... now passes a parameterized $N typed value rather than a
hard-coded SQL literal.
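The alias and canonical-slice pattern can be sketched as follows — the constant set here is a subset for illustration; the real models package enumerates every status:

```go
package main

import "fmt"

// WorkspaceStatus is the typed alias; writes pass it as a $N parameter:
//   db.Exec(`UPDATE workspaces SET status = $1 WHERE id = $2`, StatusFailed, id)
// so an enum mismatch fails at the SQL layer instead of hiding in a literal.
type WorkspaceStatus string

const (
	StatusOnline        WorkspaceStatus = "online"
	StatusAwaitingAgent WorkspaceStatus = "awaiting_agent"
	StatusHibernating   WorkspaceStatus = "hibernating"
	StatusFailed        WorkspaceStatus = "failed"
)

// AllWorkspaceStatuses is the canonical slice the drift gate checks against.
var AllWorkspaceStatuses = []WorkspaceStatus{
	StatusOnline, StatusAwaitingAgent, StatusHibernating, StatusFailed,
}

func main() {
	for _, s := range AllWorkspaceStatuses {
		fmt.Println(s)
	}
}
```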
Defense-in-depth follow-up to migration 046 (#2388): the Postgres enum
type was missing 'awaiting_agent' + 'hibernating' for ~5 days because
sqlmock regex matching cannot enforce live enum constraints. The drift
gate is now a proper Go AST + SQL parser (no regex), asserting the
codebase ⊆ migration enum and every const appears in the canonical
slice. With status as a parameterized typed value, future enum mismatches
fail at the SQL layer in tests, not silently in prod.
Test coverage: full suite passes with -race; drift gate green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Symmetric with the existing CPProvisionerAPI interface. Closes the
asymmetry where the SaaS provisioner field was an interface (mockable
in tests) but the Docker provisioner field was a concrete pointer
(not).
## Changes
- New `provisioner.LocalProvisionerAPI` interface — the 7 methods
WorkspaceHandler / TeamHandler call on h.provisioner today: Start,
Stop, IsRunning, ExecRead, RemoveVolume, VolumeHasFile,
WriteAuthTokenToVolume. Compile-time assertion confirms *Provisioner
satisfies it. Mirror of cp_provisioner.go's CPProvisionerAPI block.
- `WorkspaceHandler.provisioner` and `TeamHandler.provisioner`
re-typed from `*provisioner.Provisioner` to
`provisioner.LocalProvisionerAPI`. Constructor parameter type is
unchanged — the assignment widens to the interface, so the 200+
callers of `NewWorkspaceHandler` / `NewTeamHandler` are
unaffected.
- Constructors gain an `if p != nil` guard before assigning to the
interface field. Without this, `NewWorkspaceHandler(..., nil, ...)`
(the test fixture pattern across 200+ tests) yields a typed-nil
interface value where `h.provisioner != nil` evaluates *true*,
and the SaaS-vs-Docker fork incorrectly routes nil-fixture tests
into the Docker code path. Documented inline with reference to
the Go FAQ.
- Hardened the 5 Provisioner methods that lacked nil-receiver guards
(Start, ExecRead, WriteAuthTokenToVolume, RemoveVolume,
VolumeHasFile) — return ErrNoBackend on nil receiver instead of
panicking on p.cli dereference. Symmetric with Stop/IsRunning
(already hardened in #1813). Defensive cleanup so a future caller
that bypasses the constructor's nil-elision still degrades
cleanly.
- Extended TestZeroValuedBackends_NoPanic with 5 new sub-tests
covering the newly-hardened nil-receiver paths. Defense-in-depth:
a future refactor that drops one of the nil-checks fails red here
before reaching production.
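The typed-nil gotcha and the constructor guard can be demonstrated in isolation — one method shown, names simplified from the real types:

```go
package main

import "fmt"

type LocalProvisionerAPI interface{ IsRunning(id string) bool }

type Provisioner struct{}

func (p *Provisioner) IsRunning(string) bool { return p != nil }

type Handler struct{ provisioner LocalProvisionerAPI }

// NewHandler sketches the guard: assigning a nil *Provisioner straight into
// the interface field would produce a typed-nil interface value, for which
// `h.provisioner != nil` is true (the Go FAQ's nil-interface gotcha).
func NewHandler(p *Provisioner) *Handler {
	h := &Handler{}
	if p != nil { // elide the nil concrete pointer; field stays untyped-nil
		h.provisioner = p
	}
	return h
}

func main() {
	var naive Handler
	naive.provisioner = (*Provisioner)(nil)         // typed nil
	fmt.Println(naive.provisioner != nil)           // true — the trap
	fmt.Println(NewHandler(nil).provisioner != nil) // false — guarded
}
```

Without the guard, nil-fixture tests would take the Docker code path in the SaaS-vs-Docker fork, which is exactly the regression the inline comment warns about.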
## Why now
- Provisioner orchestration has been touched in #2366 / #2368 — the
interface symmetry is the natural follow-up captured in #2369.
- Future work (CP fleet redeploy endpoint, multi-backend
provisioners) wants this in place. Memory note
`project_provisioner_abstraction.md` calls out pluggable
backends as a north-star.
- Memory note `feedback_long_term_robust_automated.md` —
compile-time gates + ErrNoBackend symmetry > runtime panics.
## Verification
- `go build ./...` clean.
- `go test ./...` clean — 1300+ tests pass, including the
previously-flaky Create-with-nil-provisioner paths that now
exercise the constructor's nil-elision correctly.
- `go test ./internal/provisioner/ -run TestZeroValuedBackends_NoPanic
-v` — all 11 nil-receiver subtests green (was 6, +5 for the
newly-hardened methods).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Migration 043 (2026-04-25) introduced the workspace_status enum but
omitted two values application code had been writing for days, so every
UPDATE that tried to write either value failed silently in production:
'awaiting_agent' (since 2026-04-24, commit 1e8b5e01):
- handlers/workspace.go:333 — external workspace pre-register
- handlers/registry.go (via PR #2382) — liveness offline transition
- registry/healthsweep.go (via PR #2382) — heartbeat-staleness sweep
'hibernating' (since hibernation feature shipped):
- handlers/workspace_restart.go:271 — DB-level claim before stop
All four sites swallowed the enum-cast error. User-visible impact:
external workspaces never transition to a stale state when their agent
disconnects (canvas shows them stuck on 'online'/'degraded' indefinitely),
new external workspaces never advance past 'provisioning', and idle
workspaces never auto-hibernate (resources held forever).
PR #2382 didn't *cause* this — it inherited the gap and added two more
silent-fail paths on top. The pre-existing two had been broken for five
days and went unnoticed because:
1. sqlmock matches SQL by regex, not against the live enum constraint.
Every test passed despite the prod-only failure.
2. The handlers either drop the Exec error entirely (workspace.go:333)
or log+continue without an alert (the other three).
Fix in three pieces:
1. migrations/046_*.up.sql — ALTER TYPE workspace_status ADD VALUE
'awaiting_agent', 'hibernating'. IF NOT EXISTS makes it idempotent
across re-runs (RunMigrations re-applies until schema_migrations
records the file). ALTER TYPE ADD VALUE doesn't take a heavy lock
and commits immediately, safe under live traffic.
2. migrations/046_*.down.sql — full rename → recreate → cast → drop
recipe. Postgres has no DROP VALUE so this is the only honest
rollback. Pre-flights existing rows to compatible values
(awaiting_agent → offline, hibernating → hibernated) before the
type swap.
3. internal/db/workspace_status_enum_drift_test.go — static gate that
parses every UPDATE/INSERT against `workspaces` in workspace-server/
internal/, extracts every status literal, and asserts each is in
the enum union (CREATE TYPE + every ALTER TYPE ADD VALUE). The gate
runs in unit tests, no DB required, and would have caught both
omissions on the day they shipped. Pattern matches
feedback_behavior_based_ast_gates and feedback_mock_at_drifting_layer.
Verification:
- go test ./internal/db/ -count=1 -race ✓
- go vet ./... ✓
- Drift gate flips red if I delete either ADD VALUE from the migration
(validated via local mutation).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The "do request → check err → defer close → forward headers → set
status → io.Copy → log mid-stream errors" tail was duplicated between
Upload and Download. Each handler had ~12 lines that differed only in:
- the op label in log messages ("upload" vs "download")
- the set of response headers to forward verbatim
(Upload: Content-Type only; Download: Content-Type +
Content-Length + Content-Disposition)
Hoist into ChatFilesHandler.streamWorkspaceResponse(c, op,
workspaceID, forwardURL, req, forwardHeaders). Each call site
reduces to one line. Future changes — request-id forwarding,
observability metric, response-size cap, bytes-streamed log —
go in ONE place rather than two.
Same drift-prevention rationale as resolveWorkspaceForwardCreds
(#2372) and readOrLazyHealInboundSecret (#2376), applied to the
response-streaming layer of the same handlers.
Behavior preserved: existing TestChatUpload_* and TestChatDownload_*
integration tests (8 across both handlers) all pass unchanged. The
log message format is consistent across both handlers now (single
"chat_files {op}: ..." string template) — operators can grep one
prefix for both features instead of separate prefixes per handler.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Paired molecule-core change for the molecule-cli `molecule connect`
RFC (https://github.com/Molecule-AI/molecule-cli/issues/10).
After this PR an `external`-runtime workspace's full lifecycle
matches the operator-driven model: it boots in awaiting_agent, the
CLI connects in poll mode without operator-side flag tuning, the
heartbeat-loss path lands back on awaiting_agent (re-registrable)
instead of the terminal-feeling 'offline'.
Two changes in workspace-server:
1) `resolveDeliveryMode` (registry.go) now reads `runtime` alongside
`delivery_mode`. Resolution order:
a. payload.delivery_mode if non-empty (operator override)
b. row's existing delivery_mode if non-empty (preserves prior
registration)
c. **NEW:** "poll" if row.runtime = "external" — external
operators run on laptops without public HTTPS; push-mode
would hard-fail at validateAgentURL anyway. (`molecule connect`
registers without --mode and expects this default.)
d. "push" otherwise (historical default for platform-managed
runtimes — langgraph, hermes, claude-code, etc.)
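The resolution order a–d can be sketched as a pure function — the real code reads the row via SQL; here the inputs are plain strings:

```go
package main

import "fmt"

// resolveDeliveryMode mirrors the four-step resolution described above.
func resolveDeliveryMode(payloadMode, rowMode, runtime string) string {
	switch {
	case payloadMode != "": // a. operator override
		return payloadMode
	case rowMode != "": // b. preserve prior registration
		return rowMode
	case runtime == "external": // c. laptops without public HTTPS
		return "poll"
	default: // d. historical default for platform-managed runtimes
		return "push"
	}
}

func main() {
	fmt.Println(resolveDeliveryMode("", "", "external"))     // poll
	fmt.Println(resolveDeliveryMode("", "", "langgraph"))    // push
	fmt.Println(resolveDeliveryMode("push", "", "external")) // push (override wins)
}
```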
2) Heartbeat-loss for external workspaces lands them in
`awaiting_agent` instead of `offline`. Two code paths:
- `liveness.go` — Redis TTL expiration. Uses a CASE expression
so the conditional is one UPDATE (no extra round-trip for
non-external runtimes, no TOCTOU between runtime read and
status write).
- `healthsweep.go::sweepStaleRemoteWorkspaces` — DB-side
last_heartbeat_at age scan. This sweep is already external-
only by query filter, so the UPDATE just hard-codes the new
status.
The Docker-side `sweepOnlineWorkspaces` keeps `offline` —
recovery there is "restart the container", not "re-register from
the operator's box".
Why awaiting_agent over offline for external:
- Matches the status the workspace was created in (workspace.go:333).
- The CLI re-registers on every invocation; awaiting_agent → online
is the natural transition. offline is a terminal-feeling status
that implies operator intervention is needed.
- An operator who closed their laptop overnight should see
awaiting_agent in canvas, not 'offline (something is wrong)'.
Test plan:
- Existing: 9 `resolveDeliveryMode` test sites updated to the new
query shape. Sqlmock now reads `delivery_mode, runtime` columns.
- New: TestRegister_ExternalRuntime_DefaultsToPoll asserts the
external→poll branch. TestRegister_NonExternalRuntime_StillDefaultsToPush
guards against the new branch overshooting (langgraph keeps push).
- Liveness: regex updated to match the CASE expression.
- Healthsweep: `TestSweepStaleRemoteWorkspaces_MarksStaleAwaitingAgent`
(renamed for grep-ability), Docker-side sweepOnlineWorkspaces test
unchanged (verified to still match `'offline'`).
- Full handlers + registry suite green under -race (12.873s + 2.264s).
No migration needed — `status` is a free-form text column; both
'offline' and 'awaiting_agent' are existing values used elsewhere
(workspace.go uses awaiting_agent on initial external creation).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The helper landed in #2376 and is exercised via chat_files + registry
integration tests. Those tests conflate the helper's behavior with the
caller's response shape — a future refactor that broke the (secret,
healed, err) contract subtly (e.g. returning healed=true on a
read-success path, or swallowing a mint error) might still pass them.
Adds 4 direct sub-tests pinning each branch of the contract:
- secret already present → (s, false, nil)
- secret missing, mint succeeds → (minted, true, nil)
- secret missing, mint fails → ("", false, err)
- read fails (non-NoInboundSecret) → ("", false, err)
Each sub-case asserts the return tuple shape AND mock.ExpectationsWereMet
(for the success path) so a future helper change that skips a DB op
trips the gate immediately.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The lazy-heal-on-miss pattern landed in two places this session:
PR #2372 (chat_files.go::resolveWorkspaceForwardCreds — Upload + Download)
and PR #2375 (registry.go::Register). Both implementations did the same
thing:
read → if ErrNoInboundSecret then mint inline → return outcome
Different response-shape requirements but the same core mechanic. Three
sites' worth of drift potential: any future heal-time condition we add
(audit log, alert, secret rotation, observability) had to be applied to
each site, with partial application silently re-opening the gap.
Fix: extract readOrLazyHealInboundSecret in workspace_provision_shared.go
returning (secret, healed, err). Each caller maps the outcome to its
response shape:
- chat_files: healed=true → 503 with retry hint; err != nil → 503 with
RFC-#2312 reprovision hint
- registry: healed=true|false + err==nil → include in response;
err != nil → omit field (workspace can retry on next register)
Net effect:
- Single source of truth for the read+heal mechanic
- Response-shape decisions stay in callers (they DO differ per feature)
- Future heal-time conditions go in one place
- Behavior preserved: existing TestRegister_NoInboundSecret_LazyHeals,
TestRegister_NoInboundSecret_LazyHealMintFailureOmitsField,
TestChatUpload_NoInboundSecret_LazyHeal*,
TestChatDownload_NoInboundSecret_LazyHeal* all pass unchanged
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-fix: a legacy SaaS workspace with NULL platform_inbound_secret
needed two round-trips before chat upload worked:
1. Workspace registers → response missing platform_inbound_secret
2. User attempts chat upload → chat_files lazy-heals platform-side
(RFC #2312 backfill) → 503 + retry-after
3. Workspace heartbeats → register response now includes the
freshly-minted secret → workspace writes /configs/.platform_inbound_secret
4. User retries chat upload → workspace bearer matches → 200
The platform-side lazy-heal in chat_files.go (#2366) closes the
existing-workspace gap, but the user-visible round-trip dance is
still ugly.
Fix: lazy-heal at register time too. When ReadPlatformInboundSecret
returns ErrNoInboundSecret, mint inline and include the freshly-
minted secret in the register response. Collapses the dance to a
single round-trip:
1. Workspace registers → response includes lazy-healed secret
2. User attempts chat upload → workspace bearer matches → 200
Failure model: best-effort. Mint failure logs and falls through to
omitting the field (workspace will retry on next register call).
The 200 response status is preserved — register success doesn't
hinge on the inbound-secret heal.
Tests:
- TestRegister_NoInboundSecret_LazyHeals: pins the success branch.
Mocks the UPDATE explicitly + asserts ExpectationsWereMet, so a
regression that skipped the mint would fail loudly. Replaces
the prior TestRegister_NoInboundSecret_OmitsField which
"passed" on this branch only because sqlmock-unmatched-UPDATE
coincidentally drove the omit-field error path.
- TestRegister_NoInboundSecret_LazyHealMintFailureOmitsField:
pins the failure branch — explicit UPDATE error → 200 + field
absent.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
ValidateToken, WorkspaceFromToken, and ValidateAnyToken each duplicated
the same JOIN+WHERE auth predicate:
FROM workspace_auth_tokens t
JOIN workspaces w ON w.id = t.workspace_id
WHERE t.token_hash = $1
AND t.revoked_at IS NULL
AND w.status != 'removed'
Same drift class as the SaaS provision-mint bug fixed in #2366. A
future safety addition (e.g. exclude paused workspaces from auth) had
to be applied to all three queries; a partial application would
silently re-open one auth path while closing the others.
Fix: hoist the predicate into lookupTokenByHash, which projects
(id, workspace_id) — the union of fields any caller needs. Each
public function picks what it uses:
- ValidateToken — needs both (compares workspaceID, updates last_used_at by id)
- WorkspaceFromToken — needs workspace_id
- ValidateAnyToken — needs id
The trivial perf cost of selecting one extra column per call is worth
the single-source-of-truth guarantee for the auth predicate.
Test mock updates: three upstream test files (a2a_proxy_test, middleware
wsauth_middleware_test{,_canvasorbearer_test}) had hand-typed regex
matchers and row shapes pinned to the per-function SELECT projection.
Updated to the unified shape; behavior is unchanged.
All wsauth + middleware + handlers + full-module tests green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The admin test-token endpoint has a critical security check at
admin_test_token.go:64-72 — the IDOR fix from #112 that requires an
explicit ADMIN_TOKEN bearer when the env var is set. Pre-fix, the
route accepted ANY bearer that matched a live org token, allowing
cross-org test-token minting (and therefore cross-org workspace
authentication). The current code uses subtle.ConstantTimeCompare
against ADMIN_TOKEN.
Test coverage was zero. The existing tests exercised the
ADMIN_TOKEN-unset path (local dev / CI) but never set ADMIN_TOKEN.
A regression that:
- removed the os.Getenv("ADMIN_TOKEN") check
- inverted the comparison
- replaced ConstantTimeCompare with bytes.Equal (timing leak)
- re-introduced the AdminAuth fallback that allows org tokens
would not fail any test, and the breakage would re-open the IDOR
that #112 closed.
Adds four tests covering the gate matrix:
- ADMIN_TOKEN set + no Authorization header → 401
- ADMIN_TOKEN set + wrong Authorization → 401
- ADMIN_TOKEN set + correct Authorization → 200
- ADMIN_TOKEN unset + no Authorization → 200 (gate bypassed safely)
The 4-row matrix pins the gate's full truth table: any regression in
either dimension (gate enabled/disabled, header correct/wrong) trips
exactly one test.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The 50-line "resolve URL + read inbound secret + lazy-heal on miss"
block was duplicated nearly verbatim between Upload and Download
handlers. Drift-prone — same class of risk as the original SaaS
provision drift fixed in #2366. A future change like:
- secret rotation (re-mint when the row's older than X)
- per-feature audit logging
- additional fail-closed conditions
would have to be applied to both handlers, and a partial application
that healed Upload but skipped Download would surface only at runtime.
Fix: hoist the shared logic into resolveWorkspaceForwardCreds. The
function takes an op label ("upload"/"download") used in log messages
+ the 503 RFC-#2312 detail copy so operators can still distinguish
which feature ran. Both handlers reduce to:
wsURL, secret, ok := resolveWorkspaceForwardCreds(c, ctx, workspaceID, "upload")
if !ok { return }
Net -20 lines (helper amortizes the 50-line block across both call
sites). Existing test coverage (TestChatUpload_NoInboundSecret_*,
TestChatDownload_NoInboundSecret_* from PR #2370) covers all four
branches of the shared helper.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
#2367 moved PARENT_ID env injection from inline TeamHandler.Expand
into the shared prepareProvisionContext (sourced from
payload.ParentID). The test was missing — a regression that:
- dropped the injection
- inverted the nil-check
- leaked an empty PARENT_ID="" into env
would not fail any existing test, but workspace/coordinator.py reads
PARENT_ID on startup to track parent-child relationship, so the
breakage would surface only at runtime.
Adds TestPrepareProvisionContext_ParentIDInjection with three
sub-cases:
- nil ParentID → no PARENT_ID env
- empty-string ParentID → no PARENT_ID env (don't pollute)
- set ParentID → PARENT_ID env equals value
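The three sub-cases pin a small conditional that can be sketched like this (illustrative shape, not the real prepareProvisionContext signature):

```go
package main

import "fmt"

// injectParentID sketches the env injection: only a non-nil, non-empty
// ParentID produces a PARENT_ID entry — an empty string must not
// pollute the env.
func injectParentID(env map[string]string, parentID *string) {
	if parentID == nil || *parentID == "" {
		return
	}
	env["PARENT_ID"] = *parentID
}

func main() {
	env := map[string]string{}
	injectParentID(env, nil) // nil: no entry
	empty := ""
	injectParentID(env, &empty) // empty string: no entry
	id := "ws-123"
	injectParentID(env, &id) // set: entry added
	fmt.Println(env)
}
```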
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The 2026-04-30 lazy-heal fix in chat_files.go (PR #2366) ATTEMPTS to
mint platform_inbound_secret on miss so legacy workspaces self-heal
without requiring destructive reprovision. The pre-existing
TestChatUpload_NoInboundSecret + TestChatDownload_NoInboundSecret
tests asserted the 503 response shape but did NOT pin that the mint
UPDATE actually fires — they happened to exercise the mint-failure
branch (sqlmock unmatched UPDATE = error = "Failed to mint" code path
returns 503 with "RFC #2312" detail, which still passed the original
assertions).
This means a regression that:
- skipped the lazy-heal mint entirely
- inverted the success/failure response branches
- moved the mint to a different code path
would not fail those tests.
Fix:
- TestChatUpload_NoInboundSecret_LazyHeal: mock the UPDATE
successfully; assert sqlmock.ExpectationsWereMet (mint MUST run)
+ body contains "retry" + "30" (success branch).
- TestChatUpload_NoInboundSecret_LazyHealFailure: mock the UPDATE
to fail; assert body contains "Reprovision" (failure branch).
- Same pair for the Download handler — independent code path means
independent test.
Pins both branches of both handlers (4 tests) so future drift trips
the gate.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #2367.
TeamHandler.Expand provisioned child workspaces by directly calling
h.provisioner.Start, skipping mintWorkspaceSecrets and every other
preflight (secrets load, env mutators, identity injection, missing-env,
empty-config-volume auto-recover). Children shipped with NULL
platform_inbound_secret + never-issued auth_token — same drift class as
the SaaS bug just fixed in PR #2366, found while exercising a stronger
gate against this package.
Fix:
- TeamHandler now holds *WorkspaceHandler. Expand delegates each child
provision to wh.provisionWorkspace, picking up the shared
prepare/mint/preflight pipeline automatically. Future provision-time
steps go in ONE place and team-expand inherits them.
- prepareProvisionContext gains PARENT_ID env injection sourced from
payload.ParentID (which Expand now populates). This preserves the
signal workspace/coordinator.py reads on startup, without threading
env through provisioner.WorkspaceConfig manually.
- NewTeamHandler signature gains *WorkspaceHandler; router passes it.
Gate upgrade:
- TestProvisionFunctions_AllCallMintWorkspaceSecrets is now
behavior-based: it walks every FuncDecl in the package and flags any
function that calls h.provisioner.Start or h.cpProv.Start without
also calling mintWorkspaceSecrets. Drift-resistant by construction —
a future provision function with any name still trips the gate.
- Replaces the name-list version from PR #2366. The name list missed
Expand precisely because Expand wasn't named provision*; the
behavior-based detector caught it spontaneously when prototyped.
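A self-contained sketch of the detector's mechanics — the real gate ranges over the package's .go files, while this version parses a source string so it runs standalone (`driftingFuncs` and the sample source are illustrative):

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
)

// driftingFuncs walks every FuncDecl and flags any function that calls
// a .Start(...) method without also calling mintWorkspaceSecrets —
// drift-resistant by construction, since no name list is consulted.
func driftingFuncs(source string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "gate.go", source, 0)
	if err != nil {
		panic(err)
	}
	var flagged []string
	for _, decl := range f.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || fn.Body == nil {
			continue
		}
		callsStart, callsMint := false, false
		ast.Inspect(fn.Body, func(n ast.Node) bool {
			call, ok := n.(*ast.CallExpr)
			if !ok {
				return true
			}
			var name string
			switch fun := call.Fun.(type) {
			case *ast.SelectorExpr:
				name = fun.Sel.Name
			case *ast.Ident:
				name = fun.Name
			}
			switch name {
			case "Start":
				callsStart = true
			case "mintWorkspaceSecrets":
				callsMint = true
			}
			return true
		})
		if callsStart && !callsMint {
			flagged = append(flagged, fn.Name.Name)
		}
	}
	return flagged
}

func main() {
	src := `package handlers
func provisionWorkspace(h *H) { h.mintWorkspaceSecrets(nil); h.provisioner.Start(nil) }
func Expand(h *H) { h.provisioner.Start(nil) } // drift: skips the mint
`
	fmt.Println(driftingFuncs(src)) // flags only the function that skips the mint
}
```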
Tests: full workspace-server module green; gate previously verified to
fire red on Expand pre-fix and on deliberate mintWorkspaceSecrets
removal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause of 2026-04-30 silent-503 chat-upload bug: provisionWorkspaceCP
(SaaS) skipped issueAndInjectInboundSecret while provisionWorkspaceOpts
(Docker) called it. Every prod SaaS workspace provisioned with NULL
platform_inbound_secret → upload returned 503 with the v2-enrollment
message on every attempt.
Structural fix:
- Extract prepareProvisionContext (secrets load, env mutators, preflight,
cfg build), mintWorkspaceSecrets (auth_token + platform_inbound_secret),
markProvisionFailed (broadcast + DB update) into workspace_provision_shared.go
- Refactor both provision modes to call the shared helpers
- Add provisionAbort struct so the missing-env failure class can carry its
structured "missing" payload through the shared abort path
- Unify last_sample_error: previously the decrypt-fail path skipped it while
others set it; users now see every failure class in the UI
Drift prevention:
- AST gate TestProvisionFunctions_AllCallMintWorkspaceSecrets asserts every
function in the provisionFunctions set calls mintWorkspaceSecrets at least
once (same shape as the audit-coverage gate from #335). New provision paths
must either call mint or be added to provisionExemptFunctions with a
one-line justification
- Behavioral test TestMintWorkspaceSecrets_PersistsInboundSecretInSaaSMode
pins the contract: SaaS mode MUST persist platform_inbound_secret to the DB
column even though it skips file injection
Existing-workspace recovery (chat_files.go lazy-heal):
- Upload + Download handlers detect NULL platform_inbound_secret and call
IssuePlatformInboundSecret inline, returning 503 with retry_after_seconds=30
- Self-heals workspaces that were provisioned before this fix without
requiring destructive reprovision
Tests: full handlers + workspace-server module green; AST gate verified to
fire red on deliberate violation (commented-out mint call surfaces the
exact function name + actionable remediation message).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent review on PR #2362 caught: the dead-agent classifier at
a2a_proxy.go included 502/503/504/524 but missed the rest of the CF
origin-failure family (521/522/523), which are MORE indicative of a
dead EC2 than 524:
- 521 "Web server is down" — CF can't open TCP to origin (most direct
dead-EC2 signal; fires when the workspace EC2 has been terminated
and CF still has the CNAME pointing at it).
- 522 "Connection timed out" — TCP didn't complete in ~15s (typical
of SG/NACL flap or agent process hung on accept).
- 523 "Origin is unreachable" — CF can't route to origin (DNS gone,
network path broken).
Pre-fix any of these would propagate as-is to the canvas and the user
would see a 5xx without the reactive auto-restart firing — exactly
the SaaS-blind class of failure PR #2362 was meant to close.
Refactor: extracted isUpstreamDeadStatus(int) helper so the matrix is
in one place, with TestIsUpstreamDeadStatus locking in 18 status
codes (7 dead, 11 not-dead including 520 and 525 which look CF-shaped
but indicate different failures).
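The extracted matrix can be sketched as a single switch (a sketch of the helper's shape; the real one lives in a2a_proxy.go):

```go
package main

import "fmt"

// isUpstreamDeadStatus reports whether an upstream status indicates a
// likely-dead origin agent: the standard gateway family plus the
// Cloudflare origin-failure family. 520 (unknown error) and 525 (TLS
// handshake failure) look CF-shaped but indicate different failures,
// so they stay out of the dead set.
func isUpstreamDeadStatus(status int) bool {
	switch status {
	case 502, 503, 504, // standard gateway failures
		521, // CF: web server is down (can't open TCP to origin)
		522, // CF: connection timed out
		523, // CF: origin is unreachable
		524: // CF: origin timed out
		return true
	}
	return false
}

func main() {
	for _, s := range []int{502, 520, 521, 525} {
		fmt.Println(s, isUpstreamDeadStatus(s))
	}
}
```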
Also tightened TestStopForRestart_NoProvisioner_NoOp per the same
review: now uses sqlmock.ExpectationsWereMet to assert the dispatcher
doesn't touch the DB on the both-nil path.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Backward-compat replay gate for the A2A JSON-RPC protocol surface.
Every PR that touches normalizeA2APayload OR bumps the a2a-sdk
version pin runs every shape in testdata/a2a_corpus/ through the
current code and asserts:
valid/ — every shape MUST parse without error and produce a
canonical v0.3 payload (params.message.parts list).
invalid/ — every shape MUST be rejected with the documented
status code and error substring.
What this prevents
The 2026-04-29 v0.2 → v0.3 silent-drop bug (PR #2349) shipped
because the SDK bump PR didn't replay v0.2-shaped inputs against
the new code; the shape-mismatch surfaced only in production when
the receiver's Pydantic validator silently rejected inbound
messages.
This gate would have caught it pre-merge. Hand-verified: reverting
the v0.2 string→parts shim in normalizeA2APayload fails 3 of the
v0.2 corpus entries with the exact rejection class the production
bug exhibited.
Corpus contents (14 entries)
valid/ (11):
v0_2_string_content — basic v0.2 (the broken case)
v0_2_string_content_no_message_id — v0.2 + auto-fill messageId
v0_2_list_content — v0.2 with content as Part list
v0_3_parts_text_only — canonical v0.3
v0_3_parts_multi_text — multi-Part list
v0_3_parts_with_file — multimodal (text + file)
v0_3_parts_with_context — contextId for multi-turn
v0_3_streaming_method — message/stream variant
v0_3_unicode_text — emoji + multi-script
v0_3_long_text — 10KB text Part
no_jsonrpc_envelope — bare params/method without
outer envelope (legacy senders)
invalid/ (3):
no_content_or_parts — message has neither field
content_is_integer — wrong type for v0.2 content
content_is_bool — wrong type, kept separate from the integer
case so the failure message identifies
which type-class regressed
Plus 4 inline malformed-JSON cases (truncated, not-JSON, empty,
whitespace) that can't be expressed as JSON corpus entries.
Coverage tests
The gate has 5 test functions:
1. TestA2ACorpus_ValidShapesParse — replay valid/ corpus,
assert no error + canonical v0.3 output (parts list non-empty,
messageId non-empty, content field deleted).
2. TestA2ACorpus_InvalidShapesRejected — replay invalid/ corpus,
assert rejection matches recorded status + error substring.
3. TestA2ACorpus_MalformedJSONRejected — inline cases for
non-parseable bodies.
4. TestA2ACorpus_HasMinimumCoverage — at least one v0.2 +
one v0.3 entry exists (loses neither side of the bridge).
5. TestA2ACorpus_EveryEntryHasMetadata — _comment/_added/_source
on every entry per the README policy; _expect_error and
_expect_status on invalid entries.
Documentation
testdata/a2a_corpus/README.md describes the corpus contract:
- When to add entries (new SDK shape, new production-observed
shape).
- When NOT to add (test scaffolding, hypothetical futures).
- Removal policy (breaking change, deprecation window required).
Verification
- All 24 corpus subtests pass on current main.
- Hand-test: revert the v0.2 compat shim → 3 v0.2 entries fail
the gate with the exact rejection class the production bug
exhibited. Confirmed.
- Whole-module go test ./... green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Addresses code-review C1 (test goroutine race) and I2 (CF 524) on PR #2362.
C1: TestRunRestartCycle_SaaSPath_DispatchesViaCPProv invoked runRestartCycle
end-to-end, which spawns `go h.sendRestartContext(...)`. That goroutine
outlived the test, then read db.DB while the next test's setupTestDB wrote
to it — DATA RACE under -race, cascading 30+ failures across the handlers
suite. Refactored: extracted `stopForRestart(ctx, id)` from runRestartCycle
as a pure dispatcher, and rewrote the SaaS-path test to call it directly
(no async goroutine spawned). Added a no-provisioner no-op guard test.
I2: Cloudflare 524 ("origin timed out") now triggers maybeMarkContainerDead
alongside 502/503/504. Same upstream signal — origin agent unresponsive.
Verified `go test -race -count=1 ./internal/handlers/...` green locally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent review of #2362 caught a Critical gap: the previous commit
fixed the Stop dispatch in runRestartCycle but left the provisionWorkspace
dispatch unconditionally Docker-only. So on SaaS the auto-restart cycle
would Stop the EC2 successfully (good), then NPE inside provisionWorkspace's
`h.provisioner.VolumeHasFile` call. coalesceRestart's recover()-without-
re-raise (a deliberate platform-stability safeguard) silently swallowed
the panic, leaving the workspace permanently stuck in status='provisioning'
because the UPDATE on workspace_restart.go:450 had already run.
Net pre-amendment effect on SaaS: dead agent → structured 503 (good) →
workspace flipped to 'offline' (good) → cpProv.Stop succeeded (good) →
provisionWorkspace NPE swallowed (bad) → workspace permanently
'provisioning' until manual canvas restart. The headline claim of #2362
("SaaS auto-restart now works") was false on the path it shipped.
Fix: dispatch the reprovision call the same way every other call site
in the package does (workspace.go:431-433, workspace_restart.go:197+596) —
branch on `h.cpProv != nil` and call provisionWorkspaceCP for SaaS,
provisionWorkspace for Docker.
Tests:
- New TestRunRestartCycle_SaaSPath_DispatchesViaCPProv asserts cpProv.Stop
is called when the SaaS path runs (would have caught the NPE if
provisionWorkspace had been called instead).
- fakeCPProv updated: methods record calls and return nil/empty by
default rather than panicking. The previous "panic on unexpected call"
pattern was unsafe — the panic fires on the async restart goroutine
spawned by maybeMarkContainerDead AFTER the test assertions ran, so
the test passed by accident even though the production path was
broken (which is exactly how the Critical bug landed).
- Existing tests still pass (full handlers + provisioner suites green).
Branch-count audit refresh:
runRestartCycle dispatch decisions:
1. h.provisioner != nil → provisioner.Stop + provisionWorkspace ✓ (existing tests)
2. h.cpProv != nil → cpProv.Stop + provisionWorkspaceCP ✓ (NEW test)
3. both nil → coalesceRestart never called (RestartByID gate) ✓
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Class-of-bugs fix, surfaced on 2026-04-30 when hongmingwang.moleculesai.app's
canvas chat to a dead workspace returned a generic Cloudflare 502 page.
Three independent gaps in the reactive-health path together leaked
dead-agent failures to canvas with no auto-recovery.
## Bug 1 — maybeMarkContainerDead is a no-op for SaaS tenants
`maybeMarkContainerDead` only consulted `h.provisioner` (local Docker
provisioner). SaaS tenants set `h.cpProv` (CP-backed EC2 provisioner)
and leave `h.provisioner` nil — so the function early-returned false
on every call and dead EC2 agents never triggered the offline-flip /
broadcast / restart cascade.
Fix: extend `CPProvisionerAPI` interface with `IsRunning(ctx, id)
(bool, error)` (already implemented on `*CPProvisioner`; just needs
to surface on the interface). `maybeMarkContainerDead` now branches:
local-Docker path uses `h.provisioner.IsRunning`; SaaS path uses
`h.cpProv.IsRunning` which calls the CP's `/cp/workspaces/:id/status`
endpoint to read the EC2 state.
## Bug 2 — RestartByID short-circuits on `h.provisioner == nil`
Same shape as Bug 1: the auto-restart cascade triggered by
`maybeMarkContainerDead` calls `RestartByID` which short-circuited
when the local Docker provisioner was missing. So even if Bug 1 were
fixed, the workspace-offline state would never recover.
Fix: change the gate to `h.provisioner == nil && h.cpProv == nil`
and update `runRestartCycle` to branch on which provisioner is
wired for the Stop call. (The HTTP `Restart` handler already does
this branching correctly — we're just bringing the auto-restart path
to parity.)
## Bug 3 — upstream 502/503/504 propagated as-is, masked by Cloudflare
When the agent's tunnel returns 5xx (the "tunnel up but no origin"
shape — agent process dead but cloudflared connection still healthy),
`dispatchA2A` returns successfully at the HTTP layer with a 5xx body.
`handleA2ADispatchError`'s reactive-health path doesn't run because
that path is only triggered on transport-level errors. The pre-fix
code propagated the 502 status to canvas; Cloudflare in front of the
platform then masked the 502 with its own opaque "error code: 502"
page, hiding any structured response and any Retry-After hint.
Fix: in `proxyA2ARequest`, when the upstream returns 502/503/504, run
`maybeMarkContainerDead` BEFORE propagating. If IsRunning confirms
the agent is dead → return a structured 503 with restarting=true +
Retry-After (CF doesn't mask 503s the same way). If running, propagate
the original status (don't recycle a healthy agent on a transient
hiccup — it might have legitimately returned 502).
## Drive-by — a2aClient transport timeouts
a2aClient was `&http.Client{}` with no Transport timeouts. When a
workspace's EC2 black-holes TCP connects (instance terminated mid-flight,
SG flipped, NACL bug), the OS default is 75s on Linux / 21s on macOS —
long enough for Cloudflare's ~100s edge timeout to fire first and
surface a generic 502. Added DialContext (10s connect), TLSHandshake
(10s), and ResponseHeaderTimeout (60s). Client.Timeout DELIBERATELY
unset — that would pre-empt slow-cold-start flows (Claude Code OAuth
first-token, multi-minute agent synthesis). Long-tail body streaming
is still governed by per-request context deadline.
## Tests
- `TestMaybeMarkContainerDead_CPOnly_NotRunning` — IsRunning(false) →
marks workspace offline, returns true.
- `TestMaybeMarkContainerDead_CPOnly_Running` — IsRunning(true) →
no offline-flip, returns false (don't recycle a healthy agent).
- `TestProxyA2A_Upstream502_TriggersContainerDeadCheck` — agent server
returns 502 + cpProv reports dead → caller gets 503 with restarting=
true and Retry-After: 15.
- `TestProxyA2A_Upstream502_AliveAgent_PropagatesAsIs` — same upstream
502 but cpProv reports running → propagates 502 (existing behavior;
safety check that prevents over-eager recycling).
- Existing `TestMaybeMarkContainerDead_NilProvisioner` /
`TestMaybeMarkContainerDead_ExternalRuntime` still pass.
- Full handlers + provisioner test suites pass.
## Impact
Pre-fix: dead EC2 agent on a SaaS tenant → CF-masked 502 to canvas, no
auto-recovery, manual restart from canvas required.
Post-fix: dead EC2 agent on a SaaS tenant → structured 503 with
restarting=true + Retry-After to canvas, workspace flipped to offline,
auto-restart cycle triggered. Canvas can show a user-actionable
"agent is restarting, please wait" message instead of a generic 502.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Telegram getUpdates / Slack RTM shape: poll-mode workspaces pass the id
of the last activity_logs row they consumed, and the server returns rows
strictly after that id in chronological (ASC) order. Existing callers that don't
pass since_id keep DESC + most-recent-N — backwards-compatible.
Cursor lookup is scoped by workspace_id so a caller cannot enumerate or
peek at another workspace's events by passing a UUID belonging to a
different workspace. Cross-workspace and pruned cursors both return
410 Gone — no information leak (caller cannot distinguish "row never
existed" from "row exists but you can't see it").
since_id + since_secs both apply (AND). When since_id is set the order
flips to ASC because polling consumers need recorded-order; the recent-
feed shape (no since_id) keeps DESC.
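The ordering flip can be sketched as a query-builder branch — table and column names here are illustrative, and the real handler additionally scopes the cursor lookup by workspace_id and ANDs in the since_secs filter:

```go
package main

import "fmt"

// buildActivityQuery sketches the rule: the recent-feed shape (no
// since_id) keeps DESC + most-recent-N, while a cursor poll flips to
// ASC so polling consumers see rows in recorded order.
func buildActivityQuery(hasSinceID bool) string {
	q := "SELECT id, event, created_at FROM activity_logs WHERE workspace_id = $1"
	if hasSinceID {
		// $2 is the created_at of the cursor row, resolved first.
		return q + " AND created_at > $2 ORDER BY created_at ASC"
	}
	return q + " ORDER BY created_at DESC LIMIT $2"
}

func main() {
	fmt.Println(buildActivityQuery(false))
	fmt.Println(buildActivityQuery(true))
}
```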
Tests:
- TestActivityHandler_SinceID_ReturnsNewerASC — cursor lookup → main
query with cursorTime + ASC ordering.
- TestActivityHandler_SinceID_CursorNotFound_410 — pruned/unknown cursor.
- TestActivityHandler_SinceID_CrossWorkspaceCursor_410 — UUID belongs to
another workspace, scoped lookup hides it (same 410 path, no leak).
- TestActivityHandler_SinceID_CombinedWithSinceSecs — placeholder index
arithmetic with both filters.
Stacked on #2353 (PR 2: poll-mode short-circuit).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Skip SSRF/dispatch and queue to activity_logs for delivery_mode=poll
workspaces. The polling agent (e.g. molecule-mcp-claude-channel on an
operator's laptop) consumes via GET /activity?since_id= in PR 3 — no
public URL needed.
Order: budget -> normalize -> lookupDeliveryMode short-circuit ->
resolveAgentURL. Normalizing before the short-circuit keeps the
JSON-RPC method name on the activity_logs row so the polling agent
can dispatch correctly.
Fail-closed-to-push: any DB error reading delivery_mode defaults to
push (loud + recoverable) rather than poll (silent drop).
Tests:
- TestProxyA2A_PollMode_ShortCircuits_NoSSRF_NoDispatch — core invariant:
no resolveAgentURL, no Do(), records to activity_logs, returns 200
{status:"queued",delivery_mode:"poll",method:"message/send"}.
- TestProxyA2A_PushMode_NoShortCircuit — push path unaffected; the agent
server actually receives the request.
- TestProxyA2A_PollMode_FailsClosedToPush — DB error on mode lookup
must NOT silently queue; falls through to the push path.
Stacked on #2348 (PR 1: schema + register flow).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Hard gate #4: codified module boundaries as Go tests, so a new
contributor (or AI agent) can't silently land an import that crosses
a layer.
Boundaries enforced (one architecture_test.go per package):
- wsauth has no internal/* deps — auth leaf, must be unit-testable in
isolation
- models has no internal/* deps — pure-types leaf, reverse dep would
create cycles since most packages depend on models
- db has no internal/* deps — DB layer below business logic, must be
testable with sqlmock without spinning up handlers/provisioner
- provisioner does not import handlers or router — unidirectional
layering: handlers wires provisioner into HTTP routes; the reverse
is a cycle
Each test parses .go files in its package via go/parser (no x/tools
dep needed) and asserts forbidden import paths don't appear. Failure
messages name the rule, the offending file, and explain WHY the
boundary exists so the diff reviewer learns the rule.
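The go/parser approach can be sketched in a few lines — here parsing a source string so the example is self-contained, where the real tests parse each .go file in the package (function name and import paths are illustrative):

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"strings"
)

// forbiddenImports parses a file in ImportsOnly mode (cheap — no
// bodies) and returns every import that crosses a forbidden boundary
// prefix. No x/tools dependency needed.
func forbiddenImports(src string, forbidden ...string) []string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "pkg.go", src, parser.ImportsOnly)
	if err != nil {
		panic(err)
	}
	var bad []string
	for _, imp := range f.Imports {
		path := strings.Trim(imp.Path.Value, `"`)
		for _, prefix := range forbidden {
			if strings.HasPrefix(path, prefix) {
				bad = append(bad, path)
			}
		}
	}
	return bad
}

func main() {
	src := `package wsauth
import (
	"errors"
	"example.com/server/internal/orgtoken"
)`
	fmt.Println(forbiddenImports(src, "example.com/server/internal/"))
}
```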
Note: the original issue's first two proposed boundaries
(provisioner-no-DB, handlers-no-docker) don't match the codebase
today — provisioner already imports db (PR #2276 runtime-image
lookup) and handlers hold *docker.Client directly (terminal,
plugins, bundle, templates). I picked the four boundaries that
actually hold; the first two are aspirational and would need a
refactor before they could be codified.
Hand-tested by injecting a deliberate wsauth -> orgtoken violation:
the gate fires red with the rule message before merge.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #2345.
## Symptom
Design Director silently dropped A2A briefs whose sender used the
v0.2 message format (`params.message.content` string) instead of v0.3
(`params.message.parts` part-list). The downstream a2a-sdk's v0.3
Pydantic validator rejected with "params.message.parts — Field
required" but the rejection only landed in tenant-side logs; the
sender saw HTTP 200/202 and assumed delivery.
UX Researcher therefore never received the kickoff. Multi-agent
pipeline silently idle.
## Fix
Convert at the proxy edge in normalizeA2APayload. Two cases handled,
one explicitly rejected:
v0.2 string content → wrap as [{kind: text, text: <content>}]
(the canonical v0.2 case from the dogfooding
report)
v0.2 list content → preserve list as parts (some older clients
put a list under `content`; treat as "client
meant parts, used wrong field name")
v0.3 parts present → no-op (hot path for normal traffic)
Neither present → return HTTP 400 with structured JSON-RPC
error pointing at the missing field
Why at the proxy edge: every workspace gets the compat for free
without each one bumping a2a-sdk separately. The SDK's own compat
adapter is strict about `parts` and rejects v0.2 senders.
Why reject loud on missing-both: pre-fix the SDK's Pydantic
rejection was post-handler-dispatch and invisible to the original
sender. Now misshapen payloads return a structured 400 to the actual
caller — kills the entire silent-drop class for this payload-shape
category.
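The conversion table above can be sketched as a function over the decoded params.message map — names here are illustrative, and the real normalizeA2APayload also handles the JSON-RPC envelope, auto-fills messageId, and turns the error into a structured 400:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalizeMessage sketches the edge conversion: v0.2 string content
// is wrapped as a single text part, v0.2 list content is preserved as
// parts, v0.3 parts pass through untouched, and neither-present is
// rejected loudly.
func normalizeMessage(msg map[string]any) error {
	if _, ok := msg["parts"]; ok {
		return nil // v0.3 already: hot path, no-op
	}
	switch c := msg["content"].(type) {
	case string:
		// v0.2 string content → canonical single text part
		msg["parts"] = []any{map[string]any{"kind": "text", "text": c}}
	case []any:
		// v0.2 list content → client meant parts, used wrong field name
		msg["parts"] = c
	default:
		return fmt.Errorf("message has neither parts nor usable content")
	}
	delete(msg, "content")
	return nil
}

func main() {
	msg := map[string]any{"content": "hello"}
	if err := normalizeMessage(msg); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(msg)
	fmt.Println(string(out))
}
```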
## Tests
7 new cases on normalizeA2APayload (#2345) + 1 fixture update on the
existing _MissingMethodReturnsEmpty test:
TestNormalizeA2APayload_ConvertsV02StringContentToParts
TestNormalizeA2APayload_ConvertsV02ListContentToParts
TestNormalizeA2APayload_PreservesV03Parts (hot path)
TestNormalizeA2APayload_RejectsMessageWithNeitherContentNorParts
TestNormalizeA2APayload_RejectsContentWithUnsupportedType
TestNormalizeA2APayload_NoMessageNoCheck (e.g. tasks/list bypasses)
All 11 normalizeA2APayload tests pass + full handler suite (no
regressions).
## Refs
Hard-gates discussion: this is exactly the class of failure
(silent-drop on schema mismatch) that #2342 (continuous synthetic
E2E) would catch automatically. Tier 2 RFC item from #2345 (caller
gets structured JSON-RPC error on parse failure) is delivered above
via the loud-reject path.
Adds workspaces.delivery_mode (push, default | poll) and lets the register
handler accept poll-mode workspaces with no URL. This is the foundation
for the unified poll/push delivery design in #2339 — Telegram-getUpdates
shape for external runtimes that have no public URL.
What this PR does:
- Migration 045: NOT NULL TEXT column, default 'push', CHECK constraint
on the two valid values.
- models.Workspace + RegisterPayload + CreateWorkspacePayload gain a
DeliveryMode field. RegisterPayload.URL drops the `binding:"required"`
tag — the handler now enforces it conditionally on the resolved mode.
- Register handler: validates explicit delivery_mode if set; resolves
effective mode (payload value, else stored row value, else push) AFTER
the C18 token check; validates URL only when effective mode is push;
persists delivery_mode in the upsert; returns it in the response;
skips URL caching when payload.URL is empty.
- CreateWorkspace handler: persists delivery_mode (defaults to push) in
the same INSERT, validates it before any side effects.
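The mode resolution and conditional URL check can be sketched as follows (illustrative helper names, not the real handler):

```go
package main

import "fmt"

// effectiveMode sketches the resolution order: payload value, else
// stored row value, else the "push" default.
func effectiveMode(payloadMode, storedMode string) string {
	if payloadMode != "" {
		return payloadMode
	}
	if storedMode != "" {
		return storedMode
	}
	return "push"
}

// validateRegister sketches the conditional URL rule: URL is required
// only when the effective mode resolves to push; poll-mode workspaces
// may register with no URL at all.
func validateRegister(payloadMode, storedMode, url string) error {
	mode := effectiveMode(payloadMode, storedMode)
	if mode != "push" && mode != "poll" {
		return fmt.Errorf("invalid delivery_mode %q", mode)
	}
	if mode == "push" && url == "" {
		return fmt.Errorf("url required for push-mode workspaces")
	}
	return nil
}

func main() {
	fmt.Println(validateRegister("poll", "", "")) // poll: empty URL accepted
	fmt.Println(validateRegister("", "", ""))     // defaults to push: URL required
}
```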
What this PR does NOT do (intentional, follow-up PRs):
- PR 2: short-circuit ProxyA2A for poll-mode workspaces (skip SSRF +
dispatch, log a2a_receive activity, return 200).
- PR 3: since_id cursor on GET /activity for lossless polling.
- Plugin v0.2 in molecule-mcp-claude-channel: cursor persistence + a
register helper that creates poll-mode workspaces.
Backwards compatibility: every existing workspace stays push-mode (schema
default) with identical behavior. New tests:
TestRegister_PollMode_AcceptsEmptyURL,
TestRegister_PushMode_RejectsEmptyURL,
TestRegister_InvalidDeliveryMode,
TestRegister_PollMode_PreservesExistingValue. All existing register +
create tests updated to expect the new delivery_mode column in the
INSERT args.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The header comment claimed:
"file upload (HTTP-forward) + download (Docker-exec)"
and:
"Download still uses the v1 docker-cp path; migrating it lives in
the next PR in this stack"
Both wrong now. RFC #2312 PR-D landed the Download HTTP-forward path:
chat_files.go:336 builds an http.NewRequestWithContext to
${wsURL}/internal/file/read?path=<abs>, with the response streamed
back to the caller. The workspace-side Starlette handler is at
workspace/internal_file_read.py, mounted at workspace/main.py:440.
Update the header to reflect actual code: both upload + download are
HTTP-forward, share the same per-workspace platform_inbound_secret
auth, and work uniformly on local Docker and SaaS EC2.
Pure docs change — no behavior, no build/test impact.
Closes the observability gap surfaced in #2329 item 5: callers received
queue_id in the 202 enqueue response but had no public lookup. The only
existing observability path was check_task_status (delegation-flavored
A2A only — joins via request_body->>'delegation_id'). Cross-workspace
peer-direct A2A had no observability after enqueue.
This PR ships RFC #2331's Tier 1: minimum viable observability + caller-
specified TTL. No schema migration — expires_at column already exists
(migration 042); only DequeueNext was honoring it, with no caller path
to populate it.
Two changes:
1. extractExpiresInSeconds(body) — new helper mirroring
extractIdempotencyKey/extractDelegationIDFromBody. Pulls
params.expires_in_seconds from the JSON-RPC body. Zero (the unset
default) preserves today's infinite-TTL semantics. EnqueueA2A grew
an expiresAt *time.Time parameter; the proxy callsite computes
*time.Time from the extracted seconds and threads it through to
the INSERT.
2. GET /workspaces/:id/a2a/queue/:queue_id — new public handler.
Auth: caller's workspace token must match queue.caller_id OR
queue.workspace_id, OR be an org-level token. 404 (not 403) on
auth failure to avoid leaking queue_id existence. Response
includes status/attempts/last_error/timestamps/expires_at; embeds
response_body via LEFT JOIN against activity_logs when status=
completed for delegation-flavored items.
What this does NOT change:
- Drain semantics (heartbeat-driven dispatch).
- Native-session bypass (claude-agent-sdk, hermes still skip queue).
- Schema (column already exists).
- MCP tools (delegate_task_async / check_task_status keep their
contract; this is a parallel queue-id surface).
Tests:
- 7 cases on extractExpiresInSeconds covering absent/positive/
zero/negative/invalid-JSON/wrong-type/empty-params.
- go vet + go build clean.
- Full handlers test suite passes (no regressions from the
EnqueueA2A signature change — only one production caller).
Tier 2 (cross-workspace stitch + webhook callback) and Tier 3
(controllerized lifecycle) deferred per RFC #2331.
External callers (third-party SDKs, the channel plugin) authenticate
purely via bearer and frequently don't set the X-Workspace-ID header.
Without this, activity_logs.source_id ends up NULL — breaking the
peer_id signal on notifications, the "Agent Comms by peer" canvas tab,
and any analytics that breaks down inbound A2A by sender.
The bearer is the authoritative caller identity per the wsauth contract
(it's what proves who you are); the header is a display/routing hint
that must agree with it. So we derive callerID from the bearer's owning
workspace whenever the header is absent. The existing validateCallerToken
guard fires after this and enforces token-to-callerID binding the same
way it always has.
Org-token requests are skipped — those grant org-wide access and don't
bind to a single workspace, so the canvas-class semantics (callerID="")
are preserved. Bearer-resolution failures (revoked, removed workspace)
fall through to canvas-class as well, never 401.
New wsauth.WorkspaceFromToken exposes the bearer→workspace lookup as a
modular interface; mirrors ValidateAnyToken's defense-in-depth JOIN on
workspaces.status != 'removed'.
Tests: 4 unit tests on WorkspaceFromToken + 3 integration tests on
ProxyA2A covering the three observable paths (bearer-derived,
org-token skipped, derive-failure fallthrough).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mirrors PR-C's Upload migration: replaces the docker-cp tar-stream
extraction with a streaming HTTP GET to the workspace's own
/internal/file/read endpoint. Closes the SaaS gap for downloads —
without this PR, GET /workspaces/:id/chat/download still returns 503
on Railway-hosted SaaS even after A+B+C+F land.
Stacks: PR-A #2313 → PR-B #2314 → PR-C #2315 → PR-F #2319 → this PR.
Why a single broad /internal/file/read instead of /internal/chat/download:
Today's chat_files.go::Download already accepts paths under any of the
four allowed roots {/configs, /workspace, /home, /plugins} — it's not
strictly chat. Future PRs (template export, etc.) will reuse this
endpoint via the same forward pattern; reusing avoids three near-
identical handlers (one per domain) with duplicated path-safety logic.
Path safety is duplicated on platform + workspace sides — defence in
depth via two parallel checks, not "trust the workspace."
Changes:
* workspace/internal_file_read.py — Starlette handler. Validates path
(must be absolute, under allowed roots, no traversal, canonicalises
cleanly). lstat (not stat) so a symlink at the path doesn't redirect
the read. Streams via FileResponse (no buffering). Mirrors Go's
contentDispositionAttachment for Content-Disposition header.
* workspace/main.py — registers GET /internal/file/read alongside the
POST /internal/chat/uploads/ingest from PR-B.
* scripts/build_runtime_package.py — adds internal_file_read to
TOP_LEVEL_MODULES so the publish-runtime cascade rewrites its
imports correctly. Also includes the PR-B additions
(internal_chat_uploads, platform_inbound_auth) since this branch
was rooted before PR-B's drift-gate fix; merge-clean alphabetic
additions.
* workspace-server/internal/handlers/chat_files.go — Download
rewritten as streaming HTTP GET forward. Resolves workspace URL +
platform_inbound_secret (same shape as Upload), builds GET request
with path query param, propagates response headers (Content-Type /
Content-Length / Content-Disposition) + body. Drops archive/tar
+ mime imports (no longer needed). Drops Docker-exec branch entirely
— Download is now uniform across self-hosted Docker and SaaS EC2.
* workspace-server/internal/handlers/chat_files_test.go — replaces
TestChatDownload_DockerUnavailable (stale post-rewrite) with 4
new tests:
- TestChatDownload_WorkspaceNotInDB → 404 on missing row
- TestChatDownload_NoInboundSecret → 503 on NULL column
(with RFC #2312 detail in body)
- TestChatDownload_ForwardsToWorkspace_HappyPath → forward shape
(auth header, GET method, /internal/file/read path) + headers
propagated + body byte-for-byte
- TestChatDownload_404FromWorkspacePropagated → 404 from
workspace propagates (NOT remapped to 500)
Existing TestChatDownload_InvalidPath path-safety tests preserved.
* workspace/tests/test_internal_file_read.py — 21 tests covering
_validate_path matrix (absolute, allowed roots, traversal, double-
slash, exact-match-on-root), 401 on missing/wrong/no-secret-file
bearer, 400 on missing path/outside-root/traversal, 404 on missing
file, happy-path streaming with correct Content-Type +
Content-Disposition, special-char escaping in Content-Disposition,
symlink-redirect-rejection (lstat-not-stat protection).
Test results:
* go test ./internal/handlers/ ./internal/wsauth/ — green
* pytest workspace/tests/ — 1292 passed (was 1272 before PR-D)
Refs #2312 (parent RFC), #2308 (chat upload+download 503 incident).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the SaaS-side gap that PR-A acknowledged but didn't fix: SaaS
workspaces have no persistent /configs volume, so the platform_inbound_secret
that PR-A's provisioner wrote at workspace creation never reaches the
runtime. Without this, even after the entire RFC #2312 stack lands,
SaaS chat upload would 401 (workspace fails-closed when /configs/.platform_inbound_secret
is missing).
Solution: return the secret in the /registry/register response body
on every register call. The runtime extracts it and persists to
/configs/.platform_inbound_secret at mode 0600. Idempotent — Docker-
mode workspaces also receive it and overwrite the value the provisioner
already wrote (same value until rotation).
Why on every register, not just first-register:
* SaaS containers can be restarted (deploys, drains, EBS detach/
re-attach) — /configs is rebuilt empty on each fresh start.
* The auth_token is "issue once" because re-issuing rotates and
invalidates the previous one. The inbound secret has no rotation
flow yet (#2318) so re-sending the same value is harmless.
* Eliminates the bootstrap window where a restarted SaaS workspace
has no inbound secret on disk and would 401 every platform call.
Changes:
* workspace-server/internal/handlers/registry.go — Register handler
reads workspaces.platform_inbound_secret via wsauth.ReadPlatformInboundSecret
and includes it in the response body. Legacy workspaces (NULL
column) get a successful registration with the field omitted.
* workspace-server/internal/handlers/registry_test.go — two new tests:
- TestRegister_ReturnsPlatformInboundSecret_RFC2312_PRF: secret
present in DB → secret in response, alongside auth_token.
- TestRegister_NoInboundSecret_OmitsField: NULL column → field
omitted, registration still 200.
* workspace/platform_inbound_auth.py — adds save_inbound_secret(secret).
Atomic write via tmp + os.replace, mode 0600 from os.open(O_CREAT,
0o600) so a concurrent reader never sees 0644-default. Resets the
in-process cache after write so the next get_inbound_secret() returns
the freshly-written value (rotation-safe when it lands).
* workspace/main.py — register-response handler extracts
platform_inbound_secret alongside auth_token and persists via
save_inbound_secret. Mirrors the existing save_token pattern.
* workspace/tests/test_platform_inbound_auth.py — 6 new tests for
save_inbound_secret: writes file, mode 0600, overwrite-existing,
cache invalidation after save, empty-input no-op, parent-dir creation
for fresh installs.
Test results:
* go test ./internal/handlers/ ./internal/wsauth/ — all green
* pytest workspace/tests/ — 1272 passed (was 1266 before this PR)
Refs #2312 (parent RFC), #2308 (chat upload 503 incident).
Stacks: PR-A #2313 → PR-B #2314 → PR-C #2315 → this PR.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Self-review found the original draft of this PR added forward-time
validateAgentURL() as defense-in-depth — paranoia layer on top of the
existing register-time gate. The validator unconditionally blocks
loopback (127.0.0.1/8), which makes httptest-based proxy tests
impossible without an env-var hatch I'd rather not add to a security-
critical path on first pass.
Trust note kept inline pointing at the upstream gate + tracking issue
so the gap is explicit, not invisible.
Refs #2312.
Closes the SaaS upload gap (#2308) with the unified architecture from
RFC #2312: same code path on local Docker and SaaS, no Docker socket
dependency, no `dockerCli == nil` cliff. Stacked on PR-A (#2313) +
PR-B (#2314).
Before:
Upload → findContainer (nil in SaaS) → 503
After:
Upload → resolve workspaces.url + platform_inbound_secret
→ stream multipart to <url>/internal/chat/uploads/ingest
→ forward response back unchanged
Same call site whether the workspace runs on local docker-compose
("http://ws-<id>:8000") or SaaS EC2 ("https://<id>.<tenant>...").
The bug behind #2308 cannot exist by construction.
Why streaming, not parse-then-re-encode:
* No 50 MB intermediate buffer on the platform
* Per-file size + path-safety enforcement is the workspace's job
(see workspace/internal_chat_uploads.py, PR-B)
* Workspace's error responses (413 with offending filename, 400 on
missing files field, etc.) propagate through unchanged
Changes:
* workspace-server/internal/handlers/chat_files.go — Upload rewritten
as a streaming HTTP proxy. Drops sanitizeFilename, copyFlatToContainer,
and the entire docker-exec path. ChatFilesHandler gains an httpClient
(broken out for test injection). Download stays docker-exec for now;
follow-up PR will migrate it to the same shape.
* workspace-server/internal/handlers/chat_files_external_test.go —
deleted. Pinned the wrong-headed runtime=external 422 gate from
#2309 (already reverted in #2311). Superseded by the proxy tests.
* workspace-server/internal/handlers/chat_files_test.go — replaced
sanitize-filename tests (now in workspace/tests/test_internal_chat_uploads.py)
with sqlmock + httptest proxy tests:
- 400 invalid workspace id
- 404 workspace row missing
- 503 platform_inbound_secret NULL (with RFC #2312 detail)
- 503 workspaces.url empty
- happy-path forward (asserts auth header, content-type forwarded,
body streamed, response propagated back)
- 413 from workspace propagated unchanged (NOT remapped to 500)
- 502 on workspace unreachable (connect refused)
Existing Download + ContentDisposition tests preserved.
* tests/e2e/test_chat_upload_e2e.sh — single-script-everywhere E2E.
Takes BASE as env (default http://localhost:8080). Creates a
workspace, waits for online, mints a test token, uploads a fixture,
reads it back via /chat/download, asserts content matches +
bearer-required. Same script runs against staging tenants (set
BASE=https://<id>.<tenant>.staging.moleculesai.app).
Test plan:
* go build ./... — green
* go test ./internal/handlers/ ./internal/wsauth/ — green (full suite)
* tests/e2e/test_chat_upload_e2e.sh against local docker-compose
after PR-A + PR-B + this PR all merge — TODO before merge
Refs #2312 (parent RFC), #2308 (chat upload 503 incident).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Foundation for the HTTP-forward architecture that replaces Docker-exec
in chat upload + 5 follow-on handlers. This PR is intentionally scoped
to schema + token mint + provisioner wiring; no caller reads the secret
yet so behavior is unchanged.
Why a second per-workspace bearer (not reuse the existing
workspace_auth_tokens row):
workspace_auth_tokens            workspaces.platform_inbound_secret
─────────────────────            ──────────────────────────────────
workspace → platform             platform → workspace
hash stored, plaintext gone      plaintext stored (platform reads back)
workspace presents bearer        platform presents bearer
platform validates by hash       workspace validates by file compare
Distinct roles, distinct rotation lifecycle, distinct audit signal —
splitting later would require a fleet-wide rolling rotation, so paying
the schema cost up front.
Changes:
* migration 044: ADD COLUMN workspaces.platform_inbound_secret TEXT
* wsauth.IssuePlatformInboundSecret + ReadPlatformInboundSecret
* issueAndInjectInboundSecret hook in workspace_provision: mints
on every workspace create / re-provision; Docker mode writes
plaintext to /configs/.platform_inbound_secret alongside .auth_token,
SaaS mode persists to DB only (workspace will receive via
/registry/register response in a follow-up PR)
* 8 unit tests against sqlmock — covers happy path, rotation, NULL
column, empty string, missing workspace row, empty workspaceID
PR-B (next) wires up workspace-side `/internal/chat/uploads/ingest`
that validates the bearer against /configs/.platform_inbound_secret.
Refs #2312 (parent RFC), #2308 (chat upload 503 incident).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2309 added an early-return that 422'd uploads to external workspaces
with "file upload not supported." Both halves of that diagnosis were wrong:
1. External workspaces SHOULD support uploads — gating with 422
locks off intended functionality and labels it as design.
2. The 503 the user actually hit was on an INTERNAL workspace, not
an external one. The runtime check never even ran.
Real root cause (separate fix incoming):
- findContainer(...) requires a non-nil h.docker.
- In SaaS (MOLECULE_ORG_ID set), main.go selects the CP provisioner
instead of the local Docker provisioner — dockerCli is nil.
- findContainer short-circuits to "" → 503 "container not running"
on every workspace, internal or external, on Railway-hosted
SaaS where workspaces actually live on EC2.
This PR strips the misleading gate so #2308 can be re-investigated
against the real symptom. The proper fix routes the multipart upload
over HTTP to the workspace's URL when dockerCli is nil — tracked
as a follow-up.
Refs #2308.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Symptom: pasting a screenshot into the canvas chat for a runtime="external"
workspace returned `503 {"error":"workspace container not running"}` —
accurate from the upload handler's POV (no container exists for external
workspaces) but misleading because it implies the container has crashed.
Fix: detect runtime="external" via DB lookup BEFORE the container-find
step and return 422 with:
- error: "file upload not supported for external workspaces"
- detail: explains why + points at admin/secrets workaround +
references issue #2308 for the v0.2 native-support roadmap
- runtime: "external" (machine-readable for clients)
Why 422 not 200/501:
- 422 = Unprocessable Entity — the request is well-formed but the
workspace's runtime can't accept it. Standard REST semantics.
- 200 with empty result would lie; 501 implies the API itself is
unimplemented (it's not — works for non-external workspaces); 503
was the misleading status this PR fixes.
Verified via live E2E against localhost:
- Created `runtime=external,external=true` workspace
- Posted multipart to /workspaces/:id/chat/uploads
- Got 422 with the expected structured body
Unit test (`chat_files_external_test.go`) pins the contract via sqlmock
+ httptest. Notable: the handler is constructed with `templates: nil`
to prove the runtime check happens BEFORE any docker plumbing — if a
future change moves the check below findContainer, the test crashes
on nil-deref instead of silently regressing.
Out of scope (for v0.2 follow-up):
- Native external-workspace file ingest via artifacts table or the
channel-plugin's inbox/ pattern. Requires separate design pass.
Closes #2308
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a third snippet alongside externalCurlTemplate / externalPythonTemplate
in workspace-server/internal/handlers/external_connection.go: the new
externalChannelTemplate guides operators through installing the Claude Code
channel plugin (Molecule-AI/molecule-mcp-claude-channel — scaffolded today)
and dropping the .env config for it.
Wires the new snippet into the external-workspace POST /workspaces response
under key `claude_code_channel_snippet`, alongside the existing
`curl_register_template` and `python_snippet`. Canvas's "external workspace
created" modal can render it as a third tab.
CONTRIBUTING.md gains a short "External integrations" section pointing at
the three peer repos (workspace-runtime, sdk-python, mcp-claude-channel)
so contributors know where related runtime artifacts live and to consider
downstream impact when changing the A2A wire shape.
The plugin itself is scaffolded at commit d07363c on the new repo's main
branch; v0.1 is polling-based via the /activity?since_secs= filter shipped
in PR #2300. README + roadmap details there.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The harness runner (scripts/measure-coordinator-task-bounds-runner.sh)
calls `/workspaces/:id/activity?since_secs=$A2A_TIMEOUT` to scope a
trace to a specific test window. The query param was silently
ignored — `ActivityHandler.List` accepted only `type`, `source`, and
`limit`, so the runner got the most-recent-100 events regardless of
how long ago they happened. Works for fresh-tenant tests where
activity_logs is ~empty pre-run, breaks on busy tenants and on tests
that exceed 100 events.
Adds `since_secs` parsing with three behaviors:
- Valid positive int → `AND created_at >= NOW() - make_interval(secs => $N)`
on the SQL. Parameterised; values bound via lib/pq, not interpolated.
`make_interval(secs => $N)` is required — the `INTERVAL '$N seconds'`
literal form rejects placeholder substitution inside the string.
- Above 30 days (2_592_000s) → silently clamped to the cap. Defends
against a paranoid client triggering a multi-month full-table scan
via `since_secs=999999999`.
- Negative, zero, or non-integer → 400 with a structured error, NOT
silently dropped. Silent drop is exactly the bug this is fixing
— a typoed param shouldn't be lost as most-recent-100.
Tests cover all four paths: accepted (with arg-binding assertion via
sqlmock.WithArgs), clamped at 30 days, invalid rejected (5 sub-cases),
and omitted (verifies no extra clause / arg leak via strict WithArgs
count).
RFC #2251 §V1.0 step 6 (platform-side-transition audit) also depends
on this for time-window filtering of activity_logs.
Closes #2268
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Per-workspace `restartState` entries (introduced under the name
`restartMu` pre-#2266, renamed to `restartStates` in #2266) are
created via `LoadOrStore` in `workspace_restart.go` but never
deleted. On a long-running platform process serving many short-lived
workspaces (E2E tests, transient sandbox tenants), the sync.Map grows
monotonically — ~16 bytes per workspace ever created.
Fix: call `restartStates.Delete(wsID)` after stopAndRemove +
ClearWorkspaceKeys for each cascaded descendant and the parent. Mirrors
the existing per-ID cleanup loop. `sync.Map.Delete` is safe on absent
keys, so workspaces that were never restarted (no LoadOrStore call)
are a no-op.
This is a pre-existing leak — #2266 did not introduce it; just renamed
the holder. Filing as a separate commit to keep the change minimal and
reviewable.
Closes #2269
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The pre-#2290 `force: true` flag on POST /org/import skipped the
required-env preflight, letting orgs import without their declared
required keys (e.g. ANTHROPIC_API_KEY). The ux-ab-lab incident: that
import path was used, the org shipped without ANTHROPIC_API_KEY in
global_secrets, and every workspace 401'd on the first LLM call.
Per #2290 picks (C/remove/both):
- Q1=C: template-derived required_env (no schema change — already
the existing aggregation via collectOrgEnv).
- Q2=remove: drop the bypass entirely. The seed/dev-org flow that
legitimately needs to skip becomes a separate dry-run-import path
with its own audit trail, not a permission bypass.
- Q3=block-at-import-only: provision-time drift logging is a
follow-up; for this PR, blocking at import is the gate.
Surface change:
- Force field removed from POST /org/import request body.
- 412 "suggestion" text drops the "or pass force=true" guidance.
- Legacy callers sending {"force": true} are silently tolerated
(Go's json.Unmarshal drops unknown fields), so no client-side
breakage; the bypass effect is just gone.
Audited callers in this repo:
- canvas/src/components/TemplatePalette.tsx — never sends force.
- scripts/post-rebuild-setup.sh — never sends force.
- Only external tooling sent force=true. Those callers must now set
the global secret via POST /settings/secrets before importing.
Adds TestOrgImport_ForceFieldRemoved as a structural pin: if a future
change re-adds Force to the body struct, the test fails and forces an
explicit reckoning with the #2290 rationale.
Closes #2290
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Self-review caught a regression I introduced in #2266: if cycle() panics
(e.g. a future provisionWorkspace nil-deref or any runtime error from
the DB / Docker / encryption stacks it touches), the loop never reaches
`state.running = false`. The flag stays true forever, the early-return
guard at the top of coalesceRestart fires for every subsequent call,
and that workspace is permanently locked out of restarts until the
platform process restarts.
The pre-fix code had similar exposure (panic killed the goroutine
before defer wsMu.Unlock() ran in some Go versions), but my pending-
flag version made it worse: the guard is sticky, not ephemeral.
Fix: defer the state-clear so it always runs on exit, including panic.
Recover (and DON'T re-raise) so the panic doesn't propagate to the
goroutine boundary and crash the whole platform process — RestartByID
is always called via `go h.RestartByID(...)` from HTTP handlers, and
an unrecovered goroutine panic in Go terminates the program. Crashing
the platform for every tenant because one workspace's cycle panicked
is the wrong availability tradeoff. The panic message + full stack
trace via runtime/debug.Stack() are still logged for debuggability.
Regression test in TestCoalesceRestart_PanicInCycleClearsState:
1. First call's cycle panics. coalesceRestart's defer must swallow
the panic — assert no panic propagates out (would crash the
platform process from a goroutine in production).
2. Second call must run a fresh cycle (proves running was cleared).
All 7 tests pass with -race -count=10.
Surfaced via /code-review-and-quality self-review of #2266; the
re-raise-after-recover anti-pattern (originally argued as "don't
mask bugs") came up in the comprehensive review and was corrected
to log-with-stack-and-suppress for availability.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The naive mutex-with-TryLock pattern in RestartByID was silently dropping
the second of two close-together restart requests. SetSecret and SetModel
both fire `go restartFunc(...)` from their HTTP handlers, and both DB
writes commit before either restart goroutine reaches loadWorkspaceSecrets.
If the second goroutine arrives while the first holds the per-workspace
mutex, TryLock returns false and the second is logged-and-dropped:
Auto-restart: skipping <id> — restart already in progress
The first goroutine's loadWorkspaceSecrets ran before the second write
committed, so the new container boots without that env var. Surfaced
during the RFC #2251 V1.0 measurement as hermes returning "No LLM
provider configured" when MODEL_PROVIDER landed after the API-key write
and lost its restart to the mutex (HERMES_DEFAULT_MODEL absent →
start.sh fell back to nousresearch/hermes-4-70b → derived
provider=openrouter → no OPENROUTER_API_KEY → request-time error).
The same race hits any back-to-back secret/model save flow including
the canvas's "set MiniMax key + pick model" UX.
Fix: pending-flag / coalescing pattern. Any restart request that arrives
while one is in flight sets `pending=true` and returns. The in-flight
runner, on completion, checks the flag and runs another cycle. This
collapses N concurrent requests into at most 2 sequential cycles (the
current one + one more that picks up everyone who arrived during it),
while guaranteeing the final container always sees the latest secrets.
Concrete contract:
- 1 request, no concurrency: 1 cycle
- N concurrent requests during 1 in-flight cycle: 2 cycles total
- N sequential requests (no overlap): N cycles
- Per-workspace state — different workspaces never serialize
Coalescing is extracted into `coalesceRestart(workspaceID, cycle func())`
so the gate logic is testable without the full WorkspaceHandler / DB /
provisioner stack. RestartByID now wraps that with the production cycle
function. runRestartCycle calls provisionWorkspace SYNCHRONOUSLY (drops
the historical `go`) so the loop's pending-flag check happens AFTER the
new container is up — without that, the next cycle's Stop call would
race the previous cycle's still-spawning provision goroutine.
sendRestartContext stays async; it's a one-way notification.
Tests in workspace_restart_coalesce_test.go cover all five contract
points + race-detector clean over 10 iterations:
- Single call → 1 cycle
- 5 concurrent during in-flight → exactly 2 cycles total
- 3 sequential → 3 cycles
- Pending-during-cycle picked up (targeted bug repro)
- State cleared after drain (running flag reset)
- Per-workspace isolation (no cross-workspace serialization)
Refs: molecule-core#2256 (V1.0 gate measurement); root cause for the
"No LLM provider configured" symptom seen during hermes/MiniMax repro.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Consolidates 11 of the 17 open Dependabot PRs (#2215, #2217, #2219-#2225,
#2227, #2229) into one PR. Every entry is a patch / minor / floor bump
where the impact surface is small and CI carries the proof.
Same pattern as the 2026-04-15 batch.
Go (workspace-server/go.mod + go.sum, regenerated via `go mod tidy`):
- golang.org/x/crypto 0.49.0 → 0.50.0 (#2225)
- github.com/golang-jwt/jwt/v5 5.2.2 → 5.3.1 (#2222)
- github.com/gin-contrib/cors 1.7.2 → 1.7.7 (#2220)
- github.com/docker/go-connections 0.6.0 → 0.7.0 (#2223)
- github.com/redis/go-redis/v9 9.7.3 → 9.19.0 (#2217)
Python floor bumps (workspace/requirements.txt; current pip-resolved
versions don't change unless they happen to be below the new floor):
- httpx >=0.27 → >=0.28.1 (#2221)
- uvicorn >=0.30 → >=0.46 (#2229)
- temporalio >=1.7 → >=1.26 (#2227)
- websockets >=12 → >=16 (#2224)
- opentelemetry-sdk >=1.24 → >=1.41.1 (#2219)
GitHub Actions (SHA-pinned per existing convention):
- dorny/paths-filter@d1c1ffe (v3) → @fbd0ab8 (v4.0.1) (#2215)
REMOVED from this batch (lockfile platform mismatch):
- #2231 @types/node ^22 → ^25.6 (npm install on macOS strips
Linux-only @emnapi/* entries from package-lock.json that CI's
`npm ci` then refuses; needs a Linux-side install to land cleanly)
- #2230 jsdom ^25 → ^29.1 (same)
NOT included in this batch (deferred to per-PR human review):
- #2228 github/codeql-action v3 → v4 (CodeQL CLI alignment risk)
- #2218 actions/setup-node v4 → v6 (default Node version drift)
- #2216 actions/upload-artifact v4 → v7 (3 major versions)
- #2214 actions/setup-python v5 → v6 (action major)
NOT merged (CI failing on dependabot's own PR):
- #2233 next 15 → 16
- #2232 tailwindcss 3 → 4
- #2226 typescript 5 → 6
Verified:
- workspace-server: `go mod tidy && go build ./... && go test ./...` — green
- workspace requirements.txt: floor bumps only
Independent code review caught a real bug in the previous commit's
stale-token revoke pass. The platform's restart endpoint
(workspace_restart.go:104) Stops the workspace container synchronously
then dispatches re-provisioning to a goroutine (line 173). For a
workspace that's been idle past the 5-minute grace window — extremely
common: user comes back to a long-idle workspace and clicks Restart —
this opens a race window:
1. Container stopped → ListWorkspaceContainerIDPrefixes returns no
entry → workspace becomes a stale-token candidate.
2. issueAndInjectToken runs in the goroutine: revokes old tokens,
issues a fresh one, writes it to /configs/.auth_token.
3. If the sweeper's predicate-only UPDATE
`WHERE workspace_id = $1 AND revoked_at IS NULL` runs AFTER
IssueToken commits but is racing the SELECT-then-UPDATE window,
it revokes the freshly-issued token alongside the old ones.
4. Container starts with a now-revoked token → 401 forever.
The fix carries the SAME staleness predicate from the SELECT into the
per-workspace UPDATE: a token created within the grace window can't
match `< now() - grace` and is automatically excluded. The operation
is now idempotent against fresh inserts.
Also addresses other findings from the same review:
- Add `status NOT IN ('removed', 'provisioning')` to the SELECT
(R2 + first-line C1 defence). 'provisioning' is set synchronously
in workspace_restart.go before the async re-provision begins, so
it's a reliable in-flight signal that narrows the candidate set.
- Stop calling wsauth.RevokeAllForWorkspace from the sweeper —
that helper revokes EVERY live token unconditionally; the sweeper
needs "every STALE live token" which is a different (safer)
operation. Inline the UPDATE so we own the predicate end-to-end.
Drop the wsauth import (no longer needed in this package).
- Tighten expectStaleTokenSweepNoOp regex to anchor at start and
require the status filter, so a future query whose first line
coincidentally starts with "SELECT DISTINCT t.workspace_id" can't
silently absorb the helper's expectation (R3).
- Defensive `if reaper == nil { return }` at top of
sweepStaleTokensWithoutContainer — even though StartOrphanSweeper
already short-circuits on nil, a future refactor that wires this
pass directly without checking would otherwise mass-revoke in
CP/SaaS mode (F2).
- Comment in the function explaining why an empty container list is
intentionally NOT a short-circuit (asymmetry with the first two
passes is the whole point — "no containers running" is the
load-bearing case).
- Add TestSweepOnce_StaleTokenRevokeUsesStalenessPredicate that
asserts the UPDATE shape (predicate present, grace bound). A
real-Postgres integration test would prove the race resolution
end-to-end; this catches the regression where someone simplifies
the UPDATE back to predicate-only.
- Add TestSweepStaleTokens_NilReaperEarlyExit pinning the F2 guard.
Existing tests updated to match the new query/UPDATE shape with tight
regexes that pin all the safety guards (status filter, staleness
predicate in both SELECT and UPDATE).
Full Go suite green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Heals the user-reported "auth token conflict after volume wipe" failure
mode. When an operator nukes a workspace's /configs volume outside the
platform's restart endpoint (common via `docker compose down -v` or
manual cleanup scripts), the DB still holds live workspace_auth_tokens
for that workspace while the recreated container has an empty
/configs/.auth_token. Subsequent /registry/register calls 401 forever:
requireWorkspaceToken sees live tokens, container has no token to
present, and the workspace is permanently wedged until an operator
manually revokes via SQL.
The platform's restart endpoint already handles this correctly via
wsauth.RevokeAllForWorkspace inside issueAndInjectToken. This change
adds a third orphan-sweeper pass — sweepStaleTokensWithoutContainer —
as the safety net for the equivalent action taken outside the API.
Detection criterion: workspace has at least one live (non-revoked)
token whose most-recent activity (COALESCE(last_used_at, created_at))
is older than staleTokenGrace (5 minutes), AND no live Docker
container's name prefix matches the workspace ID.
Safety filters that bound the revoke radius:
1. Only runs in single-tenant Docker mode. The orphan sweeper is
wired only when prov != nil in cmd/server/main.go — CP/SaaS mode
never gets here, so an empty container list cannot be confused
with "no Docker at all" (which would otherwise revoke every
workspace's tokens in production SaaS).
2. staleTokenGrace = 5min skips tokens issued/used in the last
5 minutes. Bounds the race with mid-provisioning (token issued
moments before docker run completes) and brief restart windows
— a healthy workspace touches last_used_at every 30s heartbeat,
so 5min is 10× the heartbeat interval.
3. The query joins workspaces.status != 'removed' so deleted
workspaces are not revoked here (handled at delete time by the
explicit RevokeAllForWorkspace call).
4. make_interval(secs => $2) avoids a time.Duration.String() →
"5m0s" mismatch with Postgres interval grammar that I caught
during implementation.
5. Each revocation logs the workspace ID so operators can correlate
"workspace just lost auth" with this sweeper, not blame a
network blip.
Failure mode: revoke fails (transient DB error). Loop bails to avoid
log spam; next 60s cycle retries. Worst case a workspace stays
401-blocked an extra minute.
Tests: 5 new tests covering the headline scenario, the safety gate
(workspace with container is NOT revoked), revoke-failure-bails-loop,
query-error-non-fatal, and Docker-list-failure-skips-cycle. All 11
existing sweepOnce tests updated to register the new third-pass query
expectation via a small `expectStaleTokenSweepNoOp` helper that keeps
their existing assertions readable.
Full Go test suite green: registry, wsauth, handlers, and all other
packages.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Canvas Agent Comms bubbles for outbound delegation showed only
"Delegating to <peer>" boilerplate during the live update window —
the actual task text only surfaced after a refresh re-fetched the row
from /workspaces/:id/activity. Symptom flagged today during a fresh
delegation manual test where the bubble said "Delegating to Perf
Auditor" instead of the user's "audit moleculesai.app for
performance" prompt.
Root cause: LogActivity's broadcast payload at activity.go:510-518
deliberately omitted request_body and response_body, so the canvas's
live-update path (AgentCommsPanel.tsx:271-289) saw `p.request_body =
undefined` and toCommMessage fell back to the
`Delegating to ${peerName}` template string. The DB row stored the
real task / reply, which is why GET-on-mount worked.
Fix: include both bodies in the broadcast as json.RawMessage values
(no re-marshal cost — they were already encoded for the DB insert
above). Same pattern as tool_trace, which has been included since #1814.
Payload size is bounded by the workspace-side caller: the runtime's
report_activity helper already caps error_detail at 4096 chars and
summary at 256, and request/response bodies are constrained by the
runtime's own limits — a typical delegate_task payload runs from
hundreds of chars to a few KB. If a much-larger broadcast becomes a
concern later, a soft cap can be added at this site without breaking
the contract.
Regression tests pin the broadcast shape:
- request_body present → canvas renders the actual task text
- response_body present → canvas renders the actual reply text
- response_body nil → omitted from payload (no empty-bubble flicker)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cascade-deleting a 7-workspace org returned 500 with
"workspace marked removed, but 2 stop call(s) failed — please retry:
stop eeb99b5d-...: force-remove ws-eeb99b5d-607: Error response
from daemon: removal of container ws-eeb99b5d-607 is already in
progress"
even though the DB-side post-condition succeeded (removed_count=7) and
the containers WERE removed shortly after. The fanout fired Stop() on
every workspace concurrently and the orphan sweeper happened to reap
two of them at the same instant, so Docker rejected the second
ContainerRemove with "removal already in progress" — a race-condition
ack, not a real failure. Retrying just races the same in-flight
removal.
The post-condition we care about (the container WILL be gone) is
identical to a successful removal, so Stop() should treat it the
same way it already treats "No such container" — a no-op return nil
that lets the caller proceed with volume cleanup. Real daemon
failures (timeout, EOF, ctx cancel) still surface as errors.
Two pieces:
- New isRemovalInProgress() predicate using the same string-match
approach as isContainerNotFound (docker/docker has no typed
errdef for this; the CLI itself relies on the message).
- Stop() now treats the predicate as success, with a log line
distinct from the not-found path so debugging can tell which
race fired.
Both substrings ("removal of container" + "already in progress") must
match — "already in progress" alone would false-positive on unrelated
operations like image pulls. Truth table pinned in 7 new test cases.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the LOW-severity dependabot alert on workspace-server's go-redis
pin. Upstream advisory GHSA-92cp-5422-2mw7: "go-redis allows potential
out-of-order responses when CLIENT SETINFO times out" — fixed in 9.7.3.
Patch bump within the v9.7 line; semver guarantees no API change.
Full workspace-server test suite passes (18/18 packages clean).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the medium-severity dependabot alert #7 on workspace-server's
docker pin: "Moby firewalld reload makes published container ports
accessible from remote hosts" — fixed in v28.3.3, pulling v28.5.2
(latest in the v28 line).
Patch+minor bump within the v28 train; no client-API breaks
(workspace-server only uses docker.Client for container exec /
inspect, all stable since v20+).
Verification: full workspace-server test suite passes (18/18 packages
clean). Build clean.
Out of scope:
- Alerts #10 and #11 (the AuthZ bypass + plugin-priv off-by-one)
require v29.3.1, which is not yet published to the Go module
proxy (latest published is v28.5.2). They'll close in a follow-up
PR once v29 lands as a Go module.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the HIGH-severity dependabot alert on workspace-server's jwt-go
pin. Upstream advisory GHSA-mh63-6h87-95cp / CVE-2025-30204:
"jwt-go allows excessive memory allocation during header parsing" —
fixed in v5.2.2.
Patch bump within the v5.x line; semver guarantees no API change. Full
workspace-server test suite passes (`go test ./...` clean across all
18 packages).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the second of two skipped tests in workspace_provision_test.go
that were blocked on interface refactors. The Broadcaster + CP
provisioner halves landed in earlier #1814 cycles; this is the
plugin-source-registry half.
Refactor:
- Add handlers.pluginSources interface with the 3 methods handler
code actually calls (Register, Resolve, Schemes)
- Compile-time assertion `var _ pluginSources = (*plugins.Registry)(nil)`
catches future method-signature drift at build time
- PluginsHandler.sources narrowed from *plugins.Registry to the
interface; production wiring (NewPluginsHandler, WithSourceResolver)
still passes *plugins.Registry — satisfies the interface
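The compile-time assertion is a standard Go idiom; a minimal sketch with illustrative method signatures (the real interface carries Register, Resolve, Schemes on *plugins.Registry):

```go
package main

import "fmt"

// Narrow interface with only the methods handler code calls — method
// names from the commit; signatures here are assumptions.
type pluginSources interface {
	Register(scheme string) error
	Resolve(source string) (string, error)
	Schemes() []string
}

// Concrete registry standing in for *plugins.Registry.
type Registry struct{ schemes []string }

func (r *Registry) Register(s string) error           { r.schemes = append(r.schemes, s); return nil }
func (r *Registry) Resolve(s string) (string, error)  { return s, nil }
func (r *Registry) Schemes() []string                 { return r.schemes }

// Compile-time assertion: if Registry's method set drifts from the
// interface, this line fails the build rather than a test.
var _ pluginSources = (*Registry)(nil)

func main() {
	var src pluginSources = &Registry{}
	src.Register("github")
	fmt.Println(src.Schemes())
}
```

Production wiring keeps passing the concrete pointer, which satisfies the interface without any call-site changes.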
Production fix (#1206 leak):
- resolveAndStage's Fetch-failure path was interpolating err.Error()
into the HTTP response body via `failed to fetch plugin from %s: %v`.
Resolver errors routinely contain rate-limit text, github request
IDs, raw HTTP body fragments, and (for local resolvers) file system
paths — none has any business landing in a user's browser.
- Body now carries just `failed to fetch plugin from <scheme>`; the
status code already differentiates the failure shape (404 not
found, 504 timeout, 502 generic). Full err detail stays in the
server-side log line one statement above.
Test:
- 6 sub-tests covering every error path inside resolveAndStage:
empty source, invalid format, unknown scheme, local
path-traversal, unpinned github (PLUGIN_ALLOW_UNPINNED unset),
Fetch failure with a leaky synthetic error
- The Fetch-failure case plants 5 realistic leak markers in the
resolver's error string (rate limit text, x-github-request-id,
auth_token, ghp_-prefixed token, /etc/passwd path); the assertion
fails if ANY appears in the response body
- Table-driven so a future error path added to resolveAndStage gets
one new row, not a copy-paste of the assertion logic
Verification:
- 6/6 sub-tests pass
- Full workspace-server test suite passes (interface refactor is
non-breaking; production caller paths unchanged)
- go build ./... clean
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The skipped test exists to assert that provisionWorkspaceCP never
leaks err.Error() in WORKSPACE_PROVISION_FAILED broadcasts (regression
guard for #1206). Writing the test body required substituting a
failing CPProvisioner — but the handler's `cpProv` field was the
concrete *CPProvisioner type, so a mock had nowhere to plug in.
Refactor:
- Add provisioner.CPProvisionerAPI interface with the 3 methods
handlers actually call (Start, Stop, GetConsoleOutput)
- Compile-time assertion `var _ CPProvisionerAPI = (*CPProvisioner)(nil)`
catches future method-signature drift at build time
- WorkspaceHandler.cpProv narrowed to the interface; SetCPProvisioner
accepts the interface (production caller passes *CPProvisioner
from NewCPProvisioner unchanged)
Test:
- stubFailingCPProv whose Start returns a deliberately leaky error
(machine_type=t3.large, ami=…, vpc=…, raw HTTP body fragment)
- Drive provisionWorkspaceCP via the cpProv.Start failure path
- Assert broadcast["error"] == "provisioning failed" (canned)
- Assert no leak markers (machine type, AMI, VPC, subnet, HTTP
body, raw error head) in any broadcast string value
- Stop/GetConsoleOutput on the stub panic — flags a future
regression that reaches into them on this path
Verification:
- Full workspace-server test suite passes (interface refactor
is non-breaking; production caller path unchanged)
- go build ./... clean
- The other skipped test in this file (TestResolveAndStage_…)
is a separate plugins.Registry refactor and remains skipped
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a target workspace's adapter has declared
provides_native_session=True (claude-code SDK's streaming session,
hermes-agent's in-container event log), the SDK owns its own queue/
session state. Adding the platform's a2a_queue layer on top would
double-buffer the same in-flight state — and worse, the platform
queue's drain timing has no relationship to the SDK's actual readiness,
so the queued request might dispatch while the SDK is STILL busy.
Behavior change: in handleA2ADispatchError, when isUpstreamBusyError(err)
fires and the target declared native_session, return 503 + Retry-After
directly without enqueueing. The caller's adapter handles retry on
its own schedule, and the SDK's own queue absorbs the request when
ready. Response body carries native_session=true so callers can
distinguish this from queue-failure 503s.
Observability is preserved: logA2AFailure still runs above; the
broadcaster still fires; the activity_logs row records the busy event
just like the platform-fallback path.
This is the consumer that validates the template-side declarations
already shipped in:
- molecule-ai-workspace-template-claude-code PR #12
- molecule-ai-workspace-template-hermes PR #25
Once those merge + image tags bump, claude-code + hermes workspaces'
busy 503s skip the platform queue end-to-end. End-to-end validation
of capability primitive #5.
Tests (2 new):
- NativeSession_SkipsEnqueue: cache pre-populated, deliberate
sqlmock with NO INSERT INTO a2a_queue expected — implicit
regression cover (sqlmock fails on unexpected queries). Asserts
503 + Retry-After + native_session=true marker in body.
- NoNativeSession_StillEnqueues: negative pin — empty cache, same
busy error → falls through to EnqueueA2A (which fails in this
test, falls through to legacy 503 without native_session marker).
Verification:
- All Go handlers tests pass (2 new + existing)
- go build + go vet clean
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When an adapter declares provides_native_status_mgmt=True (because its
SDK reports its own ready/degraded/failed state explicitly), the
platform's error-rate-based status inference fights the adapter's own
state machine. This PR gates the inference branches on the capability
flag — adapter-driven transitions become authoritative.
Components:
- registry.go evaluateStatus: the two inferred-status branches
(online → degraded when error_rate ≥ 0.5; degraded → online when
error_rate < 0.1 and runtime_state is empty) are now skipped when
runtimeOverrides.HasCapability("status_mgmt") reports native
ownership.
- The wedged-branch (RuntimeState == "wedged" → degraded) is NOT
gated. That path is the adapter's OWN self-report, not platform
inference, and stays active under native_status_mgmt — adapters
can still drive transitions via runtime_state.
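The gating described above, as a condensed sketch (thresholds from this description; the real evaluateStatus lives in registry.go):

```go
package main

import "fmt"

// evaluateStatusSketch mirrors the branch structure: adapter
// self-report first, then platform inference only when the adapter
// has NOT declared native status management.
func evaluateStatusSketch(current string, errorRate float64, runtimeState string, nativeStatusMgmt bool) string {
	// The wedged branch is NOT gated — it is the adapter's own report.
	if runtimeState == "wedged" {
		return "degraded"
	}
	if nativeStatusMgmt {
		return current // platform inference suppressed
	}
	if current == "online" && errorRate >= 0.5 {
		return "degraded"
	}
	if current == "degraded" && errorRate < 0.1 && runtimeState == "" {
		return "online"
	}
	return current
}

func main() {
	fmt.Println(evaluateStatusSketch("online", 0.8, "", true))       // online — inference skipped
	fmt.Println(evaluateStatusSketch("online", 0.8, "", false))      // degraded
	fmt.Println(evaluateStatusSketch("online", 0.0, "wedged", true)) // degraded — self-report respected
}
```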
Python side: no change. The capability map is already serialized via
RuntimeCapabilities.to_dict() in PR #2137 and sent in the heartbeat's
runtime_metadata block via PR #2139. An adapter setting
RuntimeCapabilities(provides_native_status_mgmt=True) automatically
flows through.
Tests (3 new):
- SkipsDegradeInference: error_rate=0.8 + currentStatus=online + native
flag set → degrade UPDATE does NOT fire (sqlmock fails on unexpected
query, which is the regression cover)
- SkipsRecovery: error_rate=0.05 + currentStatus=degraded + native →
recovery UPDATE does NOT fire
- WedgedStillRespected: runtime_state="wedged" + native → wedged
branch DOES fire (adapter self-report stays active)
Verification:
- All Go handlers tests pass (3 new + existing)
- 1308/1308 Python pytest pass (unchanged — Python side unmodified)
- go build + go vet clean
Stacked on #2140 (already merged via cascade); branch is current with
staging since #2139 and #2140 merged.
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When an adapter declares provides_native_scheduler=True (because its
SDK has built-in cron / Temporal-style workflows), the platform's
polling loop must skip firing schedules for that workspace — otherwise
the schedule fires twice (once natively, once via platform). The
native skip preserves observability (next_run_at still advances, the
schedule row stays in the DB, last_run_at would still update) while
moving the FIRE responsibility to the SDK.
Stacked on PR #2139 (idle_timeout_override end-to-end). The
RuntimeMetadata heartbeat block already carries the capability map;
this PR teaches the platform how to read and act on the scheduler bit.
Components:
- handlers/runtime_overrides.go: extended the cache to store
capability flags alongside idle timeout. Two heartbeat fields are
independent — SetIdleTimeout / SetCapabilities each update one
without stomping the other. Defensive copy on SetCapabilities so
a caller mutating its map after the call doesn't retroactively
change cached declarations. Empty entries dropped to avoid stale
husks.
- handlers/runtime_overrides.go: new HasCapability(workspaceID, name)
+ ProvidesNativeScheduler(workspaceID) — the latter is the
package-level adapter the scheduler imports (avoids a
handlers/scheduler import cycle).
- handlers/registry.go: heartbeat handler now calls SetCapabilities
in addition to SetIdleTimeout.
- scheduler/scheduler.go: NativeSchedulerCheck function-pointer DI
(mirrors the existing QueueDrainFunc pattern). New() leaves the
field nil so existing callers preserve today's "always fire"
behavior. SetNativeSchedulerCheck wires production. tick() drops
workspaces declaring native ownership before goroutine fan-out;
advances next_run_at so we don't tight-loop on the same row.
- cmd/server/main.go: wires handlers.ProvidesNativeScheduler into
the cron scheduler at server boot.
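The cache semantics described above — defensive copy, empty-entry deletion — can be sketched as (type and method names assumed; the real cache also holds idle timeouts):

```go
package main

import (
	"fmt"
	"sync"
)

// overridesCache models the capabilities half of runtime_overrides.go.
type overridesCache struct {
	mu   sync.Mutex
	caps map[string]map[string]bool
}

func (c *overridesCache) SetCapabilities(workspaceID string, caps map[string]bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(caps) == 0 {
		delete(c.caps, workspaceID) // drop empty entries — no stale husks
		return
	}
	cp := make(map[string]bool, len(caps)) // defensive copy: a caller
	for k, v := range caps {               // mutating its map later can't
		cp[k] = v                          // retroactively flip cached values
	}
	if c.caps == nil {
		c.caps = map[string]map[string]bool{}
	}
	c.caps[workspaceID] = cp
}

func (c *overridesCache) HasCapability(workspaceID, name string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.caps[workspaceID][name]
}

func main() {
	var c overridesCache
	caps := map[string]bool{"scheduler": true}
	c.SetCapabilities("ws-a", caps)
	caps["scheduler"] = false // caller mutates after the call
	fmt.Println(c.HasCapability("ws-a", "scheduler")) // still true
	fmt.Println(c.HasCapability("ws-b", "scheduler")) // false — no leak across workspaces
}
```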
Tests:
Go (7 new):
- SetCapabilitiesAndHas (round-trip)
- per-workspace isolation (ws-a's declaration doesn't leak to ws-b)
- nil/empty map clears (adapter dropping the flag restores fallback)
- SetCapabilities is a defensive copy (caller mutation can't
retroactively flip cached value)
- SetIdleTimeout preserves capabilities and vice-versa (two-field
independence)
- empty entry deleted (no stale husks)
- ProvidesNativeScheduler reads the same singleton heartbeat writes
- SetNativeSchedulerCheck wires the function (scheduler-side)
- nil-check safety contract for tick
Python: no change needed — the heartbeat already serializes the
full capability map via _runtime_metadata_payload (PR #2139). An
adapter setting RuntimeCapabilities(provides_native_scheduler=True)
automatically flows through.
Verification:
- 1308 / 1308 Python pytest pass (unchanged)
- All Go handlers + scheduler tests pass
- go build + go vet clean
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Capability primitive #2 (task #117). The first cross-cutting capability
where the adapter actually displaces platform behavior — claude-code's
streaming session can legitimately go silent for 8+ minutes during
synthesis + slow tool calls; the platform's hardcoded 5min idle timer
in a2a_proxy.go cancels it mid-flight (the bug PR #2128 patched at
the env-var layer). This PR fixes it at the right layer: the adapter
declares "I need 600s" and the platform's dispatch path honors it.
Wire shape (Python → Go):
POST /registry/heartbeat
{
  "workspace_id": "...",
  ...
  "runtime_metadata": {
    "capabilities": {"heartbeat": false, "scheduler": false, ...},
    "idle_timeout_seconds": 600   // optional, omitted = use default
  }
}
Default behavior preserved: any adapter that doesn't override
BaseAdapter.idle_timeout_override() (returns None by default) sends
no idle_timeout_seconds field; the Go side falls through to
idleTimeoutDuration (env A2A_IDLE_TIMEOUT_SECONDS, default 5min).
Existing langgraph / crewai / deepagents workspaces are unaffected.
Components:
Python:
- adapter_base.py: idle_timeout_override() method on BaseAdapter
returning None (the platform-default sentinel).
- heartbeat.py: _runtime_metadata_payload() lazy-imports the active
adapter and assembles the capability + override block. Try/except
swallows ANY error so heartbeat never breaks because of capability
discovery — observability outranks capability accuracy.
Go:
- models.HeartbeatPayload.RuntimeMetadata (pointer so absent =
"old runtime, didn't say"; explicit zero-cap = "new runtime,
declared no native ownership").
- handlers.runtimeOverrides: in-memory sync.Map cache keyed by
workspaceID. Populated by the heartbeat handler, consulted on
every dispatchA2A. Reset on platform restart (worst-case 30s of
platform-default behavior — acceptable; nothing about overrides
is correctness-critical).
- a2a_proxy.dispatchA2A: looks up the override before
applyIdleTimeout; falls through to the global default when absent.
Tests:
Python (17, all new):
- RuntimeCapabilities dataclass shape (frozen, defaults, wire keys)
- BaseAdapter.capabilities() default + override + sibling isolation
- idle_timeout_override default, positive override, dropped-override
- Heartbeat metadata producer: default adapter emits all-False,
native adapter emits flag + override, missing ADAPTER_MODULE
returns {} (graceful), zero/negative override is omitted from
wire, exception inside adapter swallowed
Go (6, all new):
- SetIdleTimeout + IdleTimeout round-trip
- Zero/negative duration clears the override
- Empty workspace_id ignored
- Replacement (heartbeat overwrites prior value)
- Reset clears entire cache
- Concurrent reads + writes (sync.Map invariant)
Verification:
- 1308 / 1308 workspace pytest pass (was 1300, +8)
- All Go handlers tests pass (6 new + existing)
- go vet clean
See project memory `project_runtime_native_pluggable.md` for the
architecture principle this implements.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[Molecule-Platform-Evolvement-Manager]
## What this fixes
Closes one of the three skipped tests in workspace_provision_test.go
that #1814's interface refactor enabled but never had a body written:
`TestProvisionWorkspace_NoInternalErrorsInBroadcast`.
The interface blocker (`captureBroadcaster` couldn't substitute for
`*events.Broadcaster`) was already fixed when `events.EventEmitter`
was extracted; this PR ships the test body that the prior refactor
made possible. The test was effectively unverified regression cover
for issue #1206 (internal error leak in WORKSPACE_PROVISION_FAILED
broadcasts) until now.
## What the test pins
Drives the **earliest** failure path in `provisionWorkspace` — the
global-secrets decrypt failure — so the setup needs only:
- one `global_secrets` mock row (with `encryption_version=99` to
force `crypto.DecryptVersioned` to error with a string that
includes the literal version number)
- one `UPDATE workspaces SET status = 'failed'` expectation
- a `captureBroadcaster` (already in the test file) injected via
`NewWorkspaceHandler`
Asserts the captured `WORKSPACE_PROVISION_FAILED` payload:
1. carries the safe canned `"failed to decrypt global secret"` only
2. does NOT contain `"version=99"`, `"platform upgrade required"`,
or the global_secret row's `key` value (`FAKE_KEY`) — the three
leak markers a regression that interpolates `err.Error()` into
the broadcast would surface
## Why not use containsUnsafeString
The test file already has a `containsUnsafeString` helper with
`"secret"` and `"token"` in its prohibition list. Those substrings
match the legitimate redacted message (`"failed to decrypt global
secret"`) — appropriate in user-facing copy, NOT a leak. Using the
broad helper would either fail the test against the source's own
correct message OR require loosening the helper for everyone else.
Per-test explicit leak markers keep the assertion precise without
weakening shared infrastructure.
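The per-test marker approach can be sketched as (marker values taken from the list above; helper name hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// containsLeakMarker checks only the precise strings THIS failure path
// could leak — not broad substrings like "secret" that also match the
// legitimate redacted message.
func containsLeakMarker(payload string) (string, bool) {
	for _, m := range []string{"version=99", "platform upgrade required", "FAKE_KEY"} {
		if strings.Contains(payload, m) {
			return m, true
		}
	}
	return "", false
}

func main() {
	// Contains "secret" — fine in user-facing copy, not a leak.
	safe := `{"error":"failed to decrypt global secret"}`
	leaky := `{"error":"decrypt failed: version=99: platform upgrade required (key FAKE_KEY)"}`

	_, bad := containsLeakMarker(safe)
	fmt.Println("safe payload leaks:", bad)
	m, _ := containsLeakMarker(leaky)
	fmt.Println("leaky payload first marker:", m)
}
```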
## What's still skipped (out of scope for this PR)
- `TestProvisionWorkspaceCP_NoInternalErrorsInBroadcast` — same
shape but blocked on a different refactor: `provisionWorkspaceCP`
routes through `*provisioner.CPProvisioner` (concrete pointer,
no interface), so the test would need either an interface
extraction or a real CPProvisioner with a mocked HTTP server.
Larger scope; deferred.
- `TestResolveAndStage_NoInternalErrorsInHTTPErr` — different
blocker (`mockPluginsSources` vs `*plugins.Registry` type
mismatch). Needs a SourceResolver-side interface refactor.
Both still carry their `t.Skip` notes documenting the remaining
work.
## Test plan
- [x] New test passes
- [x] Full handlers package suite still green (`go test ./internal/handlers/`)
- [x] No changes to production code — pure test addition
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two Critical bugs caught in code review of the agent→user attachments PR:
1. **Empty-URI attachments slipped past validation.** Gin's
go-playground/validator does NOT iterate slice elements without
`dive` — verified zero `dive` usage anywhere in workspace-server —
so the inner `binding:"required"` tags on NotifyAttachment.URI/Name
were never enforced. `attachments: [{"uri":"","name":""}]` would
pass validation, broadcast empty-URI chips that render blank in
canvas, AND persist them in activity_logs for every page reload to
re-render. Added explicit per-element validation in Notify (returns
400 with `attachment[i]: uri and name are required`) plus
defence-in-depth in the canvas filter (rejects empty strings, not
just non-strings).
3-case regression test pins the rejection.
2. **Hardcoded application/octet-stream stripped real mime types.**
`_upload_chat_files` always passed octet-stream as the multipart
Content-Type. chat_files.go:Upload reads `fh.Header.Get("Content-Type")`
FIRST and only falls back to extension-sniffing when the header is
empty, so every agent-attached file lost its real type forever —
broke the canvas's MIME-based icon/preview logic. Now sniff via
`mimetypes.guess_type(path)` and only fall back to octet-stream
when sniffing returns None.
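Fix 1's per-element check, sketched with the struct shape described above (the `binding:"required"` tags shown are the ones that were silently unenforced without `dive`; validation loop illustrative):

```go
package main

import "fmt"

type NotifyAttachment struct {
	URI  string `json:"uri" binding:"required"`  // inner tags are inert
	Name string `json:"name" binding:"required"` // without `dive` on the slice
}

// validateAttachments is the explicit per-element check:
// go-playground/validator does not descend into slice elements unless
// the slice field itself carries `dive`, so empty strings pass binding.
func validateAttachments(atts []NotifyAttachment) error {
	for i, a := range atts {
		if a.URI == "" || a.Name == "" {
			return fmt.Errorf("attachment[%d]: uri and name are required", i)
		}
	}
	return nil
}

func main() {
	bad := []NotifyAttachment{{URI: "", Name: ""}}
	fmt.Println(validateAttachments(bad))
	fmt.Println(validateAttachments([]NotifyAttachment{{URI: "workspace:/b.zip", Name: "b.zip"}}))
}
```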
Plus three Required nits:
- `sqlmockArgMatcher` was misleading — the closure always returned
true after capture, identical to `sqlmock.AnyArg()` semantics, but
named like a custom matcher. Renamed to `sqlmockCaptureArg(*string)`
so the intent (capture for post-call inspection, not validate via
driver-callback) is unambiguous.
- Test asserted notify call by `await_args_list[1]` index — fragile
to any future _upload_chat_files refactor that adds a pre-flight
POST. Now filter call list by URL suffix `/notify` and assert
exactly one match.
- Added `TestNotify_RejectsAttachmentWithEmptyURIOrName` (3 cases)
covering empty-uri, empty-name, both-empty so the Critical fix
stays defended.
Deferred to follow-up:
- ORDER BY tiebreaker for same-millisecond notifies — pre-existing
risk, not regression.
- Streaming multipart upload — bounded by the platform's 50MB total
cap so RAM ceiling is fixed; switch to streaming if cap rises.
- Symlink rejection — agent UID can already read whatever its
filesystem perms allow via the shell tool; rejecting symlinks
doesn't materially shrink the attack surface.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the gap where the Director would say "ZIP is ready at /tmp/foo.zip"
in plain text instead of attaching a download chip — the runtime literally
had no API for outbound file attachments. The canvas + platform's
chat-uploads infrastructure already supported the inbound (user → agent)
direction (commit 94d9331c); this PR wires the outbound side.
End-to-end shape:
agent: send_message_to_user("Done!", attachments=["/tmp/build.zip"])
↓ runtime
POST /workspaces/<self>/chat/uploads (multipart)
↓ platform
/workspace/.molecule/chat-uploads/<uuid>-build.zip
→ returns {uri: workspace:/...build.zip, name, mimeType, size}
↓ runtime
POST /workspaces/<self>/notify
{message: "Done!", attachments: [{uri, name, mimeType, size}]}
↓ platform
Broadcasts AGENT_MESSAGE with attachments + persists to activity_logs
with response_body = {result: "Done!", parts: [{kind:file, file:{...}}]}
↓ canvas
WS push: canvas-events.ts adds attachments to agentMessages queue
Reload: ChatTab.loadMessagesFromDB → extractFilesFromTask sees parts[]
Either path → ChatTab renders download chip via existing path
Files changed:
workspace-server/internal/handlers/activity.go
- NotifyAttachment struct {URI, Name, MimeType, Size}
- Notify body accepts attachments[], broadcasts in payload,
persists as response_body.parts[].kind="file"
canvas/src/store/canvas-events.ts
- AGENT_MESSAGE handler reads payload.attachments, type-validates
each entry, attaches to agentMessages queue
- Skips empty events (was: skipped only when content empty)
workspace/a2a_tools.py
- tool_send_message_to_user(message, attachments=[paths])
- New _upload_chat_files helper: opens each path, multipart POSTs
to /chat/uploads, returns the platform's metadata
- Fail-fast on missing file / upload error — never sends a notify
with a half-rendered attachment chip
workspace/a2a_mcp_server.py
- inputSchema declares attachments param so claude-code SDK
surfaces it to the model
- Defensive filter on the dispatch path (drops non-string entries
if the model sends a malformed payload)
Tests:
- 4 new Python: success path, missing file, upload 5xx, no-attach
backwards compat
- 1 new Go: Notify-with-attachments persists parts[] in
response_body so chat reload reconstructs the chip
Why /tmp paths work even though they're outside the canvas's allowed
roots: the runtime tool reads the bytes locally and re-uploads through
/chat/uploads, which lands the file under /workspace (an allowed root).
The agent can specify any readable path.
Does NOT include: agent → agent file transfer. Different design problem
(cross-workspace download auth: peer would need a credential to call
sender's /chat/download). Tracked as a follow-up under task #114.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Review-feedback follow-up. Pre-fix, A2A_IDLE_TIMEOUT_SECONDS=foo or =-30
fell back to the default with zero log signal — operator sets the wrong
value, sees "no effect," wastes hours debugging "why is my override not
working." Now bad-input cases log a clear message naming the variable,
the bad value, and the default applied.
Refactor: extract parseIdleTimeoutEnv(string) → time.Duration so the
parse logic is unit-testable. defaultIdleTimeoutDuration is a const so
tests reference it without re-deriving the value.
8 new unit tests cover empty / valid / negative / zero / non-numeric /
float / trailing-units inputs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two compounding bugs caused the "context canceled" wave on 2026-04-26
(15+ failed user/agent A2A calls in 1hr across 6 workspaces, including
the user's "send it in the chat" message that the director never
received):
1. **a2a_proxy.go:applyIdleTimeout cancels the dispatch after 60s of
broadcaster silence** for the workspace. Resets on any SSE event
for the workspace, fires cancel() if no event arrives in time.
2. **registry.go:Heartbeat broadcast was conditional** —
`if payload.CurrentTask != prevTask`. The runtime POSTs
/registry/heartbeat every 30s, but if current_task hasn't changed
the handler emits ZERO broadcasts. evaluateStatus only broadcasts
on online/degraded transitions — also no-op when steady.
Net: a claude-code agent on a long packaging step or slow tool call
keeps the same current_task for >60s → no broadcasts → idle timer
fires → in-flight request cancelled mid-flight with the "context
canceled" error the user sees in the activity log.
Fix:
(a) Heartbeat handler always emits a `WORKSPACE_HEARTBEAT` BroadcastOnly
event (no DB write — same path as TASK_UPDATED). At the existing 30s
runtime cadence this resets the idle timer twice per minute.
Cost is one in-memory channel send per active SSE subscriber + one
WS hub fan-out per heartbeat — far below any noise floor.
(b) idleTimeoutDuration default bumped 60s → 5min as a safety net for
any future regression where the heartbeat path goes silent (e.g.
runtime crashed mid-request before its next heartbeat). Made
env-overridable via A2A_IDLE_TIMEOUT_SECONDS for ops who want to
tune (canary tests fail-fast, prod tenants with slow plugins want

longer). Either fix alone closes today's gap; together they give
defence in depth.
The runtime side already POSTs /registry/heartbeat every 30s via
workspace/heartbeat.py — no runtime change needed.
Test: TestHeartbeatHandler_AlwaysBroadcastsHeartbeat pins the property
that an SSE subscriber observes a WORKSPACE_HEARTBEAT broadcast on a
same-task heartbeat (the regression scenario). All 16 existing handler
tests still pass.
Doesn't fix: task #102 (single SDK session bottleneck) — peers will
still queue when busy. But this PR ensures the queue/wait flow
actually completes instead of being killed by the idle timer
mid-wait.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The existing sweeper only reaps ws-* containers whose workspace row
has status='removed'. That misses the entire wiped-DB case: an
operator does `docker compose down -v` (kills the postgres volume),
the previous platform's ws-* containers keep running, the new
platform boots into an empty workspaces table — first pass finds
zero candidates and those containers leak forever. Symptom users
hit today: 7 ws-* containers from 11h ago, no rows in DB, no
visibility in Canvas, eating CPU + memory.
Fix shape:
1. Provisioner stamps every ws-* container + volume with
`molecule.platform.managed=true`. Without a label, the sweeper
would have to assume any unlabeled ws-* container might belong
to a sibling platform stack on a shared Docker daemon.
2. Provisioner exposes ListManagedContainerIDPrefixes — a label-filter
counterpart to the existing name-filter.
3. Sweeper splits sweepOnce into two independent passes:
- sweepRemovedRows (unchanged behavior; status='removed' only)
- sweepLabeledOrphansWithoutRows (new; labeled containers whose
workspace_id has no row in the table at all)
Each pass has its own short-circuit so an empty result or transient
error in one doesn't block the other — load-bearing because the
wiped-DB pass exists precisely for cases where the removed-row
pass finds nothing.
Safe under multi-platform-on-shared-daemon: only containers carrying
our label get reaped, sibling stacks' containers are invisible to this
pass. (For now the label is a constant string; a future per-instance
UUID layer can refine "ours" further if a real shared-daemon scenario
emerges.)
Migration: existing platforms running pre-PR builds have UNLABELED
ws-* containers. After this lands they continue to NOT be reaped by
the new path (no label = invisible). They'll only be cleaned via
manual intervention or once the operator recreates them — same as
today. No regression.
Tests cover all five branches of the new pass: happy-path reap,
no-reap when row exists, mixed reap-some-keep-some, Docker error
short-circuits cleanly, non-UUID prefixes get filtered before the
SQL query.
Pairs with PR #2122 (script-level fix). Together they close the
orphan-leak path for both `bash scripts/nuke-and-rebuild.sh` users
(handled by the script) AND `docker compose down -v` users (handled
by the runtime).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds an opt-in goroutine that polls GHCR every 5 minutes for digest
changes on each workspace-template-*:latest tag and invokes the same
refresh logic /admin/workspace-images/refresh exposes. With this, the
chain from "merge runtime PR" to "containers running new code" is fully
hands-off — no operator step between auto-tag → publish-runtime →
cascade → template image rebuild → host pull + recreate.
Opt-in via IMAGE_AUTO_REFRESH=true. SaaS deploys whose pipeline already
pulls every release should leave it off (would be redundant work);
self-hosters get true zero-touch.
Why a refactor of admin_workspace_images.go is in this PR:
The HTTP handler held all the refresh logic inline. To share it with
the new watcher without HTTP loopback, extracted WorkspaceImageService
with a Refresh(ctx, runtimes, recreate) (RefreshResult, error) shape.
HTTP handler is now a thin wrapper; behavior is preserved (same JSON
response, same 500-on-list-failure, same per-runtime soft-fail).
Watcher design notes:
- Last-observed digest tracked in memory (not persisted). On boot the
first observation per runtime is seed-only — no spurious refresh
fires on every restart.
- On Refresh error, the seen digest rolls back so the next tick retries.
Without this rollback a transient Docker glitch would convince the
watcher the work was done.
- Per-runtime fetch errors don't block other runtimes (one template's
brief 500 doesn't pause the others).
- digestFetcher injection seam in tick() lets unit tests cover all
bookkeeping branches without standing up an httptest GHCR server.
Verified live: probed GHCR's /token + manifest HEAD against
workspace-template-claude-code; got HTTP 200 + a real
Docker-Content-Digest. Same calls the watcher makes.
Co-authored-by: Hongming Wang <hongmingwangalt@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI on PR #2105 caught 7 Create-handler tests still mocking the
pre-#1408 10-arg INSERT signature. With the column now wired
unconditionally into the INSERT, every WithArgs that pinned
budget_limit as the 10th arg needed an 11th slot for the resolved
max_concurrent_tasks value.
Files:
- workspace_test.go: 6 tests (DBInsertError, DefaultsApplied,
WithSecrets_Persists, TemplateDefaultsMissingRuntimeAndModel,
TemplateDefaultsLegacyTopLevelModel, CallerModelOverridesTemplateDefault)
- workspace_budget_test.go: 1 test (Budget_Create_WithLimit)
All resolved values mirror the schema default, so each test's
expectation reads as the same models.DefaultMaxConcurrentTasks
const the handler writes. New imports added to both files.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Simplify pass on top of the wire-up commit:
- New const models.DefaultMaxConcurrentTasks = 1; handlers and tests
reference the symbol so the schema-default mirror lives in one place.
- Strip 5 multi-line comments that narrated what the code does.
- Drop the duplicate field-rationale on OrgWorkspace; the one on
CreateWorkspacePayload is canonical.
- Drop test-side positional comments that would silently lie if columns
get reordered.
Pure cleanup; no behaviour change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 4 of #1408 (active_tasks counter). Runtime increment/decrement,
schema column (037), and scheduler enforcement (scheduler.go:312)
already shipped — but the write path from template config.yaml +
direct API was missing, so every workspace silently fell through to
the schema default of 1. Leaders that set max_concurrent_tasks: 3 in
their org template were getting 1 anyway, defeating the entire
feature for the use case it was built for (cron-vs-A2A contention on
PM/lead workspaces).
- OrgWorkspace gains MaxConcurrentTasks (yaml + json tags)
- CreateWorkspacePayload gains MaxConcurrentTasks (json tag)
- Both INSERTs now write the column unconditionally; 0/omitted
payload value falls back to 1 (schema default mirror) so the wire
stays single-shape — no forked column list / goto.
- Existing Create-handler test mocks updated to expect the 11th arg.
- New TestWorkspaceCreate_MaxConcurrentTasksOverride locks the
payload→DB propagation for the leader case (value=3).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three files conflicted with staging changes that landed while this PR
sat open. Resolved each by combining both intents (not picking one side):
- a2a_proxy.go: keep the branch's idle-timeout signature
(workspaceID parameter + comment) AND apply staging's #1483 SSRF
defense-in-depth check at the top of dispatchA2A. Type-assert
h.broadcaster (now an EventEmitter interface per staging) back to
*Broadcaster for applyIdleTimeout's SubscribeSSE call; falls through
to no-op when the assertion fails (test-mock case).
- a2a_proxy_test.go: keep both new test suites — branch's
TestApplyIdleTimeout_* (3 cases for the idle-timeout helper) AND
staging's TestDispatchA2A_RejectsUnsafeURL (#1483 regression). Updated
the staging test's dispatchA2A call to pass the workspaceID arg
introduced by the branch's signature change.
- workspace_crud.go: combine both Delete-cleanup intents:
* Branch's cleanupCtx detachment (WithoutCancel + 30s) so canvas
hang-up doesn't cancel mid-Docker-call (the container-leak fix)
* Branch's stopAndRemove helper that skips RemoveVolume when Stop
fails (the orphan sweeper handles those)
* Staging's #1843 stopErrs aggregation so Stop failures bubble up
as 500 to the client (the EC2 orphan-instance prevention)
Both concerns satisfied: cleanup runs to completion past canvas
hangup AND failed Stop calls surface to caller.
Build clean, all platform tests pass.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
PR #2103 widened the SSRF saasMode branch to also relax RFC-1918 + ULA
under MOLECULE_ENV=development (so the docker-compose dev pattern stops
rejecting workspace registrations on 172.18.x.x bridge IPs). The
existing TestIsSafeURL_DevMode_StillBlocksOtherRanges covered the
security floor (metadata / TEST-NET / CGNAT stay blocked), but no
test asserted the positive side — that 10.x / 172.x / 192.168.x / fd00::
ARE now allowed under dev mode.
Without this test, a future refactor that quietly drops the
`|| devModeAllowsLoopback()` from isPrivateOrMetadataIP wouldn't trip
any assertion, and the docker-compose dev loop would silently re-break.
Adds TestIsSafeURL_DevMode_AllowsRFC1918 — table of 4 URLs covering
the three RFC-1918 IPv4 ranges + IPv6 ULA fd00::/8. Sets
MOLECULE_DEPLOY_MODE=self-hosted explicitly so the test exercises the
devMode branch, not a SaaS-mode pass.
Closes the Optional finding I left on PR #2103.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The production-side end of the runtime CD chain. Operators (or the post-
publish CI workflow) hit this after a runtime release to pull the latest
workspace-template-* images from GHCR and recreate any running ws-* containers
so they adopt the new image. Without this, a freshly published runtime
sat in the registry while containers kept the old image until they
naturally cycled.
Implementation notes:
- Uses Docker SDK ImagePull rather than shelling out to docker CLI — the
alpine platform container has no docker CLI installed.
- ghcrAuthHeader() reads GHCR_USER + GHCR_TOKEN env, builds the base64-
encoded JSON payload Docker engine expects in PullOptions.RegistryAuth.
Both empty → public/cached images only; both set → private GHCR pulls.
- Container matching uses ContainerInspect (NOT ContainerList) because
ContainerList returns the resolved digest in .Image, not the human tag.
Inspect surfaces .Config.Image which is what we need.
- Provisioner.DefaultImagePlatform() exported so admin handler picks the
same Apple-Silicon-needs-amd64 platform as the provisioner — single
source of truth for the multi-arch override.
Local-dev companion: scripts/refresh-workspace-images.sh runs on the
host and inherits the host's docker keychain auth — alternate path for
when GHCR_USER/TOKEN aren't set in the platform env.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Pre-fix, POST /workspaces/:id/notify (the side-channel agents use to push
interim updates and follow-up results) only broadcast via WebSocket — no
DB write. When the user refreshed the page, the chat-history loader
(which queries activity_logs) couldn't restore those messages and they
vanished from the chat.
Hits the most common path: when the platform's POST /a2a times out (idle),
the runtime keeps working and eventually pushes its reply via
send_message_to_user. The reply rendered live but disappeared on reload.
Fix: also INSERT an activity_logs row with shape the existing loader
already understands (type=a2a_receive, source_id=NULL, response_body=
{result: text}). Persistence is best-effort — a DB hiccup doesn't block
the WebSocket push (which the user is already seeing).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
The docker-compose dev pattern puts platform and workspace containers on
the same docker bridge network (172.18.0.0/16, RFC-1918). The runtime
registers via its docker-internal hostname which DNS-resolves to a
172.18.x.x IP. The SSRF defence's isPrivateOrMetadataIP rejected those,
so every workspace POST through the platform proxy returned
'workspace URL is not publicly routable' — breaking the entire docker-
compose dev loop.
Fix: in isPrivateOrMetadataIP, treat MOLECULE_ENV=development the same
as SaaS mode for RFC-1918 relaxation. Both share the 'trusted intra-
network routing' property — SaaS is sibling EC2s in the same VPC, dev
is sibling containers on the same docker bridge. Always-blocked
categories (metadata link-local, TEST-NET, CGNAT) stay blocked.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
When proxyA2A returns 202+{queued:true} (target busy → enqueued for drain
on next heartbeat), executeDelegation previously treated it as a successful
completion and ran extractResponseText on the queued JSON. The result was
'Delegation completed (workspace agent busy — request queued, will dispatch...)'
landing in activity_logs.summary, which the LLM then echoed to the user
chat as garbage.
Two fixes:
1. delegation.go: detect queued shape via new isQueuedProxyResponse helper,
write status='queued' with clean summary 'Delegation queued — target at
capacity', store delegation_id in response_body so the drain can stitch
back later. Also embed delegation_id in params.message.metadata + use it
as messageId so the proxy's idempotency-key path keys off the same id.
2. a2a_queue.go: when DrainQueueForWorkspace successfully drains a queued
item, extract delegation_id from the body's metadata and UPDATE the
originating delegate_result row (queued → completed with real
response_body). Broadcast DELEGATION_COMPLETE so the canvas chat feed
flips the queued line to completed in real time.
Closes the loop so check_task_status reflects ground truth instead of
perpetual 'queued' even after the queued request eventually drained.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
The 3 skipped tests in workspace_provision_test.go (#1206 regression
tests) were blocked because captureBroadcaster's struct-embed wouldn't
type-check against WorkspaceHandler.broadcaster's concrete
*events.Broadcaster field. This PR fixes the interface blocker for
the 2 broadcaster-related tests; the 3rd (plugins.Registry resolver)
is a separate blocker tracked elsewhere.
Changes:
- internal/events/broadcaster.go: define `EventEmitter` interface with
RecordAndBroadcast + BroadcastOnly. *Broadcaster satisfies it via
its existing methods (compile-time assertion guards future drift).
SubscribeSSE / Subscribe stay off the interface because only sse.go
+ cmd/server/main.go call them, and both still hold the concrete
*Broadcaster.
- internal/handlers/workspace.go: WorkspaceHandler.broadcaster type
changes from *events.Broadcaster to events.EventEmitter.
NewWorkspaceHandler signature updated to match. Production callers
unchanged — they pass *events.Broadcaster, which the interface
accepts.
- internal/handlers/activity.go: LogActivity takes events.EventEmitter
for the same reason — tests passing a stub no longer need to
construct the full broadcaster.
- internal/handlers/workspace_provision_test.go: captureBroadcaster
drops the struct embed (no more zero-value Broadcaster underlying
the SSE+hub fields), implements RecordAndBroadcast directly, and
adds a no-op BroadcastOnly to satisfy the interface. Skip messages
on the 2 empty broadcaster-blocked tests updated to reflect the
new "interface unblocked, test body still needed" state.
Verified `go build ./...`, `go test ./internal/handlers/`, and
`go vet ./...` all clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
#1483 flagged that dispatchA2A() doesn't call isSafeURL internally —
the guard exists only at the caller level (resolveAgentURL at
a2a_proxy.go:424). The primary call path through proxyA2ARequest
is safe today, but if any future code path ever calls dispatchA2A
directly without going through resolveAgentURL, the SSRF check
would be silently bypassed.
This adds the one-line defense-in-depth guard the issue prescribed:
    if err := isSafeURL(agentURL); err != nil {
        return nil, nil, &proxyDispatchBuildError{err: err}
    }
Wrapping as *proxyDispatchBuildError preserves the existing caller
error-classification path — the same shape that maps to 500 elsewhere.
Adds TestDispatchA2A_RejectsUnsafeURL pinning the contract:
re-enables SSRF for the test (setupTestDB disables it for normal
unit tests), passes a metadata IP, asserts the build error returns
and cancel is nil so no resource is leaked.
The 4 existing dispatchA2A unit tests use setupTestDB → SSRF
disabled, so they continue passing unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
#1484 flagged that discoverHostPeer() and writeExternalWorkspaceURL()
return URLs sourced from the workspaces table without an isSafeURL
check. Workspace runtimes register their own URLs via /registry/register
— a misbehaving / compromised runtime could register a metadata-IP URL.
Today both functions are gated by Phase 30.6 bearer-required Discover,
so exposure is theoretical. The fix makes them safe regardless of
upstream auth shape.
Changes:
- discoverHostPeer: isSafeURL on resolved URL before responding;
503 + log on rejection.
- writeExternalWorkspaceURL: same guard applied to the post-rewrite
outURL (so a host.docker.internal rewrite is checked AND a
metadata-IP that survived the rewrite untouched is rejected).
- 3 new regression tests:
* RejectsMetadataIPURL on host-peer path (169.254.169.254 → 503)
* AcceptsPublicURL on host-peer path (8.8.8.8 → 200; positive
counterpart so the rejection test can't pass via universal-fail)
* RejectsMetadataIPURL on external-workspace path
setupTestDB already disables SSRF checks via setSSRFCheckForTest,
so the 16+ existing discovery tests remain untouched. Only the new
tests opt in to enabled SSRF.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Extract walkTemplateConfigs(configsDir, fn) shared helper. Both
templates.List and loadRuntimeProvisionTimeouts walked configsDir
+ parsed config.yaml — same boilerplate twice. Now centralised so
a future template-discovery rule (subdir naming, README sentinel,
etc.) lands in one place.
- templates.List uses the walker — net -10 lines.
- loadRuntimeProvisionTimeouts uses the walker — net -10 lines.
- Document runtimeProvisionTimeoutsCache as 'NOT SAFE for
package-level reuse' so a future change doesn't accidentally
promote it to a singleton (sync.Once can't be reset → tests
would lock out other fixtures).
Skipped (review finding): atomic.Pointer[map[string]int] for
future hot-reload. The doc comment already records the limitation;
promoting the primitive now would violate YAGNI, buying a
not-yet-built feature at the cost of more code today.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 2 of #2054 — workspace-server reads runtime-level
provision_timeout_seconds from template config.yaml manifests and
includes provision_timeout_ms in the workspace List/Get response.
Phase 1 (canvas, #2092) already plumbs the field through socket →
node-data → ProvisioningTimeout's resolver, so the moment a
template declares the field the per-runtime banner threshold
adjusts without a canvas release.
Implementation:
- templates.go: parse runtime_config.provision_timeout_seconds in
the templateSummary marshaller. The /templates API now surfaces
the field too — useful for ops dashboards and future tooling.
- runtime_provision_timeouts.go (new): loadRuntimeProvisionTimeouts
scans configsDir, parses every immediate subdir's config.yaml,
returns runtime → seconds. Multiple templates with the same
runtime: max wins (so a slow template's threshold doesn't get
cut by a fast template's). Bad/empty inputs are silently
skipped — workspace-server starts cleanly with no templates.
- runtimeProvisionTimeoutsCache: sync.Once-backed lazy cache.
First workspace API request after process start pays the read
cost (~few KB across ~50 templates); every subsequent request is
a map lookup. Cache lifetime = process lifetime; invalidates on
workspace-server restart, which is the normal template-change
cadence.
- WorkspaceHandler gets a provisionTimeouts field (zero-value struct
is valid — the cache lazy-inits on first get()).
- addProvisionTimeoutMs decorates the response map with
provision_timeout_ms (seconds × 1000) when the runtime has a
declared timeout. Absent = no key in the response, canvas falls
through to its runtime-profile default. Wired into both List
(per-row decoration in the loop) and Get.
Tests (5 new in runtime_provision_timeouts_test.go):
- happy path: hermes declares 720, claude-code doesn't, only
hermes appears in the map
- max-on-duplicate: same runtime in two templates → max wins
- skip-bad-inputs: missing runtime, zero timeout, malformed yaml,
loose top-level files all silently ignored
- missing-dir: returns empty map, no crash
- cache: lazy-init on first get; subsequent gets hit cache even
after underlying file changes (sync.Once contract); unknown
runtime returns zero
Phase 3 (separate template-repo PR): template-hermes config.yaml
declares provision_timeout_seconds: 720 under runtime_config.
canvas RUNTIME_PROFILES.hermes becomes redundant + removable.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The existing tokens_test.go skips every test when db.DB is nil, so CI
ran with 0% coverage on tokens.go's List/Create/Revoke. This file adds
sqlmock-driven tests that exercise the SQL paths directly without
needing a live Postgres, lifting coverage on all 4 functions to 100%
and module-level handler coverage from 60.3% → 61.1%.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Both backends panicked when called on a zero-valued or nil receiver:
Provisioner.{Stop,IsRunning} dereferenced p.cli; CPProvisioner.{Stop,
IsRunning} dereferenced p.httpClient. The orphan sweeper and shutdown
paths can call these speculatively where the receiver isn't fully
wired — the panic crashed the goroutine instead of the caller seeing
a clean error.
Three changes:
1. Add ErrNoBackend (typed sentinel) and nil-guard the four methods.
- Provisioner.{Stop,IsRunning}: guard p == nil || p.cli == nil at
the top.
- CPProvisioner.Stop: guard p == nil up top, then httpClient nil
AFTER resolveInstanceID + empty-instance check (the empty
instance_id path doesn't need HTTP and stays a no-op success
even on zero-valued receivers — preserved historical contract
from TestIsRunning_EmptyInstanceIDReturnsFalse).
- CPProvisioner.IsRunning: same shape — empty instance_id stays
(false, nil); httpClient-nil with non-empty instance_id returns
ErrNoBackend.
2. Flip the t.Skip on TestDockerBackend_Contract +
TestCPProvisionerBackend_Contract — both contract tests run now
that the panics are gone. Skipped scenarios were the regression
guard for this fix.
3. Add TestZeroValuedBackends_NoPanic — explicit assertion that
zero-valued and nil receivers return cleanly (no panic). Docker
backend always returns ErrNoBackend on zero-valued; CPProvisioner
may return (false, nil) when the DB-lookup layer absorbs the case
(no instance to query → no HTTP needed). Both are acceptable per
the issue's contract — the gate is no-panic.
Tests:
- 6 sub-cases across the new TestZeroValuedBackends_NoPanic
- TestDockerBackend_Contract + TestCPProvisionerBackend_Contract
now run their 2 scenarios (4 sub-cases each)
- All existing provisioner tests still green
- go build ./... + go vet ./... + go test ./... clean
Closes drift-risk #6 in docs/architecture/backends.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-fix: workspace-server's provision-timeout sweep was hardcoded
at 10 min for all runtimes. The CP-side bootstrap-watcher (cp#245)
correctly gives hermes 25 min for cold-boot (hermes installs
include apt + uv + Python venv + Node + hermes-agent — 13–25 min on
slow apt mirrors is normal). The two timeout systems disagreed:
the watcher would happily wait 25 min, but the workspace-server's
10-min sweep killed healthy hermes boots mid-install at 10 min and
marked them failed.
Today's example: #2061's E2E run on 2026-04-26 at 08:06:34Z
created a hermes workspace, EC2 cloud-init was visibly making
progress on apt-installs (libcjson1, libmbedcrypto7t64) when the
sweep flipped status to 'failed' at 08:17:00Z (10:26 elapsed). The
test threw "Workspace failed: " (empty error from sql.NullString
serialization) and CI failed on a healthy boot.
Fix: provisioningTimeoutFor(runtime) — same shape as the CP's
bootstrapTimeoutFn:
- hermes: 30 min (watcher's 25 min + 5 min slack)
- others: 10 min (unchanged — claude-code/langgraph/etc. boot
in <5 min, 10 min is plenty)
PROVISION_TIMEOUT_SECONDS env override still works (applies to all
runtimes — operators who care about the runtime distinction
shouldn't use the override anyway).
Sweep query change: pulls (id, runtime, age_sec) per row instead
of pre-filtering by age in SQL. Per-row Go evaluation picks the
correct timeout. Slightly more rows scanned but bounded by the
status='provisioning' partial index — workspaces in flight, not
historical.
Tests:
- TestProvisioningTimeout_RuntimeAware — locks in the per-runtime
mapping
- TestSweepStuckProvisioning_HermesGets30MinSlack — hermes at
11 min must NOT be flipped
- TestSweepStuckProvisioning_HermesPastDeadline — hermes at
31 min IS flipped, payload includes runtime
- Existing tests updated for the new query shape
Verified:
- go build ./... clean
- go vet ./... clean
- go test ./... all green
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`Delete`'s call to `h.provisioner.Stop()` was silently swallowing
errors — and on the SaaS/EC2 backend, Stop() is the call that
terminates the EC2 via the control plane. When Stop returned an
error (CP transient 5xx, network blip), the workspace was marked
'removed' in the DB but the EC2 stayed running with no row to
track it. The "14 orphan workspace EC2s on a 0-customer account"
incident in #1843 (40 vCPU on a 64 vCPU AWS limit) traced to this
silent-leak path.
This change aggregates Stop errors across both descendant and
self-stop calls and surfaces them as 500 to the client, matching
the loud-fail pattern from CP #262 (DeprovisionInstance) and the
DNS cleanup propagation (#269).
Idempotency:
- The DB row is already 'removed' before Stop runs (intentional,
per #73 — guards against register/heartbeat resurrection).
- `resolveInstanceID` reads instance_id without a status filter,
so a retry can replay Stop with the same instance_id.
- CP's TerminateInstance is idempotent on already-terminated EC2s.
- So a retry-after-500 either re-attempts the terminate (succeeds)
or finds the instance already gone (also succeeds).
Behaviour change at the API layer:
- Before: 200 `{"status":"removed","cascade_deleted":N}` regardless
of Stop outcome.
- After: 500 `{"error":"...","removed_count":N,"stop_failures":K}`
on Stop failure; 200 on success.
RemoveVolume errors stay log-and-continue — those are local
/var/data cleanup, not infra-leak class.
Test debt acknowledged: the WorkspaceHandler's \`provisioner\` field
is the concrete \`*provisioner.Provisioner\` type, not an interface.
Adding a regression test for the new error-propagation path
requires either a refactor (introduce a Provisioner interface) or
a docker-backed integration test. Filing the refactor as a
follow-up; the change here is small and mirrors a proven pattern
(CP #262 + #269 both ship without exhaustive new test coverage
for the same reason).
Verified:
- go build ./... clean
- go vet ./... clean
- go test ./... green across the whole module (existing TestDelete
cases unchanged behaviour for happy path)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Code-quality + efficiency review of PR #2081:
- Drop comma-ok on map type-asserts in filterPeersByQuery —
queryPeerMaps writes name/role unconditionally as string, so the
silent-empty-string fallback was cargo-culted defense that would
HIDE a real upstream shape change in tests rather than surface it.
Plain p["name"].(string) panics on violation, caught by tests.
- Trim filterPeersByQuery doc from 5 lines to 1 — function is 15
lines and self-evident.
- Refactor 6 separate Test functions into one table-driven
TestPeers_QFilter with 6 sub-tests. Net ~80 lines saved + naming
becomes readable subtest names instead of TestPeers_Q_Foo_Bar.
- Set-based peer-id comparison (peerIDSet) replaces fragile
peers[0]["id"] == "ws-alpha" asserts that would silently mask a
future sort/order regression on the production code.
- Fix the broken TestPeers_Q_NoMatches assertion: re-encoding an
unmarshalled []map collapses both null and [] to [], so the
previous json.Marshal(peers) == "[]" check was tautological. Move
the [] vs null distinction to a dedicated test
(TestPeers_Q_NoMatches_RawBodyIsArrayNotNull) that inspects the
recorder body BEFORE unmarshal.
runPeersWithQuery now returns both parsed peers and raw body so the
nil-guard test can use the bytes directly.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Peers handler at workspace-server/internal/handlers/discovery.go
ignored the ?q= query param entirely — every caller got the full peer
list regardless of what they searched for. The handler exposes peer
identities + URLs, so leaking the unfiltered set on a "filtered"
endpoint is an info-disclosure bug (CWE-862).
Fix: read c.Query("q") and post-filter the in-memory peers slice by
case-insensitive substring match against name OR role. Filtering is
done in Go after the existing 3 SQL reads — keeps the SQL bytes
identical to the no-filter path (no injection vector, no DB-driver
collation surprises) at a small in-memory cost. The peer set is bounded by a
single workspace's parent + children + siblings (typically <50
rows), so the in-memory pass is negligible.
Empty / whitespace-only q is a no-op — preserves the no-filter
allocation profile.
Tests (6 new in discovery_test.go):
- TestPeers_NoQ_ReturnsAll — regression baseline (3 peers, no filter)
- TestPeers_Q_FiltersByName — q=alpha → ws-alpha only
- TestPeers_Q_CaseInsensitive — q=ALPHA → ws-alpha (locks in ToLower)
- TestPeers_Q_FiltersByRole — q=design → ws-beta (role-side match)
- TestPeers_Q_NoMatches — empty array, JSON [] not null
- TestPeers_Q_WhitespaceOnly — q=' ' treated as no-filter
Helpers peersFilterFixture + runPeersWithQuery + peerNames keep each
test scoped to the q-behaviour, not re-declaring SQL expectations.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User reported the canvas threw a generic "API GET /workspaces: 500
{auth check failed}" error when local Postgres + Redis were both
down. Two problems:
1. The error code (500) and message ("auth check failed") said
nothing useful. The actual condition was "platform can't reach
its datastore to validate your token" — a Service Unavailable
class, not Internal Server Error.
2. The canvas had no way to distinguish infra-down from a real
auth bug, so it rendered the raw API string in the same
generic-error overlay it uses for everything.
Fix in two layers:
Server (wsauth_middleware.go):
- New abortAuthLookupError helper centralises all three sites
that previously returned `500 {"error":"auth check failed"}`
when HasAnyLiveTokenGlobal or orgtoken.Validate hit a DB error.
- Now returns 503 + structured body
`{"error": "...", "code": "platform_unavailable"}`. 503 is
the correct semantic ("retry shortly, infra is unavailable")
and the code field is the contract the canvas reads.
- Body deliberately excludes the underlying DB error string —
production hostnames / connection-string fragments must not
leak into a user-visible error toast.
Canvas (api.ts):
- New PlatformUnavailableError class. api.ts inspects 503
responses for the platform_unavailable code and throws the
typed error instead of the generic "API GET /…: 503 …"
message. Generic 503s (upstream-busy, etc.) keep the legacy
path so existing busy-retry UX isn't disrupted.
Canvas (page.tsx):
- New PlatformDownDiagnostic component renders when the
initial hydration catches PlatformUnavailableError.
Surfaces the actual condition with operator-actionable
copy ("brew services start postgresql@14 / redis") +
pointer to the platform log + a Reload button.
Tests:
- Go: TestAdminAuth_DatastoreError_Returns503PlatformUnavailable
pins the response shape (status, code field, no DB-error leak)
- Canvas: 5 tests for PlatformUnavailableError classification —
typed throw on 503+code match, generic-Error fallback for
503-without-code (upstream busy), 500 stays generic, non-JSON
body falls back to generic.
1015 canvas tests + full Go middleware suite pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Convergence-pass review noted the comment at orphan_sweeper.go:171
still describes the pre-cb126014 contract ("Stop returns nil even
when container is gone, but a future change could surface real
errors"). The future is now — Stop does surface real errors today.
Tightened the comment to match the live contract:
isContainerNotFound is treated as success, anything else returns
the wrapped Docker error, sweeper retries on the next cycle.
Pure comment change, no behavior diff.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Review caught a critical issue with 12c49183: the headline "skip
RemoveVolume when Stop fails" guarantee was dead code. `Provisioner.Stop`
unconditionally `return nil`'d after logging the underlying
ContainerRemove error, so the new `if err := h.provisioner.Stop(...);
err != nil { skip volume }` guard in workspace_crud.go AND the same
guard in the orphan sweeper could never fire. RemoveVolume always
ran, predictably failing with "volume in use" when Stop hadn't
actually killed the container — which is the exact production bug
the commit claimed to fix.
Now Stop:
- returns nil on successful remove (no change)
- returns nil when the container is already gone (uses the existing
isContainerNotFound helper — that's the cleanup post-condition,
not a failure)
- returns the wrapped Docker error otherwise (daemon timeout, ctx
cancellation, socket EOF — anything that means the container
might still be alive)
Audited every Provisioner.Stop caller in the tree (team.go,
workspace_restart.go ×4, workspace.go) — all of them already
discard the return value, so the widened error surface is purely
opt-in for the new cleanup paths and breaks no existing behaviour.
Other review-driven fixes in this commit:
- workspace_crud.go: detached `broadcaster.RecordAndBroadcast` from
the request ctx too. RecordAndBroadcast does INSERT INTO
structure_events + Redis Publish; if the canvas hangs up, a
request-ctx-bound INSERT can be cancelled mid-write and the
WORKSPACE_REMOVED event never lands, leaving other WS clients
ignorant of the cascade.
- orphan_sweeper.go: added isLikelyWorkspaceID guard before turning
Docker container prefixes into SQL LIKE patterns. The Docker
name filter is a SUBSTRING match (not prefix), so non-workspace
containers like `my-ws-tool` slip through; the in-loop HasPrefix
in provisioner trims most, but the in-sweeper alphabet check
(hex + dashes only) is the second line of defence and also
blocks SQL LIKE wildcards (`_`, `%`) from reaching the query.
Two new tests pin this — TestSweepOnce_FiltersNonWorkspacePrefixes
and TestIsLikelyWorkspaceID with 10 alphabet cases.
- provisioner.go: comment added to ListWorkspaceContainerIDPrefixes
flagging the substring/HasPrefix relationship as load-bearing.
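The alphabet guard reads roughly like this (an illustrative re-implementation of the hex + dashes check, not the sweeper's exact code):

```go
package main

import "fmt"

// isLikelyWorkspaceID: hex digits and dashes only, so non-workspace
// container names and SQL LIKE wildcards (`_`, `%`) never reach the
// LIKE query built from container-name prefixes.
func isLikelyWorkspaceID(s string) bool {
	if s == "" {
		return false
	}
	for _, r := range s {
		switch {
		case r >= '0' && r <= '9':
		case r >= 'a' && r <= 'f':
		case r == '-':
		default:
			return false
		}
	}
	return true
}

func main() {
	for _, s := range []string{"0a1b-2c3d", "my-ws-tool", "ab%", "a_b"} {
		fmt.Println(s, isLikelyWorkspaceID(s))
	}
}
```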
Verified: full Go test suite passes; all 8 sweeper tests pass
(2 new for the LIKE-pattern guard); existing dispatch / delete /
provisioner tests unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Symptom: deleting workspaces from the canvas marked DB rows
status='removed' but left Docker containers running indefinitely.
After a session of org imports + cancellations, we counted 10
running ws-* containers all backed by 'removed' DB rows, eating
~1100% CPU on the Docker VM.
Two compounding bugs in handlers/workspace_crud.go's delete cascade:
1. The cleanup loop used `c.Request.Context()` for the Docker
stop/remove calls. When the canvas's `api.del` resolved on the
platform's 200, gin cancelled the request ctx — and any in-flight
Docker call cancelled with `context canceled`, leaving the
container alive. Old logs:
"Delete descendant <id> volume removal warning:
... context canceled"
2. `provisioner.Stop`'s error return was discarded and `RemoveVolume`
ran unconditionally afterward. When Stop didn't actually kill the
container (transient daemon error, ctx cancellation as in #1), the
volume removal would predictably fail with "volume in use" and
the container kept running with the volume mounted. Old logs:
"Delete descendant <id> volume removal warning:
Error response from daemon: remove ... volume is in use"
Fix layered in two parts:
- workspace_crud.go: detach cleanup with `context.WithoutCancel(ctx)`
+ a 30s bounded timeout. Stop's error is now checked and on
failure we skip RemoveVolume entirely (the orphan sweeper below
catches what we deferred).
- New registry/orphan_sweeper.go: periodic reconcile pass (every 60s,
initial run on boot). Lists running ws-* containers via Docker name
filter, intersects with DB rows where status='removed', stops +
removes volumes for the leaks. Defence in depth — even a brand-new
Stop failure mode heals on the next sweep instead of leaking
forever.
Provisioner gains a tiny ListWorkspaceContainerIDPrefixes helper
that wraps ContainerList with the `name=ws-` filter; the sweeper
takes an OrphanReaper interface (matches the ContainerChecker
pattern in healthsweep.go) so unit tests don't need a real Docker
daemon.
main.go wires the sweeper alongside the existing liveness +
health-sweep + provisioning-timeout monitors, all under
supervised.RunWithRecover so a panic restarts the goroutine.
6 new sweeper tests cover the reconcile path, the
no-running-containers short-circuit, the daemon-error skip, the
Stop-failure-leaves-volume invariant (the same trap that motivated
this fix), the volume-remove-error-is-non-fatal continuation,
and the nil-reaper no-op.
Verified: full Go test suite passes; manually purged the 10 leaked
containers + their orphan volumes from the dev host with `docker
rm -f` + `docker volume rm` (one-off cleanup; the sweeper would
have caught them on the next cycle once deployed).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Round-2 review of the wedge/idle/progress bundle came back Approve
with 4 optional polish items. All taken:
1. Migration 043 down file gained `SET LOCAL lock_timeout = '5s'`
matching the up file. A rollback under the same load that
motivated the up-file guard would otherwise stall writers.
2. _clear_sdk_wedge_on_success now gates on actual stream content
(result_text or assistant_chunks). A degenerate "iterator
returned without raising but emitted nothing" case (possible
from a partial stream or stub SDK) no longer falsely advertises
recovery — only a real successful query (≥1 ResultMessage or
AssistantMessage TextBlock) clears the wedge.
3. isUpstreamBusyError dropped the redundant
`strings.Contains(msg, "context deadline exceeded")` fallback.
*url.Error.Unwrap propagates the typed sentinel since Go 1.13;
errors.Is(err, context.DeadlineExceeded) catches the real
net/http shape. The substring was a foot-gun (would also match
user-content with that phrase). Test fixture updated to use
`fmt.Errorf("Post: %w", context.DeadlineExceeded)` which
reflects what net/http actually returns.
4. TestIsUpstreamBusyError added a context.Canceled case (both
typed and wrapped via %w) — pins the new applyIdleTimeout
classification.
No critical/required findings on second pass; reviewer verdict was
Approve. Items above are polish for symmetry and test clarity.
1010 canvas + 64 Python + full Go suites pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Bundle review of pieces 1/2/3 surfaced two critical issues plus a
handful of required + optional fixes. All addressed.
Critical:
1. Migration 043 was missing 'paused' and 'hibernated' from the
workspace_status enum. Both are real production statuses written
by workspace_restart.go (lines 283 and 406), introduced by
migration 029_workspace_hibernation. The original `USING
status::workspace_status` cast would have errored mid-transaction
on any production DB containing those values. Added both. Also
added `SET LOCAL lock_timeout = '5s'` so the migration aborts
instead of stalling the workspace fleet behind a slow SELECT.
2. The chat activity-feed window kept only 8 lines, and a single
multi-tool turn (Read 5 files + Grep + Bash + Edit + delegate)
easily flushed older context before the user could read it.
Extracted appendActivityLine to chat/activityLog.ts with a
20-line window AND consecutive-duplicate collapse (same tool
on the same target twice in a row is noise, not new progress).
5 unit tests pin the behavior.
Required:
3. The SDK wedge flag was sticky-only — a single transient
Control-request-timeout from a flaky network blip locked the
workspace into degraded for the whole process lifetime, even
when the next query() would have succeeded. Added
_clear_sdk_wedge_on_success(), called from _run_query's success
path. The next heartbeat after a working query reports
runtime_state empty and the platform recovers the workspace to
online without a manual restart. New regression test.
4. _report_tool_use now sets target_id = WORKSPACE_ID for self-
actions, matching the convention other self-logged activity
rows use. DB consumers joining on target_id see a well-defined
value instead of NULL.
Optional taken:
5. Tightened _WEDGE_ERROR_PATTERNS from "control request timeout"
to "control request timeout: initialize" — suffix-anchored so a
future SDK error on an in-flight tool-call control message
doesn't get misclassified as the unrecoverable post-init wedge.
6. Dropped the redundant "context canceled" substring fallback in
isUpstreamBusyError. errors.Is(err, context.Canceled) is the
typed check; the substring would also match healthy client-side
aborts, which we don't want classified as upstream-busy.
Verified: 1010 canvas tests + 64 Python tests + full Go suite pass;
migration applies cleanly on dev DB with all 8 enum values; reverse
migration restores TEXT.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous canvas-default 5-min absolute deadline pre-empted any
chat that legitimately ran longer (multi-turn tool use, large
synthesis tasks) and made every wedged-SDK call burn 5 full minutes
before the user saw anything. Replaced with a per-dispatch idle
timeout: cancel the request only when the broadcaster has been
silent for `idleTimeoutDuration` (60s). Any progress event for the
workspace — agent_log tool-use rows, task_update, a2a_send,
a2a_receive — resets the clock.
Mechanics:
- new applyIdleTimeout helper subscribes to events.Broadcaster's
per-workspace SSE channel, drains its messages, resets a
time.Timer on each one, cancels the wrapped ctx when the timer
fires. The cleanup goroutine and subscription live only until
the returned cancel func is called.
- dispatchA2A now takes workspaceID as a parameter, applies the
idle timeout always (canvas + agent), and combines its cancel
with the existing 30-min agent-to-agent ceiling cancel into one
func the caller defers.
- Canvas dispatches no longer have an absolute ceiling at all —
the idle timer is the only "give up" signal. A healthy chat
reporting tool-use telemetry every few seconds runs forever;
a wedged runtime fails in 60s instead of 5 min.
- isUpstreamBusyError now also recognises context.Canceled (the
error class our idle cancel produces, distinct from
DeadlineExceeded). Same 503-busy retry semantics.
Tests:
- TestApplyIdleTimeout_FiresOnSilence — 60ms idle, no events,
ctx cancels with context.Canceled.
- TestApplyIdleTimeout_ResetsOnEvent — event mid-window extends
the deadline; ctx alive past original deadline, then cancels
on the second silence window.
- TestApplyIdleTimeout_NilBroadcasterDegradesGracefully — defensive
no-op for paths that don't wire a broadcaster.
- 3 existing dispatchA2A tests updated for the new workspaceID
param + the always-non-nil cancel return shape.
This pairs with Piece 1's per-tool-use telemetry (166c7f77): the
broadcaster events that reset the idle timer ARE the agent_log
rows the workspace started emitting per tool call. So the same
event stream feeds both the chat progress feed AND the proxy's
deadline.
Full Go test suite passes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Heartbeat lies. The asyncio task that POSTs /registry/heartbeat lives
in its own process slot, so a workspace whose claude_agent_sdk has
wedged on `Control request timeout: initialize` keeps reporting
"online" — every chat send hangs the full 5-min platform deadline
even though the runtime is dead in the water. This commit teaches
the workspace to admit it's wedged and the platform to honor that
admission by flipping status → degraded.
Five layers, all in one commit because they share a contract:
1. Migration 043 — convert workspaces.status from free-form TEXT to
a real `workspace_status` Postgres ENUM with the 6 values
production code actually writes (provisioning, online, offline,
degraded, failed, removed). Locks the value set; future typo
writes error at the DB instead of silently storing rogue strings.
Down migration reverts to TEXT and drops the type.
2. workspace-server/internal/models — `HeartbeatPayload` gains a
`runtime_state string` field. Empty = healthy. Currently the only
non-empty value the handler honors is "wedged"; future symptoms
can extend without another migration.
3. workspace-server/internal/handlers/registry.go — `evaluateStatus`
gains a wedge branch BEFORE the existing error_rate >= 0.5 path:
if `RuntimeState=="wedged"` and currently online, flip to
degraded and broadcast WORKSPACE_DEGRADED with the wedge sample
error. Recovery (`degraded → online`) now requires BOTH
error_rate < 0.1 AND runtime_state cleared, so a workspace still
reporting wedged stays degraded even when its error count
happens to be 0 (the wedge captures a runtime state, not an
error count).
4. workspace/claude_sdk_executor.py — module-level `_sdk_wedged_reason`
flag set when execute()'s catch block sees an error matching
`_WEDGE_ERROR_PATTERNS` (currently just "control request
timeout"). Sticky for the process lifetime; the SDK's internal
client-process state is corrupted on this error and only a
workspace restart (= new Python process = fresh module state)
clears it. Helpers `is_wedged()` / `wedge_reason()` /
`_reset_sdk_wedge_for_test()` exposed.
5. workspace/heartbeat.py — heartbeat body now layers on
`_runtime_state_payload()` for both the happy path and the
401-retry path. Lazy-imports claude_sdk_executor so non-Claude
runtimes (where the module may not even be importable) keep
working unchanged.
Canvas required no changes — `STATUS_CONFIG.degraded` was already
defined in design-tokens.ts (amber dot, "Degraded" label) and
WorkspaceNode.tsx already renders `lastSampleError` underneath the
status pill when status === "degraded". The existing wiring just
never fired because nothing was writing degraded in this code path.
Tests:
- 3 Go handler tests for the new transitions (online → degraded on
wedged, degraded stays put while still wedged, degraded → online
after wedge clears)
- 5 Python wedge-detector tests (default clean, mark sets flag,
sticky-first-wins, execute() flips on Control request timeout,
execute() does NOT flip on unrelated errors)
- Migration smoke-tested against the local dev DB (3 existing rows,
all enum-compatible; migration applied cleanly, post-state has
the column as workspace_status type and the index preserved)
Verified: 79 Python tests pass; full Go test suite passes; migration
applies clean on a real DB; reverse migration restores the column to
TEXT.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent code review surfaced two required documentation fixes and
one growth-correctness gap. All addressed here.
Auto-fit gate (useCanvasViewport):
The previous "subtree-grew-by-count" check missed the delete-then-add
case: subtree of 6 → delete one → 5 → a different child arrives → 6
again. A length-only comparison reads no growth and the fit is
skipped, leaving the new node off-screen. Switched to an id-set
membership snapshot so any brand-new id forces the fit even when the
count is unchanged.
The gate logic is now extracted as a pure exported function
`shouldFitGrowing(currentIds, prevIds, userPannedAt, lastAutoFitAt)`
so the regression-prone decision can be unit-tested in isolation
without standing up React Flow + DOM event refs. 8 cases cover:
first-fit, empty-prior, brand-new id, status-update with user pan,
no-pan-ever, pan-before-last-fit, delete-then-add same length, and
shrink-only with user pan.
Parser parity (dotenv.go + next.config.ts):
Existing-env semantics were undocumented in both parsers. Both now
explicitly note that an explicitly-set empty string (`KEY=` from the
parent shell) counts as "set" — the file value does NOT backfill —
matching the Go (os.LookupEnv) and Node (`process.env[k] !==
undefined`) primitives.
`export ` prefix uses a literal space; `export\tFOO=bar` is
intentionally rejected. The same comment now appears in both
parsers, locking in the parity invariant the original commit
message promised: "if one parser changes, the other has to."
Skipped (per analysis):
- Respecting left-click drag-pan during deploy. The
growth-check safety net means any pan gets overridden on the
next arrival anyway, which is the desired behavior for the
"watch the org deploy" use case. After deploy completes, no
more fit-deploying-org events fire so drag-pan works freely.
- Map cleanup for lastFitSubtreeIdsRef. Per-tab session, UUID
keys, tiny entries — not worth the cleanup hook.
993 canvas tests pass (8 new); Go dotenv tests pass; tsc clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The repo's own .env contains lines like
CONFIGS_DIR= # Path to workspace-configs-templates/...
where the value is empty + an inline comment. The pre-fix parser:
1. v = " # Path to ..."
2. TrimLeft → "# Path to ..."
3. Inline-comment loop looked for " #" or "\t#" — neither matches
because the leading whitespace is gone.
4. Returned the comment text as the value.
Result: os.Setenv("CONFIGS_DIR", "# Path to ...") clobbered the auto-
discovery fallback. The TemplatesHandler then opened the comment as
a directory, ReadDir errored silently, and GET /templates returned
[]. Canvas's Templates panel showed "No templates found in
workspace-configs-templates/" even though 8 valid templates existed
on disk.
Fix: strip leading whitespace from the value FIRST, then run a
position-aware comment scan that treats `#` as a comment marker iff
it's at the start of the (trimmed) value or preceded by whitespace.
A bare `#` mid-value (e.g. `KEY=token#fragment`) still survives.
Quoted-value handling moved above the comment scan so
`KEY="value # not"` keeps the `#` as part of the value — pulled the
quote-detection into the same TrimLeft-then-check shape as the bare
path. The unterminated-quote case still falls through to bare-value
handling.
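The bare-value path of the fix can be sketched as (function name hypothetical; the real parser handles quoted values before this scan):

```go
package main

import (
	"fmt"
	"strings"
)

// parseValue: trim leading whitespace FIRST, then treat `#` as a comment
// marker only at the start of the trimmed value or when preceded by
// whitespace — a bare `#` mid-value survives.
func parseValue(raw string) string {
	v := strings.TrimLeft(raw, " \t")
	for i, r := range v {
		if r == '#' && (i == 0 || v[i-1] == ' ' || v[i-1] == '\t') {
			v = v[:i]
			break
		}
	}
	return strings.TrimRight(v, " \t")
}

func main() {
	fmt.Printf("%q\n", parseValue(" # Path to workspace-configs-templates/...")) // ""
	fmt.Printf("%q\n", parseValue("token#fragment"))                             // "token#fragment"
	fmt.Printf("%q\n", parseValue("value # trailing comment"))                   // "value"
}
```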
Three regression tests added covering the exact .env line that
broke (`CONFIGS_DIR= # ...`), spaces-only with comment, and tab-
only with comment.
Verified end-to-end: GET /templates now returns all 8 templates.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent code review surfaced three required fixes and one cheap
optional one. All addressed here.
dotenv parser:
- `export FOO=bar` was parsed as key `"export FOO"` (with embedded
space) and silently os.Setenv'd, so a developer pasting from a
direnv `.envrc` would get junk vars. Now strips the prefix.
- Quoted values weren't unwrapped: `FOO="hello world"` produced value
`"hello world"` with literal quotes. Now strips one matched pair of
surrounding `"` or `'`. Inside a quoted value `#` is part of the
value, not a comment marker (matches godotenv convention).
- UTF-8 BOM at file start (Windows editors) would have produced a
first key like U+FEFF + "FOO". Now stripped via TrimPrefix.
dotenv loader:
- findDotEnv()'s upward walk would happily pick up `~/.env` or a
sibling-repo `.env` if the binary was run from `~/Documents/other-
project/`. Real foot-gun on shared dev boxes. Now gated on a
monorepo sentinel: the candidate directory must contain
`workspace-server/go.mod`. Falls through to "no .env found" (=
pre-fix behavior) when the sentinel is absent.
socket fallback poll:
- startFallbackPoll() previously fired only on onclose, so the very
first connect attempt — when onclose hasn't fired yet because we
never had a successful onopen — left the canvas with no HTTP poll
for the duration of the failing handshake (Chrome can hold a
SYN-SENT WebSocket open ~75s before giving up). Now also called at
the top of connect(); the timer-already-running guard makes it a
no-op when one cycle later onclose calls it again.
Test coverage added: export prefix, single+double quoted values, hash
inside quotes preserved, unterminated quote falls back to bare value,
CRLF stripping locked in, BOM stripping, and a sentinel-rejection
regression test that creates a temp .env with no workspace-server
sibling and asserts findDotEnv refuses to load it.
Verified: 985 canvas tests + 30 dotenv subtests + 4 dotenv integration
tests all pass; tsc clean; rebuilt platform from monorepo root with
stripped env still loads .env (49 vars) and /workspaces returns 200.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Local dev runs (`/tmp/molecule-server` after `go build`) used to 401 on
/workspaces the moment the DB had any workspace token in it: the binary
inherited a bare shell env with no MOLECULE_ENV, so AdminAuth's dev
fail-open branch (gated on MOLECULE_ENV=development) didn't fire.
The repo's .env already has MOLECULE_ENV=development plus DATABASE_URL,
REDIS_URL, ADMIN_TOKEN=, etc. Until now you had to `set -a && source
.env` in the launching shell — a paper cut, but worse, it's a paper
cut in EVERY automated dev workflow (IDE run configs, integration
test harnesses, the smoke-test loop in this branch's manual testing).
Fix: cmd/server now walks upward from CWD looking for a .env (capped
at 6 levels) and merges KEY=VALUE pairs into os.Environ before any
other code reads env. Already-set vars win over file values, so
docker run -e / CI exports / `KEY=val ./binary` still dominate — only
unset keys get filled in.
Why no godotenv dep: the format we use is plain KEY=VALUE with `#`
comments, no interpolation, no quoting (verified against the live
.env: 49 kv lines, zero references to ${...} or `export`). A 30-line
parser is auditable and avoids supply-chain surface.
Why it's safe in production: Dockerfile doesn't COPY .env into the
image and .env is gitignored, so prod containers have no .env on
disk to load — the function's findDotEnv() loop finds nothing and
returns silently. If an operator deliberately drops one in, the
existing-env-wins rule means container-injected env still dominates.
Verified by booting `env -i HOME=$HOME PATH=$PATH /tmp/molecule-server`
from the repo root with a stripped env: log shows
".env: /Users/.../molecule-core/.env — loaded 49, 0 already set" and
/workspaces returns 200 instead of 401.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Introduces the UX A/B Lab org template — a 7-agent cell for rapid
landing-page variant generation. The template is also the first
consumer of the new any_of env schema (ANTHROPIC_API_KEY OR
CLAUDE_CODE_OAUTH_TOKEN), so it doubles as an end-to-end fixture
for that feature.
Canvas tree (all claude-code / sonnet):
Design Director
├── UX Researcher
├── Visual Designer
├── React Engineer
├── Deploy Engineer
├── A11y + SEO Auditor ← WCAG AA + canonical/noindex gate
└── Perf Auditor ← Core Web Vitals gate
Template files live in their own standalone repo
(Molecule-AI/molecule-ai-org-template-ux-ab-lab, to be published);
this change adds the manifest.json entry so fresh clones + CI
populate the template via scripts/clone-manifest.sh.
Tests:
- TestOrgTemplate_ClaudeAnyOfAuthPreflight — parses the exact
required_env / recommended_env shape the template ships with
via inline YAML (not on-disk, since org-templates/ is
gitignored in this monorepo) and verifies either member
alternative satisfies the preflight.
SEO safety built into the auditor's system prompt:
- One canonical variant; all others canonicalise to it.
- noindex, follow on non-canonical variants.
- Sitemap contains only the canonical URL.
- No robots.txt disallow (blocked pages can't emit canonical).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Extends the org-import env preflight so a template can declare an
alternative: satisfy ANY one member to pass. Motivated by the
Claude-family node case where either ANTHROPIC_API_KEY or
CLAUDE_CODE_OAUTH_TOKEN unlocks the agent — forcing both was wrong.
Server (workspace-server):
- New EnvRequirement union type with custom YAML + JSON
(un)marshaling. Accepts scalar (strict) or {any_of: [...]} in
both on-disk org.yaml and inline POST /org/import bodies.
- collectOrgEnv now returns []EnvRequirement. Dedups groups by
sorted-member signature. "Strict wins" pruning drops any-of
groups that mention a name already declared strictly (same
tier and cross-tier).
- Import preflight uses EnvRequirement.IsSatisfied — scalar =
exact match, group = any member present.
- Empty any_of: [] rejected at parse time (never-satisfiable).
- 14 handler tests (6 updated for the union shape, 8 new
covering any-of satisfaction, dedup, strict-dominates-group,
cross-tier pruning, invalid-member filtering, YAML round-trip,
and empty-any-of rejection).
Canvas:
- EnvRequirement = string | {any_of: string[]} with envReqMembers,
envReqSatisfied, envReqKey helpers.
- OrgImportPreflightModal renders strict rows and any-of groups
via a new AnyOfEnvGroup sub-component: "Configure any one"
banner, per-member input, ✓-satisfied indicator, and dimmed
siblings once any member is configured so the user can still
switch providers.
- TemplatePalette.OrgTemplate.required_env / recommended_env
retyped to EnvRequirement[]; passthrough to the modal
unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Problem
Two issues in the external-workspace path:
1. `knownRuntimes` was a hardcoded Go map that drifted from
manifest.json — e.g. `gemini-cli` was in manifest but missing
from the Go allowlist, so any workspace provisioning with
runtime=gemini-cli got silently coerced to langgraph.
2. No end-to-end "bring your own compute" story. The canvas UI
had no way to pick runtime=external; the partial backend code
required the operator to already have a URL ready (chicken-and-
egg with the agent that doesn't exist yet), and no
workspace_auth_token was minted, so the external agent couldn't
authenticate its register call.
## Change
### Runtime registry driven by manifest.json
- New `runtime_registry.go` reads `manifest.json` at service init.
Each `workspace_templates[].name` becomes a runtime identifier
(with the `-default` suffix stripped so `claude-code-default`
and `claude-code` resolve to the same runtime).
- `external` is always injected (no template repo exists for it).
- Falls back to a static map on manifest load failure so tests /
dev containers keep working.
- 5 new tests including a real-manifest sanity check.
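The name-to-runtime mapping can be sketched as (a sketch only — the real registry parses manifest.json at service init):

```go
package main

import (
	"fmt"
	"strings"
)

// buildRuntimeSet: each template name (with a trailing "-default"
// stripped) becomes a runtime id, and "external" is always injected
// since no template repo backs it.
func buildRuntimeSet(templateNames []string) map[string]bool {
	runtimes := map[string]bool{"external": true}
	for _, name := range templateNames {
		runtimes[strings.TrimSuffix(name, "-default")] = true
	}
	return runtimes
}

func main() {
	rt := buildRuntimeSet([]string{"claude-code-default", "langgraph", "gemini-cli"})
	fmt.Println(rt["claude-code"], rt["gemini-cli"], rt["external"])
}
```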
### First-class external workspace flow
When `POST /workspaces` is called with `runtime: "external"` AND
no URL supplied:
1. Workspace row inserted with `status='awaiting_agent'`
(distinct from `provisioning` so canvas doesn't trip its
provisioning-timeout UX).
2. A workspace_auth_token is minted via `wsauth.IssueToken`.
3. Response body includes a `connection` object with:
- `workspace_id`, `platform_url`, `auth_token`
- `registry_endpoint`, `heartbeat_endpoint`
- `curl_register_template` — zero-dep one-shot register snippet
- `python_snippet` — full SDK setup w/ heartbeat loop,
paired with molecule-sdk-python PR #13's A2AServer
4. The platform URL is resolved from `EXTERNAL_PLATFORM_URL` env
(ops-configurable per tenant) or falls back to request headers.
The legacy `payload.External` + `payload.URL` path is preserved —
org-import and other callers that already have a URL still work.
### Canvas UI
- New "External agent (bring your own compute)" checkbox in
CreateWorkspaceDialog.
- When checked, template/model/hermes-provider fields are hidden
and the POST body includes `runtime: "external"`.
- New `ExternalConnectModal` component: shown once after create,
renders Python / curl / raw-fields tabs with copy-to-clipboard
buttons. Stays mounted as a sibling of the create dialog so the
token survives the create dialog unmount.
- `auth_token` is interpolated into the snippet client-side so the
copied block is truly ready to run — operator only has to fill
in their agent's public URL.
## Tests
- Go: 5 new runtime_registry tests (happy path, -default strip,
external always injected, missing file, malformed JSON, real
manifest sanity). All existing handler tests still pass.
- TypeScript: no type errors on my files; pre-existing
canvas-batch-partial-failure type drift is on main already and
tracked on the #2061 branch.
## Follow-ups (filed separately)
- Cut molecule-sdk-python v0.y to PyPI so the snippet can use
`pip install molecule-ai-sdk` instead of `git+main`.
- Add a `runtime: string` field per template in manifest.json so
one template can declare its runtime explicitly (instead of
deriving it from name conventions). Unblocks N-templates-per-
runtime (e.g. hermes-minimax, hermes-anthropic both runtime=hermes).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Builds on #2061. Three internally-cohesive sub-features; easiest to
read in order.
## 1. Org-level env preflight
Server
- `OrgTemplate` + `OrgWorkspace` gain `required_env: string[]` and
`recommended_env: string[]` YAML fields.
- `GET /org/templates` walks the tree and returns the tree-union
(deduped, sorted) of both. `collectOrgEnv` dedup prefers required
when the same key is declared at both tiers.
- `POST /org/import` preflights against `global_secrets` WHERE
`octet_length(encrypted_value) > 0` (empty-value rows used to be
counted as "configured" and the per-container preflight still
failed at start time). 412 Precondition Failed + `missing_env`
list when required keys are absent. `force=true` bypasses with
an audit log line. DB lookup failure now returns 500 (was:
silent fall-through that defeated the guard). Env-var NAMES
validated against `^[A-Z][A-Z0-9_]{0,127}$` so a malicious
template can't ship pathological names into the UI or DB.
Canvas
- New `OrgImportPreflightModal`: red "Required" section (blocking)
and yellow "Recommended" section (non-blocking, import stays
enabled, shows live missing-count next to the Import button).
- Per-key password input → `PUT /settings/secrets` → strike-through
on save. Functional `setDrafts` throughout (no stale-closure
clobbers on rapid successive saves). `useEffect` seed keyed on a
sorted-join string signature so a parent re-render with a new
array identity doesn't clobber typed inputs.
- `TemplatePalette.handleImport` branches: zero env declarations →
straight to import; any declarations → fetch configured global
secret keys, open the modal.
Tests (Go): `TestCollectOrgEnv_*` (5) cover union-across-levels,
required-wins-over-recommended (including same-struct), dedup,
empty, invalid-name rejection.
## 2. EmptyState parity with TemplatePalette
The "Deploy your first agent" grid used to call `POST /workspaces`
with no preflight while the sidebar palette ran
`checkDeploySecrets` + `MissingKeysModal` first. Same template
deployed two different ways → first-run users saw containers boot
in `failed` state without guidance. Now both surfaces share one
preflight + modal handshake.
EmptyState's previous `interface Template` dropped `runtime`,
`models`, and `required_env` — silently discarding exactly the
fields the preflight needs. `Template` now lives in
`deploy-preflight.ts` and is imported from there by both surfaces.
## 3. useTemplateDeploy hook
With the preflight + modal wiring now duplicated across
EmptyState + TemplatePalette + (going forward) any third surface,
extracted the pattern into `canvas/src/hooks/useTemplateDeploy.tsx`:
const { deploy, deploying, error, modal } = useTemplateDeploy({
  canvasCoords: ...,  // optional, default random
  onDeployed: (id) => ...,
});
Closes three drift surfaces that the duplication had created:
- `resolveRuntime` id→runtime fallback table (moved to
`deploy-preflight.ts`). EmptyState had a narrower fallback that
would have silently disagreed with the palette on any future id
needing a non-identity mapping.
- `checkDeploySecrets` call signature. One owner.
- `MissingKeysModal` JSX wiring. One owner.
Narrow try/catch around `checkDeploySecrets` so a preflight network
failure clears `deploying` and surfaces via `setError` instead of
stranding the button forever. `modal: ReactNode` (not a
`renderModal()` function) — the previous memoization bought
nothing since consumers called it inline every render. Named
`MissingKeysInfo` interface for the state shape.
## 4. Viewport auto-fit user-pan gate fix
During org deploy the canvas was meant to pan+zoom to follow each
arriving workspace (`molecule:fit-deploying-org` event → debounced
fitView). In practice the fit stayed stuck on wherever the first
fit landed.
Root cause: React Flow v12 fires `onMoveEnd` with a truthy `event`
at the END of a programmatic `fitView` animation. The original
"respect-user-pan" gate stamped `userPannedAtRef` in `onMoveEnd`,
so our own fit completing looked like a user pan, and every
subsequent auto-fit short-circuited for the rest of the deploy.
Fix: stop trusting `onMoveEnd` for user-intent detection. Register
explicit `wheel` + `pointerdown` listeners on `document` with
capture phase and `target.closest('.react-flow__pane')` filter.
The capture phase is immune to `stopPropagation`; the pane filter
rejects toolbar / modal / side-panel clicks (the old `window`
fallback caught those). `onMoveEnd` simplified to only drive the
debounced viewport save.
Also: fit event dispatched on root arrivals (not just children),
so the canvas centers on the just-landed root immediately instead
of waiting ~2s for the first child. Animation 600ms → 400ms so
successive per-arrival fits don't pile up visually. End-state fit
stays at 1200ms — intentional asymmetry ("settling" vs
"tracking"), documented in code.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Merge origin/staging into fix/canvas-multilevel-layout-ux. 18 files
auto-merged (mostly canvas/tabs/chat and workspace-server handlers;
the earlier DIRTY marker was stale relative to current staging).
- Fix 7 test failures surfaced by the merge:
1. Canvas.pan-to-node.test.tsx — mockGetIntersectingNodes was
inferred as vi.fn(() => never[]); mockReturnValueOnce of a node
object failed type check. Explicit return-type annotation.
2. Canvas.pan-to-node.test.tsx + Canvas.a11y.test.tsx — Canvas.tsx
reads deletingIds.size (new multilevel-layout state). Both mock
stores lacked deletingIds; added new Set<string>() to each.
3. canvas-batch-partial-failure.test.ts — makeWS() built a wire-
format WorkspaceData (snake_case, with x/y/uptime_seconds). The
store's node.data is now WorkspaceNodeData (camelCase, no wire-
only fields). Rewrote makeWS to produce WorkspaceNodeData and
updated 5 call-site casts. No assertions changed.
4. ConfigTab.hermes.test.tsx — two tests pinned pre-#2061 behavior
that the PR intentionally inverts:
a. "shows hermes-specific info banner" — RUNTIMES_WITH_OWN_CONFIG
now contains only {"external"}, so the banner is no longer
shown for hermes. Inverted assertion: now pins ABSENCE of
the banner, with a comment noting the inversion.
b. "config.yaml runtime wins over DB" — priority reversed:
DB is now authoritative so the tier-on-node badge matches
the form. Inverted scenario: DB=hermes + yaml=crewai →
form shows hermes. Switched test's DB runtime off langgraph
because the dropdown collapses langgraph into an empty-
valued "default" option that would hide the win signal.
- No production code changed — this commit is staging merge + test
realignment only. 953/953 canvas tests pass. tsc --noEmit clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Session's accumulated UX work across frontend and platform. Reviewable
in four logical sections — diff is large but internally cohesive
(each section fixes a gap the next one depends on).
## Chat attachments — user ↔ agent file round trip
- New POST /workspaces/:id/chat/uploads (multipart, 50 MB total /
25 MB per file, UUID-prefixed storage under
/workspace/.molecule/chat-uploads/).
- New GET /workspaces/:id/chat/download with RFC 6266 filename
escaping and binary-safe io.CopyN streaming.
- Canvas: drag-and-drop onto chat pane, pending-file pills,
per-message attachment chips with fetch+blob download (anchor
navigation can't carry auth headers).
- A2A flow carries FileParts end-to-end; hermes template executor
now consumes attachments via platform helpers.
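The download endpoint's filename escaping can be sketched like this; this is a hypothetical helper, not the real implementation, showing the standard RFC 6266 shape of a sanitized ASCII `filename=` fallback plus an RFC 5987 `filename*=` parameter carrying the exact UTF-8 name:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// contentDisposition builds an RFC 6266 header value: quotes,
// backslashes, control bytes, and non-ASCII runes are replaced in the
// quoted fallback, while filename* carries the percent-encoded UTF-8.
func contentDisposition(name string) string {
	ascii := strings.Map(func(r rune) rune {
		if r < 0x20 || r > 0x7e || r == '"' || r == '\\' {
			return '_'
		}
		return r
	}, name)
	return fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`,
		ascii, url.PathEscape(name))
}

func main() {
	fmt.Println(contentDisposition(`résumé "v2".pdf`))
}
```

A conforming browser prefers `filename*` and shows the exact name; older clients fall back to the underscored ASCII form.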
## Platform attachment helpers (workspace/executor_helpers.py)
Every runtime's executor routes through the same helpers so future
runtimes inherit attachment awareness for free:
- extract_attached_files — resolve workspace:, file://, and bare
URIs, reject traversal, skip non-existent files.
- build_user_content_with_files — manifest for non-image files,
multi-modal list (text + image_url) for images. Respects
MOLECULE_DISABLE_IMAGE_INLINING for providers whose vision
adapter hangs on base64 payloads (MiniMax M2.7).
- collect_outbound_files — scans agent reply for /workspace/...
paths, stages each into chat-uploads/ (download endpoint
whitelist), emits as FileParts in the A2A response.
- ensure_workspace_writable — called at molecule-runtime startup
so non-root agents can write /workspace without each template
having to chmod in its Dockerfile.
Hermes template executor + langgraph (a2a_executor.py) + claude-code
(claude_sdk_executor.py) all adopt the helpers.
## Model selection & related platform fixes
- PUT /workspaces/:id/model — was 404'ing, so canvas "Save"
silently lost the model choice. Stores into workspace_secrets
(MODEL_PROVIDER), auto-restarts via RestartByID.
- applyRuntimeModelEnv falls back to envVars["MODEL_PROVIDER"]
so Restart propagates the stored model to HERMES_DEFAULT_MODEL
without needing the caller to rehydrate payload.Model.
- ConfigTab Tier dropdown now reads from workspaces row, not the
(stale) config.yaml — fixes "badge shows T3, form shows T2".
## ChatTab & WebSocket UX fixes
- Send button no longer locks after a dropped TASK_COMPLETE —
`sending` no longer initializes from data.currentTask.
- A2A POST timeout 15 s → 120 s. LLM turns routinely exceed 15 s;
the previous default aborted fetches while the server was still
replying, producing "agent may be unreachable" on success.
- socket.ts: disposed flag + reconnectTimer cancellation + handler
detachment fix zombie-WebSocket in React StrictMode.
- Hermes Config tab: RUNTIMES_WITH_OWN_CONFIG drops 'hermes' —
the adapter's purpose IS the form, so the banner was contradictory.
- workspace_provision.go auto-recovery: try <runtime>-default AND
bare <runtime> for template path (hermes lives at the bare name).
## Org deploy/delete animation (theme-ready CSS)
- styles/theme-tokens.css — design tokens (durations, easings,
colors). Light theme overrides by setting only the deltas.
- styles/org-deploy.css — animation classes + keyframes, every
value references a token. prefers-reduced-motion respected.
- Canvas projects node.draggable=false onto locked workspaces
(deploying children AND actively-deleting ids) — RF's
authoritative drag lock; useDragHandlers retains a belt-and-
braces check.
- Org cancel button (red pulse pill on root during deploy)
cascades via existing DELETE /workspaces/:id?confirm=true.
- Auto fit-view after each arrival, debounced 500 ms so rapid
sibling arrivals coalesce into one fit (previous per-event
fit made the viewport lurch continuously).
- Auto-fit respects user-pan — onMoveEnd stamps a user-pan
timestamp only when event !== null (ignores programmatic
fitView) so auto-fits don't self-cancel.
- deletingIds store slice + useOrgDeployState merge gives the
delete flow the same dim + non-draggable treatment as deploy.
- Platform-level classNames.ts shared by canvas-events +
useCanvasViewport (DRY'd 3 copies of split/filter/join).
## Server payload change
- org_import.go WORKSPACE_PROVISIONING broadcast now includes
parent_id + parent-RELATIVE x/y (slotX/slotY) so the canvas
renders the child at the right parent-nested slot without doing
any absolute-position walk. createWorkspaceTree signature gains
relX, relY alongside absX, absY; both call sites updated.
## Tests
- workspace/tests/test_executor_helpers.py — 11 new cases
covering URI resolution (including traversal rejection),
attached-file extraction (both Part shapes), manifest-only
vs multi-modal content, large-image skip, outbound staging,
dedup, and ensure_workspace_writable (chmod 777 + non-root
tolerance).
- workspace-server chat_files_test.go — upload validation,
Content-Disposition escaping, filename sanitisation.
- workspace-server secrets_test.go — SetModel upsert, empty
clears, invalid UUID rejection.
- tests/e2e/test_chat_attachments_e2e.sh — round-trip against
a live hermes workspace.
- tests/e2e/test_chat_attachments_multiruntime_e2e.sh — static
plumbing check + round-trip across hermes/langgraph/claude-code.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Lark adapter was already implemented in Go (lark.go — outbound Custom Bot
webhook + inbound Event Subscriptions with constant-time token verify),
but the Canvas connect-form hardcoded a Telegram-shaped pair of inputs
(bot_token + chat_id). Selecting "Lark / Feishu" from the dropdown
silently sent the wrong field names — there was no way to enter a
webhook URL.
Fix: move form shape to the server.
- Add `ConfigField` struct + `ConfigSchema()` method to the
`ChannelAdapter` interface. Each adapter declares its own fields with
label/type/required/sensitive/placeholder/help.
- Implement per-adapter schemas:
- Lark: webhook_url (required+sensitive) + verify_token (optional+sensitive)
- Slack: bot_token/channel_id/webhook_url/username/icon_emoji
- Discord: webhook_url + optional public_key
- Telegram: bot_token + chat_id (unchanged UX, keeps Detect Chats)
- Change `ListAdapters()` to return `[]AdapterInfo` with config_schema
inline. Sorted deterministically by display name so UI ordering is
stable across Go's random map iteration.
- Update the 3 existing `ListAdapters` test sites to struct access.
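The server-owned form shape can be sketched as follows; struct and method names mirror the commit description but the exact fields and adapter wiring are assumptions:

```go
package main

import (
	"fmt"
	"sort"
)

// ConfigField is one declared form input; Sensitive fields render as
// password inputs on the Canvas side.
type ConfigField struct {
	Key         string `json:"key"`
	Label       string `json:"label"`
	Type        string `json:"type"`
	Required    bool   `json:"required"`
	Sensitive   bool   `json:"sensitive"`
	Placeholder string `json:"placeholder,omitempty"`
	Help        string `json:"help,omitempty"`
}

type ChannelAdapter interface {
	DisplayName() string
	ConfigSchema() []ConfigField
}

type AdapterInfo struct {
	DisplayName  string        `json:"display_name"`
	ConfigSchema []ConfigField `json:"config_schema"`
}

type lark struct{}

func (lark) DisplayName() string { return "Lark / Feishu" }
func (lark) ConfigSchema() []ConfigField {
	return []ConfigField{
		{Key: "webhook_url", Label: "Webhook URL", Type: "password", Required: true, Sensitive: true},
		{Key: "verify_token", Label: "Verify Token", Type: "password", Sensitive: true},
	}
}

// ListAdapters sorts by display name so UI ordering is stable despite
// Go's randomized map iteration order.
func ListAdapters(registry map[string]ChannelAdapter) []AdapterInfo {
	out := make([]AdapterInfo, 0, len(registry))
	for _, a := range registry {
		out = append(out, AdapterInfo{DisplayName: a.DisplayName(), ConfigSchema: a.ConfigSchema()})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].DisplayName < out[j].DisplayName })
	return out
}

func main() {
	infos := ListAdapters(map[string]ChannelAdapter{"lark": lark{}})
	fmt.Println(infos[0].ConfigSchema[0].Key, infos[0].ConfigSchema[0].Required)
}
```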
Canvas (`ChannelsTab.tsx`):
- Replace the two hardcoded bot_token/chat_id inputs with a single
schema-driven `SchemaField` component. Renders one input per field in
the order the adapter returns them.
- Form state becomes `formValues: Record<string,string>` keyed by
`ConfigField.key`. Values reset on platform-switch so stale
Telegram credentials can't leak into a new Lark channel.
- "Detect Chats" stays but only renders for platforms in
`SUPPORTS_DETECT_CHATS` (Telegram only — the only provider with
getUpdates).
- Only schema-known keys are posted in `config`, scrubbing any stale
values from previous platform selections.
Regression tests:
- `TestLark_ConfigSchema` locks in the 2-field Lark contract with the
required/sensitive flags correctly set.
- `TestListAdapters_IncludesLark` confirms registry wiring + schema
survives round-trip through ListAdapters.
Known pre-existing `TestStripPluginMarkers_AwkScript` failure in
internal/handlers is unrelated to this change (verified via stash+test
on clean staging).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Ships the monorepo side of molecule-core#1957 (agent identity collapse).
Companion to molecule-ai-plugin-gh-identity (new repo, merged-and-tagged
separately).
Changes:
- manifest.json: add gh-identity plugin to Tier 1 registry
- workspace-server/go.mod: require github.com/Molecule-AI/molecule-ai-plugin-gh-identity
- cmd/server/main.go: build a shared provisionhook.Registry, register
gh-identity first (always), then github-app-auth (gated on GITHUB_APP_ID)
- workspace_provision.go: propagate workspace.Role into
env["MOLECULE_AGENT_ROLE"] before calling the mutator chain, so the
gh-identity plugin can see which agent is booting
- provisionhook/mutator.go: add Registry.Mutators() accessor so
individual-plugin registries can be merged onto a shared one at boot
Boot log gains a line like:
env-mutator chain: [gh-identity github-app-auth]
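The shared registry and mutator chain can be sketched roughly like this; interface and method names are assumptions inferred from the commit text, not the real `provisionhook` API:

```go
package main

import "fmt"

// EnvMutator is one provision-time plugin hook.
type EnvMutator interface {
	Name() string
	Mutate(env map[string]string) error
}

type Registry struct{ mutators []EnvMutator }

func (r *Registry) Register(m EnvMutator)  { r.mutators = append(r.mutators, m) }
func (r *Registry) Mutators() []EnvMutator { return r.mutators }

// Merge copies another registry's mutators onto this one — the stated
// purpose of the Mutators() accessor at boot.
func (r *Registry) Merge(other *Registry) {
	r.mutators = append(r.mutators, other.Mutators()...)
}

// Apply runs the chain in registration order over the workspace env.
func (r *Registry) Apply(env map[string]string) error {
	for _, m := range r.mutators {
		if err := m.Mutate(env); err != nil {
			return fmt.Errorf("env-mutator %s: %w", m.Name(), err)
		}
	}
	return nil
}

type ghIdentity struct{}

func (ghIdentity) Name() string { return "gh-identity" }
func (ghIdentity) Mutate(env map[string]string) error {
	if env["MOLECULE_AGENT_ROLE"] == "" {
		return nil // absent role: plugin is a no-op, existing workspaces unaffected
	}
	env["MOLECULE_ATTRIBUTION_BADGE"] = "agent:" + env["MOLECULE_AGENT_ROLE"]
	return nil
}

func main() {
	shared := &Registry{}
	shared.Register(ghIdentity{}) // registered first, always
	env := map[string]string{"MOLECULE_AGENT_ROLE": "reviewer"}
	_ = shared.Apply(env)
	fmt.Println(env["MOLECULE_ATTRIBUTION_BADGE"])
}
```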
Effect per workspace:
- env contains MOLECULE_AGENT_ROLE, MOLECULE_OWNER, MOLECULE_ATTRIBUTION_BADGE,
MOLECULE_GH_WRAPPER_B64, MOLECULE_GH_WRAPPER_SHA
- Each workspace template's install.sh can decode + install the wrapper at
/usr/local/bin/gh, intercepting @me assignment and prepending agent
attribution on PR/issue creates
Does not break existing workspaces — absent workspace.role, the plugin is
a no-op. Absent install.sh updates in each template, the env vars are
simply unused.
Follow-up template PRs (hermes, claude-code, langgraph, etc.) each add
~15 lines to install.sh to decode + install the wrapper.
Ref: #1957
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2021 follow-up: add TEST-NET reserved ranges and IPv6 documentation
prefix to validateAgentURL blocklist in all SaaS/self-hosted modes.
RFC 5737 reserves 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24 for
documentation and example code — no production agent has a legitimate
reason to use them. RFC 3849 designates 2001:db8::/32 as the IPv6
documentation prefix. All are blocked unconditionally.
Also adds 8 regression test cases covering each blocked range.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
P0 security: CanvasOrBearer final else branch aborts with 401 but
continues execution to c.Next() — allowing the downstream handler to
overwrite the 401 response. Regression tests added to verify the handler
is not called after AbortWithStatusJSON in both no-cred and wrong-origin
paths.
Confirmed on origin/main @ 69408ab6 and origin/staging @ 6b62391e.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two stalls in cycle 132 traced to the same root cause: activity_logs
INSERTs were wedging on invalid UTF-8 bytes (observed: 0xe2 0x80 0x2e)
and the surrounding DB operations had no deadlines, so a single stuck
transaction blocked wg.Wait() in tick() and stalled the whole scheduler
until a container restart.
Root cause: truncate() did byte-slicing without UTF-8 boundary checks.
A prompt containing U+2026 (`…` = 0xe2 0x80 0xa6) at byte ~197 was
sliced at maxLen-3, producing the trailing fragment 0xe2 0x80 followed
by '.' (0x2e) from the "..." suffix — Postgres rejects this as invalid
UTF-8 for jsonb, holds the transaction open, and the INSERT never
returns.
Fix:
- truncate(): UTF-8 safe — backs up to a rune boundary via utf8.RuneStart
- sanitizeUTF8(): new helper applied to every agent-produced string
before it crosses the DB boundary (prompt, error detail, schedule name)
- dbQueryTimeout = 10s on every scheduler DB call:
- tick() due-schedules query
- capacity-check queries in fireSchedule
- empty-run counter UPDATE / reset
- activity_logs INSERTs (fireSchedule + recordSkipped)
- recordSkipped bookkeeping UPDATE
- Bookkeeping writes use context.Background() parent (F1089 pattern)
so fireTimeout / shutdown cancellation can't silently skip the UPDATE.
Regression tests lock in the 0xe2 0x80 0x2e wedge: truncate() is
verified UTF-8-valid and never produces that byte sequence even when
input contains a multi-byte rune at the cut position.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The validateAgentURL function was missing several ranges from the always-
blocked list. In SaaS mode only link-local, loopback, and IPv6 metadata
were blocked — TEST-NET (192.0.2/24, 198.51.100/24, 203.0.113/24),
CGNAT (100.64.0.0/10), IPv4 multicast (224.0.0.0/4), and fc00::/8 (IPv6
ULA non-routable prefix) were allowed through.
These ranges are never valid agent URLs in any deployment:
- TEST-NET (RFC-5737): documentation-only, no real hosts
- CGNAT (RFC-6598): never used as VPC subnets on AWS/GCP/Azure
- IPv4 multicast: never a unicast agent endpoint
- fc00::/8: non-routable prefix (fd00::/8 stays allowed in SaaS mode)
Also tighten the non-SaaS ULA block: instead of blocking fc00::/7 (the
supernet covering both fc00 and fd00), split it into always-blocked
fc00::/8 (above) + non-SaaS-only fd00::/8. This makes the SaaS relaxation
explicit and auditable.
Fixes TestValidateAgentURL_SaaSMode_StillBlocksMetadataEtAl failure.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
PR #1885 introduced a regression: HandleConnect called wsauth.ValidateToken
for any bearer token when X-Workspace-ID ≠ workspaceID. Org-scoped tokens
(org_api_tokens table) are not in workspace_auth_tokens, so ValidateToken
always returned ErrInvalidToken for them → hard 401 for all A2A routing
that uses org tokens.
Fix: if WorkspaceAuth already validated an org token (org_token_id set in
gin context by orgtoken.Validate), skip the workspace_auth_tokens lookup and
trust the X-Workspace-ID claim. Hierarchy enforcement via canCommunicateCheck
is unchanged — org token holders are still subject to the workspace hierarchy.
Workspace-scoped tokens continue to require ValidateToken binding. Invalid
tokens (neither workspace-bound nor org-level) still return 401. This closes
the regression while preserving the KI-005 security property.
Add TestKI005_OrgToken_SkipsValidateToken to terminal_test.go as a regression
guard for this exact path.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Issue #1786: SSRF test gap — inner helpers (isPrivateOrMetadataIP,
validateAgentURL blockedRanges) were tested in isolation but the public
wrappers never called saasMode(), allowing the regression to pass unit
tests while production returned 502 on every A2A call from Docker/VPC
deployments (PR #1785).
Adds integration-level wrapper tests for both functions across all
saasMode() resolution ladder cases:
- SaaS explicit (MOLECULE_DEPLOY_MODE=saas): RFC-1918 + fd00 ULA allowed
- Strict mode (MOLECULE_DEPLOY_MODE=self-hosted): RFC-1918 blocked
- Legacy org-ID fallback (MOLECULE_ORG_ID set, no DEPLOY_MODE):
RFC-1918 + fd00 ULA allowed
- Always-blocked ranges (metadata, loopback, TEST-NET, CGNAT, fc00 ULA)
stay blocked in every mode
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two changes:
1. a2a_proxy.go: non-2xx agent responses now return a proxyErr so
DrainQueueForWorkspace calls MarkQueueItemFailed (not silently
marking completed). Previously, agent 5xx responses returned
(status, body, nil) and DrainQueueForWorkspace's final fallback
called MarkQueueItemCompleted for anything not 202/proxyErr.
Also extracts error string from JSON response body before
falling back to http.StatusText.
2. a2a_queue_test.go: fixes for broken queue drain tests:
- Switch to QueryMatcherEqual (exact string) from MatchSs (v1.5.2
API: QueryMatcherOption(QueryMatcherEqual))
- Add github.com/Molecule-AI/molecule-monorepo/platform/internal/db import
- drainSetup(t, workspaceID): registers budget-check expectation
via expectQueueBudgetCheck helper; callers call it AFTER
expectDequeueNextOk (DequeueNext runs before proxyA2ARequest)
- drainItem: use NULL CallerID so CanCommunicate is skipped
(avoids needing hierarchy mocks)
- add allowLoopbackForTest() so httptest.Server URLs pass SSRF guard
- Sequential claim-guarding test instead of concurrent goroutine
(sqlmock is not goroutine-safe for ordered expectations)
Also adds the nil-safe error extraction regression tests from
the original PR #2012 test plan.
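The nil-safe error extraction both commits lean on can be sketched as follows (a hypothetical helper: pull `"error"` from a JSON body only when present and a string, otherwise fall back to `http.StatusText` — never an unchecked `.(string)` assertion):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// agentErrMessage extracts a human-readable error from a non-2xx agent
// response. The checked type assertion is the regression guard: an
// absent or non-string "error" key must not panic.
func agentErrMessage(status int, body []byte) string {
	var payload map[string]any
	if json.Unmarshal(body, &payload) == nil {
		if msg, ok := payload["error"].(string); ok && msg != "" {
			return msg
		}
	}
	return http.StatusText(status)
}

func main() {
	fmt.Println(agentErrMessage(502, []byte(`{"error":"model overloaded"}`)))
	fmt.Println(agentErrMessage(503, []byte(`{"error":42}`))) // non-string: fallback
	fmt.Println(agentErrMessage(500, []byte(`not json`)))     // unparseable: fallback
}
```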
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Extends the skeletal a2a_queue_test.go from PR #1892 with:
- sqlmock-based tests for EnqueueA2A idempotency (ON CONFLICT DO NOTHING)
- Tests for DequeueNext (SELECT FOR UPDATE SKIP LOCKED, FIFO/priority order)
- Tests for MarkQueueItemCompleted and MarkQueueItemFailed (attempt bounding)
- DrainQueueForWorkspace nil-safe error extraction regression test: the
unchecked proxyErr.Response["error"].(string) type assertion in the
original Phase 1 caused a panic when the "error" key was absent or
non-string (GH incident). This test pins the defensive .(string)
guard and the fallback to http.StatusText.
- Priority constant ordering sanity checks.
- extractIdempotencyKey edge cases: malformed JSON, missing fields,
empty messageId, and the successful messageId extraction path.
Uses alicebob/miniredis for Redis setup matching the existing
setupTestRedis pattern in this package.
orgtoken.Validate() runs a synchronous UPDATE org_api_tokens SET
last_used_at after every successful auth scan. Tests were missing the
sqlmock ExpectExec for this call — the code discards the error
(_, _ = ExecContext) so CI passed, but ExpectationsWereMet() could
not detect a regression where the UPDATE was accidentally removed.
Adds strict mock expectations for all four WorkspaceAuth+org-token
test cases: SetsOrgIDContext, OrgIDNULL_DoesNotSetContext,
DBRowScanError_DoesNotPanic, and SetsAllContextKeys.
Fixes: GH#1774
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The test was passing "/old-file.txt" (with leading slash) which now triggers
the filepath.IsAbs guard in DeleteFile before the DB lookup, returning 400
instead of the expected 404. Use a relative path so the DB lookup is reached.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add filepath.IsAbs guard in DeleteFile BEFORE the leading-slash strip so that
absolute paths like "/etc/passwd" are rejected with 400 rather than silently
accepted after the prefix is stripped.
- Remove the null_byte sub-case from TestCWE78_DeleteFile_TraversalVariants —
httptest.NewRequest panics on \x00 in URLs (URL-layer concern, not handler).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Replace string concatenation with safe exec-form path construction in
two remaining locations in templates.go:
1. DeleteFile (container-running path):
- Before: `containerPath := "/configs/" + filePath` → `rm -rf containerPath`
- After: `rm -f filepath.Join("/configs", filePath)`
- Also tightens rm flag from -rf to -f (no recursive delete on a file endpoint)
2. SharedContext (container-running path, per-file cat loop):
- Before: `[]string{"cat", "/configs/" + relPath}`
- After: `[]string{"cat", "/configs", relPath}` (separate args, no shell join)
In both cases validateRelPath is already the primary guard (rejects traversal
inputs before reaching exec). filepath.Join / separate args is defence-in-depth
so that a bypass of validateRelPath cannot produce a dangerous concatenated path
in the exec argument list.
ReadFile was already fixed (PR #1885, merged to main at 12:08Z).
Regression tests added:
- TestCWE78_DeleteFile_TraversalVariants: 7 traversal patterns all → 400
- TestCWE78_SharedContext_SkipsTraversalPaths: traversal paths in
shared_context config are silently skipped, only safe files returned
Fixes: #2011
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
`setupTestDB` was calling `setSSRFCheckForTest(false)` without restoring
the previous value, causing all subsequent `TestIsSafeURL_*` tests to run
with SSRF disabled and pass unconditionally — masking real validation
failures.
Replace the fire-and-forget call with a `t.Cleanup(restore)` so the flag
is restored to its original state after each test that calls `setupTestDB`.
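The restore pattern, sketched with a stand-in flag (the flag and helper names are hypothetical; the point is returning a closure over the previous value rather than fire-and-forget mutation):

```go
package main

import "fmt"

// ssrfCheckEnabled stands in for the real package-level flag.
var ssrfCheckEnabled = true

// setSSRFCheckForTest flips the flag and returns a closure restoring
// the PREVIOUS value — handed to t.Cleanup so ordering works even when
// several tests toggle the flag.
func setSSRFCheckForTest(enabled bool) (restore func()) {
	prev := ssrfCheckEnabled
	ssrfCheckEnabled = enabled
	return func() { ssrfCheckEnabled = prev }
}

func main() {
	restore := setSSRFCheckForTest(false) // in a test: t.Cleanup(restore)
	fmt.Println(ssrfCheckEnabled)
	restore()
	fmt.Println(ssrfCheckEnabled)
}
```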
Fixes: CI Platform (Go) failures — 20+ TestIsSafeURL_* tests failing on
core-fe-ki005-regression-tests (PR #1996).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
CWE-22 path traversal in restartTemplateInput Tier 4: dbRuntime was joined
directly into the template path without sanitisation.
runtimeTemplate := filepath.Join(configsDir, dbRuntime+"-default")
An attacker holding a workspace token could set runtime to a path-traversal
string (e.g. "../../../etc") via the PATCH /workspaces/:id Update handler,
which only validates length and newlines. If a matching directory existed
on the host (e.g. /configs/../../../etc-default), the restart would load
files from an arbitrary host path into the workspace container.
Fix: call sanitizeRuntime(dbRuntime) — the existing allowlist in
workspace_provision.go — before filepath.Join. Unknown values are
remapped to "langgraph", so the attacker cannot choose an arbitrary host
path. Defense-in-depth: the path is still inside configsDir after
sanitisation.
Regression tests added:
- CWE-22 traversal strings fall through to existing-volume
- langgraph-default is used when traversal string is sanitised to langgraph
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- container_files.go: move validateRelPath before h.docker==nil check in
deleteViaEphemeral so F1085 traversal tests fire even when Docker is
absent in CI (fixes TestDeleteViaEphemeral_F1085_RejectsTraversal)
- a2a_proxy_test.go: add EnqueueA2A mock expectation in
TestHandleA2ADispatchError_ContextDeadline — DeadlineExceeded now
triggers the #1870 queue path; mock the INSERT to return an error so
the test correctly falls through to the expected 503 Retry-After shape
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Audit 2026-04-24 case: org-templates/molecule-dev/ contained only .git/
(working tree wiped). ListTemplates silently skipped the directory and
the molecule-dev template silently disappeared from the Canvas palette.
No log trail; CEO discovered hours later when looking for the registry
listing manually.
This commit adds a one-line log warning when a directory under orgDir
has a .git/ subdir but no org.yaml/.yml — that's almost always a manifest
clone that got truncated. The warning includes the recovery command
(`git checkout main -- .`) so operators can self-fix without re-cloning.
Doesn't change the response behavior — the directory is still skipped
to keep ListTemplates a fail-soft endpoint. Just makes the failure
visible in `docker logs platform`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>