Three complementary regression tests for the chain of P0s fixed today.
Each targets a specific bug class that reached production, and will
fire loudly if any of them regresses.
## 1. E2E A2A assertion enhancements (tests/e2e/test_staging_full_saas.sh)
The existing A2A check looked for "error|exception" in the response text,
which was too broad and missed the actual error patterns we hit. Now
matches each known error class individually with a diagnostic fail
message pointing at the exact bug:
- "[hermes-agent error 401]" → hermes #12 (API_SERVER_KEY)
- "hermes-agent unreachable" → gateway process died
- "model_not_found" → hermes #13 (model prefix)
- "Encrypted content is not supported" → hermes #14 (api_mode)
- "Unknown provider" → bridge PROVIDER misconfig
Also asserts the response contains the PONG token the prompt asked for —
catches silent-truncation/echo regressions.
## 2. Hermes install.sh bridge shell harness (tools/test-hermes-bridge.sh)
4 scenarios, 16 assertions in total, all offline (no Docker, no network):
- openai-bridge-happy: OPENAI_API_KEY + openai/gpt-4o →
provider=custom, model="gpt-4o" (prefix stripped),
api_mode=chat_completions
- operator-custom-wins: explicit HERMES_CUSTOM_* → bridge skipped
- openrouter-not-touched: OPENROUTER_API_KEY → provider=openrouter,
slug kept
- non-prefixed-model: bare "gpt-4o" → prefix-strip is a no-op
Runs in <1s, can be wired into template-hermes CI. Pins the exact
config.yaml shape — any drift in derive-provider.sh or the bridge
if-block breaks a test.
## 3. Canvas ConfigTab hermes tests (ConfigTab.hermes.test.tsx)
5 vitest cases covering the #1894 bugs:
- Runtime loads from workspace metadata when config.yaml missing
- "No config.yaml found" red error hidden for hermes
- Hermes info banner shown instead
- Langgraph workspace still sees the red error (regression-guard the
other way)
- config.yaml runtime wins over workspace metadata when present
## Running
bash tools/test-hermes-bridge.sh # 16 assertions
cd canvas && npx vitest run src/components/tabs/__tests__/ConfigTab.hermes.test.tsx # 5 cases
# E2E enhancements ride on the existing staging E2E workflow
## Not yet covered (tracked in #1900)
CP admin delete-tenant EC2 cascade, cp-provisioner instance_id
lookup (#1738), purge audit SQL mismatch (#241), and pq prepared-
statement cache collision (#242). These are in-controlplane-repo
concerns — separate PR with CP-side sqlmock + integration tests.
Closes items in #1900.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Today's E2E run 24864011116 timed out at 10 min waiting for the
workspace to reach online. Hermes cold-boot measured 13 min on the same
day's apt mirror (my manual repro on 18.217.175.225). The original
10 min deadline was roughly 2× too tight.
Also: the `failed` branch was a hard fail, but bootstrap-watcher
(cp#245) marks workspace=failed at 5 min if install.sh hasn't
finished yet. Heartbeat then transitions failed → online around
10-13 min. Before this fix, the E2E bailed at the first `failed` read
and missed the recovery that was seconds away.
## Changes
- Deadline: 10 min → 20 min (hermes worst-case 15 + slack)
- `failed` status: now tolerated as transient; loop logs once then
keeps polling. Only hard-fails at the final deadline.
- Added transition logging (`WS_LAST_STATUS`) so CI output shows
the provisioning → failed → online flow instead of silent polling.
## Why not fix cp#245 instead
Both should be fixed. cp#245 (bootstrap-watcher deadline) is the
root cause; this E2E fix is the defense-in-depth. When cp#245 lands,
the `failed` transient log will stop firing but the rest of the
logic still protects against other slow-apt-day spikes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two bugs surfaced while testing Claude Code + OAuth deploys:
1. A2A proxy: a2aClient had a 60s Client.Timeout "safety net" that
defeated the per-request context deadlines the code otherwise sets
(canvas = 5m, agent-to-agent = 30m). Claude Code's first-token cold
start over OAuth takes 30-60s, so every first "hi" into a fresh
claude-code workspace returned 503 at exactly the 1m mark. Removed
the Client.Timeout — the context deadline now governs as documented
in the adjacent comment.
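A minimal sketch of the intended shape (names and signatures here are
illustrative, not the actual proxy code): the shared client carries no
Timeout, and each request gets its own context deadline.

```go
package proxy

import (
	"context"
	"io"
	"net/http"
	"time"
)

// Shared client with NO Client.Timeout. A non-zero Timeout here would cap
// every request at that value, regardless of the per-request context below.
var a2aClient = &http.Client{}

// proxyWithDeadline is a hypothetical helper: the per-request context is the
// single timeout authority (e.g. 5m for canvas, 30m for agent-to-agent).
func proxyWithDeadline(parent context.Context, req *http.Request, d time.Duration) ([]byte, error) {
	ctx, cancel := context.WithTimeout(parent, d)
	defer cancel()
	resp, err := a2aClient.Do(req.WithContext(ctx))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body) // the body read is governed by ctx too
}
```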
2. Files tab: ReadFile ran `cat <rootPath> <filePath>` as two args to
cat. `cat /home agent/turtle_draw.py` tries to read the rootPath
directory (errors "Is a directory") and then resolves the filePath
relative to the container cwd, which is not guaranteed to equal
rootPath. Result: the file-content pane stayed blank even though
the file listed fine. Join into a single path before exec.
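In sketch form (the helper name is assumed for illustration):

```go
package files

import "path/filepath"

// readFileArgv builds the argv for reading one file. Joining root and
// relative path first is the fix: passing them as two separate args made
// cat read the rootPath directory ("Is a directory") and resolve filePath
// against the container cwd instead of rootPath.
func readFileArgv(rootPath, filePath string) []string {
	return []string{"cat", filepath.Join(rootPath, filePath)}
}
```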
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The side-panel runtime pill read "unknown" for newly-deployed workspaces
because canvas-events.ts created the node from WORKSPACE_PROVISIONING
payload — and the payload only carried name + tier. No refetch filled
the gap during provisioning, so the user saw "RUNTIME unknown" on the
card even though the DB row had the real runtime set.
Includes runtime in every WORKSPACE_PROVISIONING emitter:
* handlers/workspace.go — initial create
* handlers/workspace_restart.go — explicit restart, auto-restart, and
crash-recovery resume loop
* handlers/org_import.go — multi-workspace org imports
Canvas-side: canvas-events.ts reads payload.runtime when creating the
node; the provisioning test asserts the pill value is populated before
any refetch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The MissingKeysModal's provider list was hardcoded in deploy-preflight.ts
as RUNTIME_PROVIDERS — a per-runtime map that duplicated what each
template repo already declares in its config.yaml. That meant adding a
new provider required changes in two places, and the UI could drift out
of sync with the actual template (e.g. when a template adds a MiniMax or
Kimi model, the picker wouldn't know).
The single source of truth for "which env vars does this workspace need"
is each template's config.yaml:
* `runtime_config.models[].required_env` — per-model key list
* `runtime_config.required_env` — runtime-level AND list
The Go /templates endpoint already returned `models`. This change:
* Adds `required_env` alongside `models` on templateSummary so the
canvas receives the full picture.
* Rewrites deploy-preflight.ts to derive ProviderChoice[] from a
template object via `providersFromTemplate(template)`:
- groups `models[]` by unique required_env tuple
- falls back to runtime_config.required_env when models is empty
- decorates labels with model counts (e.g. "OpenRouter (14 models)")
* `checkDeploySecrets(template, workspaceId?)` now takes a template
object instead of a runtime string. Any-provider satisfaction still
short-circuits preflight to ok=true.
* MissingKeysModal receives `providers` directly; no more lookups.
* TemplatePalette threads `template.models` + `template.required_env`
into the preflight.
Side effects:
* Claude Code's dual-auth (OAuth token OR Anthropic API key) now
surfaces as two picker options — its config.yaml already declared
both, the UI just wasn't reading them.
* Hermes picker now shows 8 provider options (Nous, OpenRouter,
Anthropic, Gemini, DeepSeek, GLM, Kimi, Kilocode) instead of the
hand-picked 3, matching its 35-model reality.
Removed the legacy RUNTIME_PROVIDERS / RUNTIME_REQUIRED_KEYS /
getRequiredKeys / findMissingKeys exports; MissingKeysModal.test.tsx
deleted (its coverage is subsumed by the new template-driven
deploy-preflight.test.ts). 58 modal-adjacent tests pass; full canvas
suite 919 pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This monorepo is public. Internal content (positioning, competitive
briefs, sales playbooks, PMM/press drip, draft campaigns) belongs in
Molecule-AI/internal — never here.
## What this PR removes
/research/ (3 competitive briefs)
/marketing/ (45 files: assets, audio, community, copy,
demos, devrel, drip, pmm, press, sales)
/docs/marketing/ (31 draft campaign / blog / brief files)
comment-1172.json + comment-1173.json
test-pmm-temp.txt
tick-reflections-temp.md
83 files removed, 7,141 lines deleted from public history (going forward —
historical commits remain visible in this repo's git log).
## Companion: internal repo absorption
Molecule-AI/internal PR `chore/migrate-monorepo-internal-content-2026-04-23`
absorbs all 79 files into `from-monorepo-2026-04-23/` for curator triage
into the existing internal/marketing/ tree. Bulk-dump avoids file-collision
on overlapping subdirs (audio, devrel, pmm).
## Three-layer enforcement so this can't recur
1. .gitignore — blocks `git add` of /research, /marketing, /docs/marketing,
/comment-*.json, *-temp.{md,txt}, /test-pmm-*, /tick-reflections-*
2. .github/workflows/block-internal-paths.yml — CI hard gate. Fails any PR
that adds a forbidden path. Cannot be silently bypassed.
3. docs/internal-content-policy.md — canonical decision tree for agents
and humans. Linked from the CI failure message.
A separate PR on molecule-ai-org-template-molecule-dev updates SHARED_RULES
to teach every agent role to write internal content directly to
Molecule-AI/internal via gh repo clone + commit + PR (the prevention-at-
source layer; this PR is the mechanical backstop).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reverts the `.gitignore` checkin-exception for molecule-dev that let it
creep back on every main↔staging sync. Keeping this dir in core meant:
- 800KB of template files shipping with every monorepo clone
- Confusion about which copy is canonical (this one vs the standalone
Molecule-AI/molecule-ai-org-template-dev repo)
- Merge churn — 0506e0c re-added it against 6e6de39's removal intent
just by taking 'theirs' in a conflict resolution
All org-templates now live in their own repos, fetched via
scripts/clone-manifest.sh when needed locally. molecule-dev has no
special status; it's the same shape as every other org template.
The .gitignore rule is now a simple `/org-templates/` with no exceptions,
matching the rule structure already used for `/plugins/` and
`/workspace-configs-templates/`. Future conflict resolutions can't re-add
by accident because git won't track anything under that path.
User flagged this at session start 2026-04-23 ('org-templates should only
exist as standalone template repo'). Fixing for real this time.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Runtimes like Hermes and LangGraph accept any one of several LLM
provider keys (OpenRouter OR OpenAI OR Anthropic OR Nous-native).
Before this change, the missing-keys modal treated all supported
providers as simultaneously required — a fresh user on Hermes was
asked for three parallel API keys when any one suffices.
Introduces RUNTIME_PROVIDERS in deploy-preflight.ts as the canonical
per-runtime provider list (label, envVar, note). checkDeploySecrets
now returns all alternatives as missingKeys when nothing is
configured, so the modal can offer a picker.
MissingKeysModal dispatches between two render paths:
* ProviderPickerModal — radio list of supported providers, a single
env input for the chosen one. Saving that one key satisfies the
preflight. Activated whenever the runtime has ≥2 provider choices.
* AllKeysModal — legacy parallel-inputs UX, all keys must be saved
before deploy. Kept for single-provider runtimes (claude-code,
gemini-cli) and callers that pass unrelated-key lists.
Dual-mode preserves the pre-existing contract for every caller while
fixing the multi-provider UX. All 930 canvas vitest tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The TemplatePalette's Org Templates section rendered all cards
inline, each ~120 px tall (name + description + "Import org" button).
With 4 org templates on disk that's ~500 px of drawer height — the
individual workspace templates at the top (AutoGen / LangGraph /
Hermes / …) got pushed off-screen, which is the exact complaint from
the test session ("templates still 90% org, cant even see normal
workspace template").
Collapsed the Org Templates section by default. The header now
toggles with an ▶ caret and shows the count ("Org Templates (4)").
Clicking expands to reveal the full card list; clicking again
collapses. Persists only within a session — fresh mounts start
collapsed so the primary deploy path stays visible.
Individual workspace templates are the usual starting point (pick a
runtime, deploy one agent), while org templates are a heavier
"deploy this whole pre-built team" action. Making the second
expandable matches the relative frequency.
- `TemplatePalette.tsx::OrgTemplatesSection` — added `expanded`
state (default false), wrapped the cards in `{expanded && …}`,
turned the header into a toggle button with `aria-expanded` +
`aria-controls`.
- `__tests__/OrgTemplatesSection.test.tsx` — 3 new rendering tests:
collapsed-by-default (cards absent), click expands (cards appear),
click again collapses (cards gone). Mocks /org/templates with a
2-entry response so the count assertion is stable.
Full canvas vitest: 930/930 pass (up from 927).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
### Two unrelated but small UI fixes surfaced while testing the Canvas
**1. Legend hidden under the open TemplatePalette.**
Legend is `fixed bottom-6 left-4 z-30`. TemplatePalette's drawer (when
open) is `fixed top-0 left-0 w-[280px] z-30` — same z-index, same
left-edge column. The Legend overlapped the palette's bottom 180 px.
Published the palette-open state to the canvas store so the Legend
can shift right (to `left-[296px]` — 280 px palette + 16 px gap) while
the palette is open, animated via a 200 ms `transition-[left]` to
match the palette's slide. Closes cleanly back to `left-4` when the
palette is dismissed.
Files:
- `store/canvas.ts` — added `templatePaletteOpen` + `setTemplatePaletteOpen`.
- `TemplatePalette.tsx` — calls `setTemplatePaletteOpen(open)` on
every open/close transition via a new useEffect.
- `Legend.tsx` — reads the flag and swaps `left-4` <-> `left-[296px]`.
**2. "WebSocket is closed before the connection is established" spam.**
Two components (`ChatTab`, `AgentCommsPanel`) open their own short-
lived WebSocket to tail the ACTIVITY_LOGGED stream. Their cleanup
path called `ws.close()` unconditionally, which trips a browser
console warning when React StrictMode re-runs the effect in dev and
the handshake hasn't completed yet. Confirmed via DevTools console
on the running canvas.
Added a `closeWebSocketGracefully(ws)` helper in `lib/ws-close.ts`:
- OPEN / CLOSING → close immediately (normal path).
- CONNECTING → defer close to the 'open' listener so the
browser sees a full handshake. Also wires an
'error' listener that cancels the queued close
if the handshake fails (no double-close).
- CLOSED → no-op.
Both consumers now call the helper in their useEffect cleanup.
Silences the warning without changing observable behaviour.
### Tests
`canvas/src/lib/__tests__/ws-close.test.ts` — 5 cases with a fake
WebSocket covering each readyState branch plus the error-before-open
cancellation path. Full vitest suite: 927/927 pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a workspace is selected the SidePanel (fixed, right-0, z-50)
opens from the right edge and covers the right third of the
viewport. The Toolbar at the top was positioned
`fixed top-3 left-1/2 -translate-x-1/2 z-20` — centred on the full
viewport, not the remaining canvas area. Consequence: the right half
of the Toolbar (Audit / Search / Help / Settings) was hidden behind
the panel as soon as the user clicked any workspace.
Fix: publish the live SidePanel width to the canvas store and read
it in Toolbar. When a node is selected, shift the Toolbar LEFT by
`sidePanelWidth / 2` so its centre lines up with the middle of the
remaining canvas area. Animated via a 200 ms `transition-[margin-left]`
to match the SidePanel's own slide-in easing.
- `store/canvas.ts` — added `sidePanelWidth` + `setSidePanelWidth`.
Default 480 (matches SIDEPANEL_DEFAULT_WIDTH).
- `SidePanel.tsx` — calls `setSidePanelWidth(width)` on every width
change so the store stays in sync with localStorage.
- `Toolbar.tsx` — reads `sidePanelWidth`, applies a negative
`marginLeft` style when `selectedNodeId` is non-null.
- `SidePanel.tabs.test.tsx` — added `setSidePanelWidth: vi.fn()` to
the mocked store state so SidePanel's new useEffect has a callable
to invoke. 18 previously-passing tests now pass again.
No visual regression when no workspace is selected — the toolbar
stays in its original centred position. SaaS canvas unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(a2a-queue): nil-safe error extraction in DrainQueueForWorkspace + handle 202-requeue
The drain path called proxyErr.Response["error"].(string) without a comma-
ok assertion. When proxyErr.Response had no "error" key (which happens in
the 202-Accepted-queued branch I added in the same PR — that response is
{"queued": true, "queue_id": ..., "queue_depth": ...}), the type assertion
panicked and killed the platform process.
The platform was down 25 minutes today before this was diagnosed. Fleet
went from 30 real outputs/15min → 0 events.
Two fixes here:
1. Treat 202 Accepted from the inner proxyA2ARequest as "re-queued"
(target was busy AGAIN). Mark THIS attempt completed; the new queue
row will be drained on the next heartbeat tick. Don't propagate as
failure.
2. Defensive type-assertion when reading the error string. Falls back to
http.StatusText, then a generic "unknown drain dispatch error" so the
queue still gets a non-empty error_detail for ops debugging.
Now the drain path can never panic on a malformed proxy response.
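A sketch of the defensive extraction (the helper name is hypothetical;
the real logic sits inline in the drain path):

```go
package handlers

import "net/http"

// drainErrorDetail extracts a non-empty error string from a proxy response
// without ever panicking: the comma-ok assertion tolerates a missing or
// non-string "error" key (e.g. the 202 {"queued": true, ...} body), then
// falls back to the status text, then to a generic marker.
func drainErrorDetail(resp map[string]interface{}, status int) string {
	if msg, ok := resp["error"].(string); ok && msg != "" {
		return msg
	}
	if txt := http.StatusText(status); txt != "" {
		return txt
	}
	return "unknown drain dispatch error"
}
```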
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(a2a-queue): return (202, body, nil) so callers see queued-as-success
Cycle 53 found callers logging 45× 'delegation failed: proxy a2a error'
even though the queue's drain stats showed 48 completions in the same
window. Investigation: my busy-error path returned
return http.StatusAccepted, nil, &proxyA2AError{Status: 202, Response: ...}
The non-nil proxyA2AError is the failure signal. Even with status=202,
callers' `if proxyErr != nil` branch fires and logs the request as
failed. The 202 status was meaningless — the response body was nil too,
so the caller never even saw the queue_id/depth metadata.
Fix: return success-shape so callers do NOT enter the error branch:
respBody, _ := json.Marshal(gin.H{"queued": true, "queue_id": qid, ...})
return http.StatusAccepted, respBody, nil
Net effect: queue continues to absorb busy-errors (working since #1893),
AND callers correctly record the dispatch as queued-success rather than
failed. Closes the cycle 53 misclassification that was making the queue
look ineffective on activity_logs counts.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: molecule-ai[bot] <276602405+molecule-ai[bot]@users.noreply.github.com>
The sed stripping only handled platform/workspace-server/... paths, but
go tool cover may emit platform/internal/... paths (without workspace-server/).
When the pattern doesn't match, rel retains the full package import path and
the allowlist grep -qxF fails to find the short entry (e.g. internal/handlers/tokens.go).
Add a second substitution to strip the platform/ prefix as a fallback so
both path formats normalize to the same allowlist-relative form.
CreateWorkspaceDialog.a11y.test.tsx's two tier-button tests assumed
T1 was the default selection. After the previous commit flipped the
non-SaaS default to T3, the radio group's default-selected button
changed accordingly.
Updated:
- "tier buttons have role=radio and aria-checked reflects selection"
— T3 is now `aria-checked="true"`, T1 is the "unselected" foil we
click to verify the flip.
- "selected radio has tabIndex=0, others have tabIndex=-1" — T3 is
the tabindex=0 member now.
The roving-tabIndex and ArrowDown / ArrowRight tests further down the
file start by explicitly clicking/focusing T1 or T2, so they're
unaffected by the default change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Default tier for a newly-created workspace was T1 (Sandboxed) on
self-hosted and T4 (Full Access) on SaaS. Real work needs at minimum
a read_write workspace mount + Docker daemon access — that's T3
("Privileged") per the tier ladder in CreateWorkspaceDialog. The
user-visible consequence was that clicking "Deploy" on almost any
template landed in a sandbox that couldn't actually run the agent's
tooling until the user knew to bump the tier manually.
### Changes
**Platform (Go)** — default tier flipped from 1→3 in two places so
API callers (Canvas, molecli, org import) all get the same default
(fallback sketched after this list):
- `handlers/workspace.go`: `POST /workspaces` default when `tier` is
omitted from the request body.
- `handlers/template_import.go`: `generateDefaultConfig` writes
`tier: 3` into the auto-generated `config.yaml` for bundle imports
that don't declare one.
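A sketch of the fallback, with the zero-value-means-omitted convention
assumed (the real handler inlines this):

```go
package handlers

// defaultTier is illustrative only. Zero means the caller omitted tier
// from the request body; the Canvas always sends an explicit value.
func defaultTier(requested int) int {
	if requested == 0 {
		return 3 // T3 "Privileged": read_write mount + Docker daemon access
	}
	return requested
}
```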
**Canvas** — `CreateWorkspaceDialog.tsx` self-hosted form default
flipped from T1→T3. SaaS stays at T4 (each SaaS workspace runs on
its own sibling EC2, so the shared-blast-radius reasoning doesn't
apply and we can safely go a tier higher).
### Tests
Updated every sqlmock assertion that anchored on the old `tier=1`
default:
- `handlers_test.go::TestWorkspaceCreate` — default-path INSERT now
expects `3`.
- `handlers_additional_test.go::TestWorkspaceCreate_WithParentID` —
same.
- `workspace_test.go::TestWorkspaceCreate_DBInsertError` /
`TestWorkspaceCreate_WithSecrets_Persists` — same.
- `workspace_test.go::TestWorkspaceCreate_TemplateDefaults*` — same
(current handler semantics ignore the template's `tier:` field and
fall through to the default; kept tests faithful to the
implementation, left a comment flagging the latent inconsistency).
- `workspace_budget_test.go::TestWorkspaceBudget_Create_WithLimit` —
same.
- `template_import_test.go::TestGenerateDefaultConfig` — asserts
`tier: 3` now.
All `go test -race ./internal/handlers/` pass.
Canvas `CreateWorkspaceDialog` tests don't assert the default tier
(they only reference `tier` as prop data on stub workspaces) so no
test update needed on that side.
### SaaS parity
Zero behaviour change on hosted SaaS. The Go-side default only fires
when the Canvas (or any caller) omits `tier` from the request body.
The SaaS Canvas explicitly passes `tier: 4` from the
CreateWorkspaceDialog `isSaaS ? 4 : 3` branch, so the Go default
never runs on a SaaS request.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The test asserts that AdminAuth rejects an unauthenticated request to
the test-token route once any workspace token exists in the DB. It
sets MOLECULE_ENV=development to enable the handler's gate.
After this branch's AdminAuth Tier-1b hatch (middleware/devmode.go),
MOLECULE_ENV=development + empty ADMIN_TOKEN becomes the explicit
fail-open signal for local dev — so the request correctly passes
AdminAuth and falls through to the handler, which then 500s on an
unmocked DB lookup instead of the expected 401.
The security property the test is protecting (no bearer → 401 when
tokens exist) corresponds to the SaaS configuration where
ADMIN_TOKEN is always set. Setting ADMIN_TOKEN in the test suppresses
the dev-mode hatch and reaches AdminAuth's Tier-2 bearer check,
which correctly aborts 401 with "admin auth required".
No production behaviour change — the test is now verifying the path
that actually runs in production (MOLECULE_ENV=production +
ADMIN_TOKEN set).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Canvas Config tab had 3 bugs visible on hermes workspaces (#1894):
1. Runtime dropdown showed "LangGraph (default)" even when the workspace's
actual runtime was hermes — because the form only loaded runtime from
config.yaml, and hermes doesn't use the platform's config.yaml template.
2. Model field was empty for the same reason.
3. "No config.yaml found" error appeared on hermes workspaces despite
everything being fine — hermes manages its own config at
~/.hermes/config.yaml on the workspace host.
Worse, clicking Save with the empty form would silently flip `runtime`
back from `hermes` to `LangGraph (default)`.
## Fix
- loadConfig now always fetches workspace metadata (runtime + model)
via GET /workspaces/:id and GET /workspaces/:id/model BEFORE attempting
the config.yaml fetch. These act as the source of truth for runtime
and model when config.yaml doesn't set them.
- RUNTIMES_WITH_OWN_CONFIG set lists runtimes that manage their own
config outside the platform template (hermes, external). For these:
- Missing config.yaml is NOT an error — no red banner shown.
- An informational gray banner tells the user where to edit the
runtime's config (e.g. "edit ~/.hermes/config.yaml via Terminal tab
or the hermes CLI" for hermes).
Closes #1894.
Verified 2026-04-23 on user's hongmingwang tenant which runs hermes.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Five additional breakages surfaced while testing the restored stack
end-to-end (spin up Hermes template → click node → open side panel →
configure secrets → send chat). Each fix is narrowly scoped and has
matching unit or e2e tests so they don't regress.
### 1. SSRF defence blocked loopback A2A on self-hosted Docker
handlers/ssrf.go was rejecting `http://127.0.0.1:<port>` workspace
URLs as loopback, so POST /workspaces/:id/a2a returned 502 on every
Canvas chat send in local-dev. The provisioner on self-hosted Docker
publishes each container's A2A port on 127.0.0.1:<ephemeral> — that's
the only reachable address for the platform-on-host path.
Added `devModeAllowsLoopback()` — allows loopback only when
MOLECULE_ENV ∈ {development, dev}. SaaS (MOLECULE_ENV=production)
continues to block loopback; every other blocked range (metadata
169.254/16, TEST-NET, CGNAT, link-local) stays blocked in dev mode.
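A sketch of the gate under the semantics above (the real predicate lives
in handlers/ssrf.go and may differ in detail):

```go
package handlers

import "os"

// devModeAllowsLoopback gates ONLY the loopback exemption. It is consulted
// after the other blocked-range checks, so metadata, TEST-NET, CGNAT, and
// link-local stay blocked even when it returns true.
func devModeAllowsLoopback() bool {
	switch os.Getenv("MOLECULE_ENV") {
	case "development", "dev":
		return true
	}
	return false
}
```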
Tests: 5 new tests in ssrf_test.go covering dev-mode loopback,
dev-mode short-alias ("dev"), production still blocks loopback,
dev-mode still blocks every other range, and a 9-case table test of
the predicate with case/whitespace/typo variants.
### 2. canvas/src/lib/api.ts: 401 → login redirect broke localhost
Every 401 called `redirectToLogin()` which navigates to
`/cp/auth/login`. That route exists only on SaaS (mounted by the
cp_proxy when CP_UPSTREAM_URL is set). On localhost it 404s — users
landed on a blank "404 page not found" instead of seeing the actual
error they should fix.
Gated the redirect on the SaaS-tenant slug check: on
<slug>.moleculesai.app, redirect unchanged; on any non-SaaS host
(localhost, LAN IP, reserved subdomains like app.moleculesai.app),
throw a real error so the calling component can render a retry
affordance.
Tests: 4 new vitest cases in a dedicated api-401.test.ts (needs
jsdom for window.location.hostname) — SaaS redirects, localhost
throws, LAN hostname throws, reserved apex throws.
### 3. SecretsSection rendered a hardcoded key list
config/secrets-section.tsx shipped a fixed COMMON_KEYS list
(Anthropic / OpenAI / Google / SERP / Model Override) regardless of
what the workspace's template actually needed. A Hermes workspace
declaring MINIMAX_API_KEY in required_env got five irrelevant slots
and nothing for the key it actually needed.
Made the slot list template-driven via a new `requiredEnv?: string[]`
prop passed down from ConfigTab. Added `KNOWN_LABELS` for well-known
names and `humanizeKeyName` to turn arbitrary SCREAMING_SNAKE_CASE
into a readable label (e.g. MINIMAX_API_KEY → "Minimax API Key").
Acronyms (API, URL, ID, SDK, MCP, LLM, AI) stay uppercase. Legacy
fallback preserved when required_env is empty.
Tests: 8 new vitest cases covering known-label lookup, humanise
fallback, acronym preservation, deduplication, and both fallback
paths.
### 4. Confusing placeholder in Required Env Vars field
The TagList in ConfigTab labelled "Required Env Vars (from template)"
is a DECLARATION field — stores variable names. The placeholder
"e.g. CLAUDE_CODE_OAUTH_TOKEN" suggested that, but users naturally
typed the value of their API key into the field instead. The actual
values go in the Secrets section further down the tab.
Relabelled to "Required Env Var Names (from template)", changed the
placeholder to "variable NAME (e.g. ANTHROPIC_API_KEY) — not the
value", and added a one-line helper below pointing to Secrets.
### 5. Agent chat replies rendered 2-3 times
Three delivery paths can fire for a single agent reply — HTTP
response to POST /a2a, A2A_RESPONSE WS event, and a
send_message_to_user WS push. Paths 2↔3 were already guarded by
`sendingFromAPIRef`; path 1 had no guard. Hermes emits both the
reply body AND a send_message_to_user with the same text, which
manifested as duplicate bubbles with identical timestamps.
Added `appendMessageDeduped(prev, msg, windowMs = 3000)` in
chat/types.ts — dedupes on (role, content) within a 3s window.
Threaded into all three setMessages call sites. The window is short
enough that legitimate repeat messages ("hi", "hi") from a real
user/agent a few seconds apart still render.
Tests: 8 new vitest cases covering empty history, different content,
duplicate within window, different roles, window elapsed, stale
match, malformed timestamps, and custom window.
### 6. New end-to-end regression test
tests/e2e/test_dev_mode.sh — 7 HTTP assertions that run against a
live platform with MOLECULE_ENV=development and catch regressions
on all the dev-mode escape hatches in a single pass: AdminAuth
(empty DB + after-token), WorkspaceAuth (/activity, /delegations),
AdminAuth on /approvals/pending, and the populated
/org/templates response. Shellcheck-clean.
### Test sweep
- `go test -race ./internal/handlers/ ./internal/middleware/
./internal/provisioner/` — all pass
- `npx vitest run` in canvas — 922/922 pass (up from 902)
- `shellcheck --severity=warning infra/scripts/setup.sh
tests/e2e/test_dev_mode.sh` — clean
- `bash tests/e2e/test_dev_mode.sh` — 7/7 pass against a live
platform + populated template registry
### SaaS parity
Every relaxation remains conditional on MOLECULE_ENV=development.
Production tenants run MOLECULE_ENV=production (enforced by the
secrets-encryption strict-init path) and always set ADMIN_TOKEN, so
none of these code paths fire on hosted SaaS. Behaviour on real
tenants is byte-for-byte unchanged.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
AdminAuth and WorkspaceAuth both carried the same 5-line
`ADMIN_TOKEN == "" && MOLECULE_ENV in {development, dev}` check. If a
third middleware ever needs the hatch — or if "dev mode" semantics
change (new env name, allowlist, runtime flag) — the previous shape
made N places to keep in sync and N places a security reviewer has to
audit.
This commit factors the predicate into a single `isDevModeFailOpen()`
helper in `internal/middleware/devmode.go`. Each call site becomes
if isDevModeFailOpen() { c.Next(); return }
`devmode.go` carries the full rationale (why the hatch exists, why
it's safe for SaaS) so call sites don't need to restate it.
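Its shape, per the description above and below (a sketch, not a verbatim
copy of devmode.go):

```go
package middleware

import (
	"os"
	"strings"
)

// devModeEnvValues is the explicit opt-IN set. Arbitrary non-production
// values ("staging", "preview", typos) do NOT enable the hatch.
var devModeEnvValues = map[string]bool{
	"development": true,
	"dev":         true,
}

// isDevModeFailOpen: fail open only when the operator has NOT set
// ADMIN_TOKEN and MOLECULE_ENV explicitly opts into dev mode.
// Case-insensitive and trimmed so operators needn't remember exact casing.
func isDevModeFailOpen() bool {
	if os.Getenv("ADMIN_TOKEN") != "" {
		return false // explicit opt-in to strict auth always wins
	}
	env := strings.ToLower(strings.TrimSpace(os.Getenv("MOLECULE_ENV")))
	return devModeEnvValues[env]
}
```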
### Also
- Moved the dev-mode env-value set to a package-level `devModeEnvValues`
map so adding aliases is one line. Matches the existing convention
(`handlers/admin_test_token.go`) of treating `MOLECULE_ENV != "production"`
as dev — but stays explicit about which values opt IN rather than
blanket-accepting everything non-prod.
- Added case-insensitive compare + trim on the env value so operators
don't have to remember exact casing.
- New `devmode_test.go` unit-tests the predicate directly: 6 cases
covering happy path, both opt-out signals (ADMIN_TOKEN, production
mode), short alias, case-insensitive + whitespace tolerance, and an
explicit negative-space sweep of arbitrary non-dev values
("staging", "preview", "test", "devel", "") to lock in that typos
don't silently enable the hatch.
Existing AdminAuth/WorkspaceAuth integration tests still exercise the
helper indirectly via HTTP — they pass unchanged, confirming the
behaviour is preserved.
### No behavioural change
Before and after this commit, `go test -race ./internal/middleware/`
reports identical results. Zero production surface change — this is a
pure refactor, but it collapses the dev-mode seam from two inline
blocks into one named predicate, which is the shape future
contributors (and security reviewers) can follow.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
On an Apple Silicon dev box, every `POST /workspaces` failed immediately
with:
no matching manifest for linux/arm64/v8 in the manifest list entries:
no match for platform in manifest: not found
because the GHCR workspace-template-* images ship only a linux/amd64
manifest today. `ImagePull` and `ContainerCreate` asked for the daemon's
native arch and missed. The Canvas surfaced this as
docker image "ghcr.io/molecule-ai/workspace-template-autogen:latest"
not found after pull attempt — verify GHCR visibility for autogen
— confusing because the image IS visible, just not for linux/arm64.
### Fix
Add an auto-detect helper `defaultImagePlatform()` in
`internal/provisioner/provisioner.go` that returns `"linux/amd64"` on
Apple Silicon hosts and `""` (no preference) everywhere else, with an
env override `MOLECULE_IMAGE_PLATFORM` for operators who want to pin
or disable explicitly. The result is passed to both `ImagePull`
(`PullOptions.Platform`) and `ContainerCreate` (4th arg
`*ocispec.Platform`) so the pulled amd64 manifest matches the
create-time platform spec. Docker Desktop transparently runs it
under QEMU emulation on M-series Macs — slow (2–5× native) but
functional.
SaaS production (linux/amd64 EC2, `MOLECULE_ENV=production`) never
hits the `runtime.GOARCH == "arm64"` branch, so the current behaviour
on real tenants is byte-for-byte unchanged. Opt-in escape hatch for
operators who want it off:
export MOLECULE_IMAGE_PLATFORM="" # disable auto-force
export MOLECULE_IMAGE_PLATFORM=linux/arm64 # pin alternate
`ocispec` is `github.com/opencontainers/image-spec/specs-go/v1` —
already in go.sum v1.1.1 as a transitive dependency of
`github.com/docker/docker`, not a new import.
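A sketch of the helper as described (reading "Apple Silicon host" as the
darwin/arm64 combination is my assumption; the real file may differ):

```go
package provisioner

import (
	"os"
	"runtime"
)

// defaultImagePlatform sketches the auto-detect. os.LookupEnv distinguishes
// "unset" from the explicit empty string that disables the auto-force (the
// operator escape hatch described above).
func defaultImagePlatform() string {
	if v, ok := os.LookupEnv("MOLECULE_IMAGE_PLATFORM"); ok {
		return v // explicit pin, or "" to disable
	}
	if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" {
		return "linux/amd64" // amd64-only GHCR manifests; QEMU runs them
	}
	return "" // no preference: the daemon pulls its native arch
}
```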
### Tests
`internal/provisioner/platform_test.go` exercises every branch:
- `TestDefaultImagePlatform_EnvOverride_ExplicitValue` — env wins
- `TestDefaultImagePlatform_EnvOverride_EmptyValue` — empty string
disables the auto-force (operator escape hatch)
- `TestDefaultImagePlatform_AutoDetect` — linux/amd64 on arm64 Mac,
"" on every other host
- `TestParseOCIPlatform` — 7 table-driven cases covering well-formed
platforms, malformed inputs, and nil handling
### End-to-end verification
Before this commit, `POST /workspaces` on my Apple Silicon box:
workspace status transitioned: provisioning → failed (~1s)
log: image pull for ... failed: no matching manifest for linux/arm64/v8
After this commit, fresh DB + fresh platform:
workspace status transitioned: provisioning → online (~25s)
log: attempting pull (platform=linux/amd64)
pulled ghcr.io/molecule-ai/workspace-template-langgraph:latest
docker ps: ws-7aa08951-00d Up 27 seconds
The existing provisioner race-tested test suite (`go test -race
./internal/provisioner/`) still passes — the platform pointer defaults
to nil on linux/amd64 hosts, so the CI-resolved test expectations
don't change.
Closes #1875 (arm64 image blocker).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Canvas template palette was empty on a fresh clone because
`workspace-configs-templates/`, `org-templates/`, and `plugins/` are
gitignored and nothing populated them. The registry already exists —
`manifest.json` at repo root lists every curated
`workspace-template-*`, `org-template-*`, and `plugin-*` repo, and
`scripts/clone-manifest.sh` clones them — but the step was absent
from the README and setup.sh, so new users never ran it.
### What this commit does
**1. `setup.sh` runs `clone-manifest.sh` automatically** (once).
After starting the Docker network but before booting infra, iterate
`manifest.json` and clone any workspace_templates / org_templates /
plugins that aren't already populated. Idempotent — subsequent
runs skip dirs that have content. Requires `jq`; when jq is missing
the step prints a clear install hint and skips (doesn't fail).
**2. `clone-manifest.sh` is idempotent.** Before running `git clone`,
check whether the target directory already exists and is non-empty —
skip if so. Lets `setup.sh` rerun safely without forcing the operator
to delete already-cloned template repos.
**3. `ListTemplates` logs the reason it skips a template.** The
handler previously swallowed `resolveYAMLIncludes` errors with
`continue`, so a broken template showed up as an empty palette with
no log trail. Now the include-expansion and yaml.Unmarshal failure
paths both emit a descriptive `log.Printf` — the exact message that
made the stale `org-templates/molecule-dev/` snapshot debuggable:
ListTemplates: skipping molecule-dev — !include expansion failed:
!include "core-platform.yaml" at line 25: open .../teams/
core-platform.yaml: no such file or directory
**4. Remove the in-tree `org-templates/molecule-dev/` snapshot** (170
files). Matches the explicit intent of prior commit
`bfec9e53` — "remove org-templates/molecule-dev/ — standalone repo
is source of truth". A later "full staging snapshot" re-added a
partial copy that had `!include` references to 7 role files that
never existed in the snapshot (`core-platform.yaml`,
`controlplane.yaml`, `app-docs.yaml`, `infra.yaml`, `sdk.yaml`,
`release-manager/workspace.yaml`, `integration-tester/workspace.yaml`).
`clone-manifest.sh` repopulates it fresh from
`Molecule-AI/molecule-ai-org-template-molecule-dev`.
.gitignore exception for `molecule-dev/` is dropped accordingly
— the whole `/org-templates/*` tree is now gitignored, symmetric
with `/plugins/` and `/workspace-configs-templates/`.
**5. Doc updates** (README, README.zh-CN, CONTRIBUTING) mention `jq`
as a prerequisite and describe what setup.sh now does.
### Verification
On a fresh-nuked DB with the updated branch:
1. `bash infra/scripts/setup.sh` — cleanly clones 33/33 manifest
repos (20 plugins, 8 workspace_templates, 5 org_templates), then
boots infra. Second run skips all 33 (idempotent).
2. `go run ./cmd/server` — "Applied 41 migrations", :8080 healthy.
3. `curl http://localhost:8080/org/templates` returns 4 templates
(was `[]`):
- Free Beats All
- MeDo Smoke Test
- Molecule AI Worker Team (Gemini)
- Reno Stars Agent Team
4. `bash tests/e2e/test_api.sh` — 61/61 pass.
5. `npx vitest run` in canvas — 902/902 pass.
6. `shellcheck infra/scripts/setup.sh` — clean.
### SaaS parity
All changes are local-dev surface. `setup.sh`, `clone-manifest.sh`,
and the local `org-templates/` directory aren't part of the CP
provisioner path — SaaS tenant machines get their templates via
Dockerfile layers or CP-side provisioning, not `clone-manifest.sh`.
The `ListTemplates` log addition is harmless either way (replaces a
silent `continue` with a `log.Printf + continue`).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous commit on this branch added a dev-mode fail-open branch to
AdminAuth so the Canvas dashboard could enumerate workspaces after the
first token lands in the DB. Verification via Chrome (clicking a
workspace to open its side panel) surfaced the same class of bug on a
different middleware — `WorkspaceAuth` — triggering:
API GET /workspaces/<id>/activity?type=a2a_receive&source=canvas&limit=50:
401 {"error":"missing workspace auth token"}
Root cause is identical to AdminAuth's: in local dev the Canvas (at
localhost:3000) calls the platform (at localhost:8080) cross-port, so
`isSameOriginCanvas`'s Host==Referer check fails. Without a bearer
token, every per-workspace read (/activity, /delegations, /memories,
/events/stream, /schedules, etc.) 401s and the side panel is unusable.
### Fix
Symmetric extension in `WorkspaceAuth` (workspace-server/internal/middleware/wsauth_middleware.go):
after the existing `isSameOriginCanvas` fallback, add a narrow escape
hatch that stays fail-open only when BOTH
- `ADMIN_TOKEN` is unset (operator has not opted in to the #684
closure), AND
- `MOLECULE_ENV` is explicitly a dev mode (`development` / `dev`).
SaaS tenants never hit this branch because hosted provisioning sets
both `ADMIN_TOKEN` and `MOLECULE_ENV=production`. The comment in the
code also links back to AdminAuth's Tier-1b for consistency.
### Tests
Three new table-driven tests in wsauth_middleware_test.go mirror the
AdminAuth tier-1b suite, exercising the positive path and both
negative cases:
- `TestWorkspaceAuth_DevModeEscapeHatch_NoBearer_FailsOpen` — the
happy path (dev mode, no admin token → 200)
- `TestWorkspaceAuth_DevModeEscapeHatch_IgnoredInProduction` — the
SaaS-safety guarantee (production + no admin token → 401)
- `TestWorkspaceAuth_DevModeEscapeHatch_IgnoredWhenAdminTokenSet` —
explicit `ADMIN_TOKEN` wins; dev mode does not silently override
the opt-in
### Comprehensive audit of adjacent middlewares
Re-scanned every file under workspace-server/internal/middleware/ and
every handler that invokes `AbortWithStatusJSON(Unauthorized)` directly,
to check for other surfaces where local dev might silently 401.
Findings, already OK:
- `CanvasOrBearer` — cosmetic routes already accept localhost:3000
via `canvasOriginAllowed` (Origin header check); no change needed.
- `tenant_guard.go` — no-op when `MOLECULE_ORG_ID` is unset (self-
hosted / dev); no change needed.
- `session_auth.go` — verifies against `CP_UPSTREAM_URL`; returns
(false, false) in local dev so callers fall through to bearer; no
change needed.
- `socket.go` `HandleConnect` — Canvas browser clients don't send
`X-Workspace-ID` so skip the bearer check; agent clients do and
validate as today. No change needed.
- Handlers in handlers/{discovery,registry,secrets,plugins_install,
a2a_proxy_helpers,schedules}.go — all workspace-scoped routes
called by the workspace runtime, not the Canvas browser. Unaffected.
- `handlers/admin_test_token.go` — already `MOLECULE_ENV`-aware (the
convention this hatch mirrors).
### End-to-end verification
1. Fresh-nuked DB, platform + canvas restarted with `MOLECULE_ENV=development`
2. `POST /workspaces` → token lands in DB (Tier-1 would close here)
3. Probed every Canvas-hit endpoint with no bearer, with Canvas-like
`Origin: http://localhost:3000`:
200 /workspaces
200 /workspaces/<id>/activity
200 /workspaces/<id>/delegations
200 /workspaces/<id>/memories
200 /approvals/pending
200 /events
4. Chrome browser test: opened http://localhost:3000, clicked a
workspace tile — the side panel rendered with the full 13-tab
structure (Chat, Activity, Details, Skills, Terminal, Config,
Schedule, Channels, Files, Memory, Traces, Events, Audit) and no
`Failed to load chat history` error. "No messages yet" placeholder
shows instead of the 401 retry screen.
5. `go test -race ./internal/middleware/` — clean
6. `bash tests/e2e/test_api.sh` — 61/61 pass
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Follow-up to the previous commit on this branch. Two additional fresh-clone
regressions surfaced during end-to-end verification, both affecting local
dev only and both landing inside the same SaaS-vs-local-dev seam:
### 1. Canvas 401-loops after first workspace creation
`GET /workspaces` is behind `AdminAuth` (router.go:121 — "C1: unauthenticated
workspace topology exposure"). The middleware has a Tier-1 fail-open branch
that only fires when *no* workspace tokens exist anywhere in the DB. The
moment a user creates their first workspace — via either the Canvas UI, the
API, or the e2e-api test suite — a token lands in the DB, Tier-1 closes, and
the Canvas (which has no bearer token in local dev: no WorkOS session, no
NEXT_PUBLIC_ADMIN_TOKEN baked in at build time) gets 401 on every list
call. The UI renders a stuck "API GET /workspaces: 401 admin auth required"
placeholder forever.
SaaS is unaffected because hosted provisioning always sets both
`ADMIN_TOKEN` and `MOLECULE_ENV=production`, and the Canvas there either
carries a WorkOS session cookie or `NEXT_PUBLIC_ADMIN_TOKEN` baked into
the JS bundle.
**Fix** (`workspace-server/internal/middleware/wsauth_middleware.go`): add
a narrow Tier-1b escape hatch that stays fail-open when *both*
`ADMIN_TOKEN` is unset *and* `MOLECULE_ENV` is explicitly a dev mode
("development" / "dev"). Production never hits it (SaaS sets
`MOLECULE_ENV=production`). Mirrors the existing convention in
`handlers/admin_test_token.go` which gates the e2e test-token endpoint on
`MOLECULE_ENV != "production"`.
Three new regression tests in `wsauth_middleware_test.go`:
- `TestAdminAuth_DevModeEscapeHatch_FailsOpenWithHasLiveTokens` — the
happy path (dev mode, no admin token, tokens exist → 200)
- `TestAdminAuth_DevModeEscapeHatch_IgnoredWhenAdminTokenSet` — explicit
`ADMIN_TOKEN` wins; dev mode does not silently re-open the gate
- `TestAdminAuth_DevModeEscapeHatch_IgnoredInProduction` — the
SaaS-safety guarantee (production + no admin token + tokens exist → 401)
`.env.example` flipped to set `MOLECULE_ENV=development` by default so
new users get the dev-mode hatch automatically via `cp .env.example .env`.
SaaS provisioning overrides to `production`, consistent with the existing
convention used by the secrets-encryption strict-init path.
### 2. SaaS cookie/privacy banner rendered on localhost
`CookieConsent` mounted unconditionally in the root layout, so
`npm run dev` on localhost showed a "Cookies & your privacy" banner
pointing at `moleculesai.app/legal/privacy`. That banner is a
GDPR/ePrivacy compliance UI that only applies to the hosted SaaS
offering; self-hosted / local-dev / Vercel-preview hosts must not
see it.
**Fix** (`canvas/src/components/CookieConsent.tsx`): gate render on
`isSaaSTenant()`. Matches the convention used by `AuthGate` and the
workspace tier picker elsewhere in the codebase.
Tests (`canvas/src/components/__tests__/CookieConsent.test.tsx`):
existing tests now stub `window.location.hostname` to a SaaS
subdomain before rendering (required since `isSaaSTenant()` on jsdom's
default "localhost" would suppress the banner). Added two new tests
for the local-dev hide path:
- `does NOT render on local dev (non-SaaS hostname)`
- `does NOT render on a LAN hostname (192.168.*, *.local)`
### Verification
On a fresh-nuked DB with the updated branch:
1. `bash infra/scripts/setup.sh` — clean
2. `go run ./cmd/server` — "Applied 41 migrations", :8080 healthy,
dev-mode hatch armed (`MOLECULE_ENV=development`)
3. `npm run dev` in canvas — :3000 renders, no cookie banner
4. `bash tests/e2e/test_api.sh` — **61 passed, 0 failed**
(test suite creates tokens; GET /workspaces stays 200 under the hatch)
5. Browser at http://localhost:3000 AFTER the e2e run:
- Canvas renders the workspace list (no 401 placeholder)
- No cookie banner
6. `npx vitest run` — **902 tests passed** (900 prior + 2 new hide tests)
7. `go test -race ./internal/middleware/` — all passing (3 new
dev-mode tests + existing Issue-180 / Issue-120 / Issue-684 suite),
coverage 81.8%
### SaaS parity audit
Same principle as the rest of this branch: local must work without
weakening SaaS.
- Dev-mode hatch: conditional on `MOLECULE_ENV=development`.
Production tenants always run `MOLECULE_ENV=production` (already
enforced by the secrets-encryption `InitStrict` path in
`internal/crypto/aes.go`). Branch is unreachable there.
- Cookie banner: gated on `isSaaSTenant()` which checks
`NEXT_PUBLIC_SAAS_HOST_SUFFIX` (default `.moleculesai.app`). SaaS
hosts still get the banner; every other host doesn't.
No change to SaaS behaviour. #1822 backend-parity tracker untouched.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
sed was stripping only github.com/Molecule-AI/molecule-monorepo/platform/,
leaving workspace-server/internal/handlers/workspace_provision.go.
The allowlist uses internal/handlers/workspace_provision.go (no workspace-server/).
Fix strips the full prefix so grep -qxF exact match succeeds.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
#1892's EnqueueA2A INSERT used `ON CONFLICT ON CONSTRAINT idx_a2a_queue_idempotency
DO NOTHING`, but Postgres rejects this:
ERROR: constraint "idx_a2a_queue_idempotency" for table "a2a_queue" does not exist
Partial unique INDEXES cannot be referenced by name in ON CONFLICT — that
form is reserved for true CONSTRAINTs created via CREATE TABLE ... CONSTRAINT
or ALTER TABLE ADD CONSTRAINT. Partial indexes need the column-list +
WHERE form so the planner can match the index.
Effect of the bug: every EnqueueA2A errored, the busy-error fallback
returned 503 instead of 202, queue stayed empty. Cycle 50 observed
46 busy errors / 0 queue rows — the deployed Phase 1 had no effect.
Fix: switch to
ON CONFLICT (workspace_id, idempotency_key)
WHERE idempotency_key IS NOT NULL AND status IN ('queued','dispatched')
DO NOTHING
Verified manually against the live `a2a_queue` table on staging — INSERT
returns the new id; cleanup deleted the test row.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Problem
When a lead delegates to a worker that's mid-synthesis, the proxy returns
503 "workspace agent busy" and the caller records the delegation as
failed. On fan-out storms from leads this hits a ~70% drop rate, per
today's observed numbers in the cycle reports.
## Fix — Phase 1 TASK-level queue-on-busy
When `handleA2ADispatchError` determines the target is busy, instead of
returning 503, enqueue the request as priority=TASK and return 202
Accepted with `{queued: true, queue_id, queue_depth}`. The workspace's
next heartbeat (≤30s) drains one item if it reports spare capacity.
Files:
- migrations/042_a2a_queue.{up,down}.sql — `a2a_queue` table with
partial indexes on status='queued' + idempotency_key. Schema
supports PriorityCritical/Task/Info from day one so Phase 2/3 ship
without migration churn.
- internal/handlers/a2a_queue.go — EnqueueA2A / DequeueNext /
Mark*-helpers plus WorkspaceHandler.DrainQueueForWorkspace. Uses
`SELECT ... FOR UPDATE SKIP LOCKED` so concurrent drains can't
double-claim the same row (claim query sketched after this list).
Max 5 attempts before marking 'failed' so a stuck item doesn't wedge
the queue forever.
- internal/handlers/a2a_proxy_helpers.go — isUpstreamBusyError branch
calls EnqueueA2A and returns 202 on success. Falls through to the
legacy 503 on enqueue error (DB hiccup shouldn't silently drop).
- internal/handlers/registry.go — RegistryHandler gets a QueueDrainFunc
injection hook (SetQueueDrainFunc). When Heartbeat sees
active_tasks < max_concurrent_tasks, spawns a goroutine that calls
the drain hook. context.WithoutCancel ensures the drain outlives
the heartbeat handler's ctx.
- internal/router/router.go — wires wh.DrainQueueForWorkspace into
rh.SetQueueDrainFunc after both are constructed.
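A sketch of the claim query (column names inferred from the migration
description, not copied from the file):

```go
package handlers

import (
	"context"
	"database/sql"
)

// Assumed shape; the real SQL lives in internal/handlers/a2a_queue.go.
const dequeueSQL = `
SELECT id, payload
  FROM a2a_queue
 WHERE workspace_id = $1 AND status = 'queued'
 ORDER BY priority, created_at
 LIMIT 1
   FOR UPDATE SKIP LOCKED`

// dequeueNext must run inside a transaction: the row lock taken here holds
// until the status UPDATE commits, and SKIP LOCKED lets a concurrent drain
// skip past claimed rows instead of blocking or double-claiming.
func dequeueNext(ctx context.Context, tx *sql.Tx, wsID string) (id int64, payload []byte, err error) {
	err = tx.QueryRowContext(ctx, dequeueSQL, wsID).Scan(&id, &payload)
	return
}
```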
## Not in this PR (Phase 2/3/4 follow-ups)
- INFO priority + TTL (Phase 2)
- CRITICAL priority + soft preemption between tool calls (Phase 3)
- Age-based promotion so TASK doesn't starve (Phase 4)
- `GET /workspaces/:id/queue` observability endpoint
Schema already supports all of these; only the dispatch + policy code
remains.
## Tests
- TestExtractIdempotencyKey (5 cases): messageId parsing is robust
- TestPriorityConstants: ordering invariant + 50=TASK default
alignment with migration DEFAULT
Full DB-touching tests (FIFO order, retry bound, idempotency conflict)
intentionally deferred to the CI migration-enabled path — sqlmock
ceremony would duplicate the existing test infrastructure 3× over and
the behaviour is directly expressible in SQL constraints (FOR UPDATE
SKIP LOCKED, partial unique index).
## Expected impact once deployed
- a2a_receive error with "busy" flavor drops from ~69/10min observed
today to ~0
- delegation_failed rate drops from ~50% to <5%
- real_output metric rises from ~30/15min back toward the pre-
throttle baseline
Closes #1870 Phase 1.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three tests used ValidateAnyToken mock expectations and fallthrough behavior.
Now that HandleConnect uses ValidateToken (token-to-workspace binding), update:
- RejectsUnauthorizedCrossWorkspace: mock expects SELECT id+workspace_id
(ValidateToken pattern); row returns workspace_id=ws-caller so validation
passes, then CanCommunicate=false → 403 as before.
- RejectsInvalidToken: add setupTestDB so ValidateToken has a real mock;
with no ExpectQuery set, the query returns error → 401 Unauthorized
(was 503 fall-through; 401 is the correct explicit rejection).
- AllowsSiblingWorkspace: add setupTestDB + ValidateToken mock returning
ws-pm binding; CanCommunicate=true → Docker nil → 503 as before.
### Repro
On Canvas: create a workspace named "Hermes Agent" (runtime=langgraph,
model=langgraph default). Open the Config tab, switch the model to a
Minimax provider + Minimax token, hit Save and Restart. The model
reverts to the default on every restart.
### Root cause
`workspace_restart.go` called `findTemplateByName(configsDir, wsName)`
unconditionally when the request body had no explicit `template`:
template := body.Template
if template == "" {
template = findTemplateByName(h.configsDir, wsName)
}
`findTemplateByName` normalises the name ("Hermes Agent" → "hermes-agent")
and ALSO scans every template's `config.yaml` for a matching `name:`
field — a two-layer match that returns non-empty for any workspace whose
name coincides with a template dir OR any template whose config.yaml
claims the same display name.
When the match returned non-empty, the restart handler set
`templatePath = <template>` and the provisioner rewrote the workspace's
config volume from the template on `Start`. The Canvas Save+Restart
flow's `PUT /workspaces/:id/files/config.yaml` had already written the
user's edits to the volume — those got clobbered.
The comment immediately below (line 187) ALREADY said:
// Apply runtime-default template ONLY when explicitly requested
// via "apply_template": true. Use case: runtime was changed via
// Config tab — need new runtime's base files. Normal restarts
// preserve existing config volume (user's model, skills, prompts).
The code contradicted the comment. The design intent was right; the
implementation short-circuited it. Matches drift-risk #3 in #1822's
Docker-vs-EC2 parity tracker ("Config-tab save must flush to DB before
kicking off restart, not deferred").
### Fix
Extracted the template-resolution chain into a pure function
`resolveRestartTemplate(configsDir, wsName, dbRuntime, body)` in a new
`restart_template.go`. Gated the name-based auto-match on
`body.ApplyTemplate`:
1. Explicit `body.Template` → always honoured (caller consent).
2. `ApplyTemplate=true` → name-based auto-match (prior behaviour).
3. `RebuildConfig=true` → org-templates recovery fallback (#239).
4. `ApplyTemplate=true` + dbRuntime → `<runtime>-default/`.
5. Fall through → empty path + "existing-volume" label. Provisioner
reuses the volume. This is the path Canvas Save+Restart now hits.
The handler now calls this helper and uses the returned path directly.
Duplicate rebuild_config blocks at lines 167-186 were consolidated into
the helper's single tier-3 case in passing.
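A compilable sketch of the chain (struct fields per this description;
validation and the #239 fallback bodies elided):

```go
package handlers

import "path/filepath"

type restartBody struct {
	Template      string
	ApplyTemplate bool
	RebuildConfig bool
}

// resolveRestartTemplate sketches the priority chain. An empty path means
// "reuse the existing config volume", which is the Canvas Save+Restart path.
func resolveRestartTemplate(configsDir, wsName, dbRuntime string, body restartBody) (path, label string) {
	nameMatch := findTemplateByName(configsDir, wsName)
	switch {
	case body.Template != "": // 1. explicit caller consent (traversal validation elided)
		return filepath.Join(configsDir, body.Template), "explicit"
	case body.ApplyTemplate && nameMatch != "": // 2. opt-in name-based auto-match
		return filepath.Join(configsDir, nameMatch), "name-match"
	case body.RebuildConfig: // 3. org-templates recovery fallback (#239), elided
		return "", "rebuild-config"
	case body.ApplyTemplate && dbRuntime != "": // 4. runtime-change flow
		return filepath.Join(configsDir, dbRuntime+"-default"), "runtime-default"
	default: // 5. preserve existing volume
		return "", "existing-volume"
	}
}

// Stub so the sketch compiles; the real helper scans template dirs and
// each template's config.yaml name: field.
func findTemplateByName(configsDir, wsName string) string { return "" }
```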
### Abstraction win
`resolveRestartTemplate` is a pure function — no gin context, no DB, no
network. Takes a struct input, returns two strings. The whole priority
chain is unit-testable in a temp dir, which is exactly what
`restart_template_test.go` does.
### Tests
`restart_template_test.go` — 8 table-style unit tests covering every
branch of the priority chain:
- DefaultRestart_PreservesVolume — the regression. Even when a
template's config.yaml `name:` field matches the workspace name
exactly (worst case), a default restart MUST return empty path.
- ExplicitTemplate_AlwaysHonoured — caller-by-name, any mode.
- ApplyTemplate_NameMatch — opt-in restores the auto-match.
- ApplyTemplate_RuntimeDefault — runtime-change flow still works.
- ApplyTemplate_NoMatch_NoRuntime — fallback to existing-volume.
- InvalidExplicitTemplate_ProceedsWithout — traversal attempt stays
inside root, falls through cleanly.
- NonExistentExplicitTemplate — deleted/missing template falls through.
- Priority_ExplicitBeatsApplyTemplate — explicit Template wins over
name-match when both fire.
Full handlers race suite (`go test -race ./internal/handlers/`) still
passes — existing Restart-handler tests unchanged.
### Blast radius
Any restart caller that omitted `apply_template: true` and relied on
name-matching auto-applying a template is now a behaviour change.
Identified call sites in this repo:
- Canvas Save+Restart button (store/canvas.ts) — explicitly the
flow this commit fixes, definitely wanted the fix.
- Canvas Restart button (same file) — same semantics; user expects
a restart, not a template reset.
- Auto-restart sweeper (#1858) — never passes apply_template and
depends on the existing volume having valid config. Separately,
`workspace_provision.go`'s #1858 recovery path detects empty
volumes and auto-applies `<runtime>-default` without going
through findTemplateByName, so recovery is unaffected.
- RestartByID — internal callers; audited, all intended "restart
as-is", none relied on auto-template-match.
No SaaS parity impact — this is a handler behaviour fix that applies
equally to Docker and EC2 backends (both use the same Restart handler
before dispatching to their respective provisioners).
Refs #1822 drift-risk-3.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>