Closes Molecule-AI/molecule-controlplane#239.
The CF zone hit the 200-record quota on 2026-04-23 — every E2E and canary
run left a record on moleculesai.app, and no scheduled job pruned them.
Provisioning started failing with code 81045 ('Record quota exceeded').
The sweep-cf-orphans.sh script (PR #1978, with decision-function
unit tests added in #2079) already exists but no workflow fires it.
This adds it as a janitor workflow parallel to sweep-stale-e2e-orgs.yml:
- hourly schedule at :15 (offset from the e2e-orgs sweep at :00 so
  the two never race the same CP admin endpoint)
- workflow_dispatch with dry_run input default true (ad-hoc verify
without committing to deletes)
- workflow_dispatch with max_delete_pct input for major cleanups
(the script's own MAX_DELETE_PCT defaults to 50% as a safety gate)
- concurrency group prevents schedule + manual-dispatch from racing
the same zone
Why a separate workflow vs sweep-stale-e2e-orgs.yml:
- That workflow drives DELETE /cp/admin/tenants/:slug and assumes CP
  has the org row. It doesn't catch records left when CP itself never
  knew about the tenant (canary scratch, manual ops experiments)
  or when the CP-side cascade's CF-delete branch failed.
- sweep-cf-orphans.sh enumerates the CF zone directly + matches
against live CP slugs + AWS EC2 names. Catches what the CP-driven
sweep can't.
Required secrets (will need to be set on the repo): CF_API_TOKEN,
CF_ZONE_ID, CP_PROD_ADMIN_TOKEN, CP_STAGING_ADMIN_TOKEN,
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY. Pre-flight verify-secrets
step fails loud if any are missing.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The existing tokens_test.go skips every test when db.DB is nil, so CI
ran with 0% coverage on tokens.go's List/Create/Revoke. This file adds
sqlmock-driven tests that exercise the SQL paths directly without
needing a live Postgres, lifting coverage on all 4 functions to 100%
and module-level handler coverage from 60.3% → 61.1%.
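A minimal sketch of the sqlmock pattern, assuming a hypothetical listTokens helper that takes a *sql.DB; the real query text, columns, and handler wiring in tokens.go differ:

    package handlers

    import (
        "database/sql"
        "testing"

        sqlmock "github.com/DATA-OG/go-sqlmock" // see note: real path is github.com/DATA-DOG/go-sqlmock
    )

    // listTokens is a stand-in for the real List path in tokens.go.
    func listTokens(db *sql.DB) ([]string, error) {
        rows, err := db.Query("SELECT id, name FROM tokens")
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        var names []string
        for rows.Next() {
            var id, name string
            if err := rows.Scan(&id, &name); err != nil {
                return nil, err
            }
            names = append(names, name)
        }
        return names, rows.Err()
    }

    func TestListTokens_SQLPath(t *testing.T) {
        db, mock, err := sqlmock.New()
        if err != nil {
            t.Fatalf("sqlmock.New: %v", err)
        }
        defer db.Close()

        mock.ExpectQuery("SELECT id, name FROM tokens").
            WillReturnRows(sqlmock.NewRows([]string{"id", "name"}).
                AddRow("tok-1", "ci-token"))

        got, err := listTokens(db)
        if err != nil {
            t.Fatalf("listTokens: %v", err)
        }
        if len(got) != 1 || got[0] != "ci-token" {
            t.Fatalf("unexpected result: %v", got)
        }
        if err := mock.ExpectationsWereMet(); err != nil {
            t.Errorf("unmet SQL expectations: %v", err)
        }
    }

(Import path in the sketch should read github.com/DATA-DOG/go-sqlmock; the mock drives the SQL path with no live Postgres, which is the whole point of the change.)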
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Both backends panicked when called on a zero-valued or nil receiver:
Provisioner.{Stop,IsRunning} dereferenced p.cli; CPProvisioner.{Stop,
IsRunning} dereferenced p.httpClient. The orphan sweeper and shutdown
paths can call these speculatively where the receiver isn't fully
wired — the panic crashed the goroutine instead of the caller seeing
a clean error.
Three changes:
1. Add ErrNoBackend (typed sentinel) and nil-guard the four methods
   (sketched after this list).
- Provisioner.{Stop,IsRunning}: guard p == nil || p.cli == nil at
the top.
- CPProvisioner.Stop: guard p == nil up top, then httpClient nil
AFTER resolveInstanceID + empty-instance check (the empty
instance_id path doesn't need HTTP and stays a no-op success
even on zero-valued receivers — preserved historical contract
from TestIsRunning_EmptyInstanceIDReturnsFalse).
- CPProvisioner.IsRunning: same shape — empty instance_id stays
(false, nil); httpClient-nil with non-empty instance_id returns
ErrNoBackend.
2. Flip the t.Skip on TestDockerBackend_Contract +
TestCPProvisionerBackend_Contract — both contract tests run now
that the panics are gone. Skipped scenarios were the regression
guard for this fix.
3. Add TestZeroValuedBackends_NoPanic — explicit assertion that
zero-valued and nil receivers return cleanly (no panic). Docker
backend always returns ErrNoBackend on zero-valued; CPProvisioner
may return (false, nil) when the DB-lookup layer absorbs the case
(no instance to query → no HTTP needed). Both are acceptable per
the issue's contract — the gate is no-panic.
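A minimal sketch of the guard shape from (1) and the contract in (3). The field names cli and httpClient come from the description above; signatures, the stub types, and resolveInstanceID's body are illustrative:

    package provisioner

    import (
        "context"
        "errors"
        "net/http"
    )

    // ErrNoBackend is the typed sentinel for calls on receivers whose
    // backend was never wired up.
    var ErrNoBackend = errors.New("provisioner: no backend configured")

    type dockerClient struct{}                           // stand-in for the real docker client
    type Provisioner struct{ cli *dockerClient }         // docker backend
    type CPProvisioner struct{ httpClient *http.Client } // control-plane backend

    func (p *Provisioner) IsRunning(ctx context.Context, id string) (bool, error) {
        if p == nil || p.cli == nil {
            return false, ErrNoBackend // guard at the top, before any p.cli use
        }
        // ... docker inspect path ...
        return false, nil
    }

    // resolveInstanceID stands in for the real instance_id lookup.
    func (p *CPProvisioner) resolveInstanceID(ctx context.Context, id string) (string, error) {
        return "", nil
    }

    func (p *CPProvisioner) IsRunning(ctx context.Context, id string) (bool, error) {
        if p == nil {
            return false, ErrNoBackend
        }
        instanceID, err := p.resolveInstanceID(ctx, id)
        if err != nil {
            return false, err
        }
        if instanceID == "" {
            // Historical contract: no recorded instance means (false, nil),
            // even on zero-valued receivers; no HTTP needed.
            return false, nil
        }
        if p.httpClient == nil {
            return false, ErrNoBackend // guarded only after the empty-instance check
        }
        // ... HTTP status check against the control plane ...
        return true, nil
    }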
Tests:
- 6 sub-cases across the new TestZeroValuedBackends_NoPanic
- TestDockerBackend_Contract + TestCPProvisionerBackend_Contract
now run their 2 scenarios (4 sub-cases each)
- All existing provisioner tests still green
- go build ./... + go vet ./... + go test ./... clean
Closes drift-risk #6 in docs/architecture/backends.md.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Collapses test_compliance_default_when_yaml_omits_block,
_when_yaml_block_is_empty, _explicit_optout_still_works into one
parametrized test_compliance_default_via_load_config with three
ids (yaml_omits_block, yaml_block_empty, yaml_explicit_optout).
The dataclass-default test stays separate (no tmp_path needed).
Coverage and assertions identical; net -19 lines, same 4 logical cases.
The prompt_injection check moves out of the per-case bodies into a
single tail assertion, since no payload overrides it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2056 flipped ComplianceConfig.mode default from "" to "owasp_agentic"
so every shipped template gets prompt-injection detection + PII redaction
by default. The flip is correct + already shipping, but no test asserts
the new default — a silent revert (or a refactor that reintroduces the
old "" default) would pass workspace/tests/ and ship a workspace with
compliance silently off.
Add 4 regression tests:
- test_compliance_dataclass_default — ComplianceConfig() with no args
returns mode='owasp_agentic' + prompt_injection='detect'
- test_compliance_default_when_yaml_omits_block — load_config on a yaml
without `compliance:` key still produces owasp_agentic
- test_compliance_default_when_yaml_block_is_empty — load_config on
`compliance: {}` (a common shape during template editing) still
produces owasp_agentic; covers the load_config()
`.get("mode", "owasp_agentic")` default-fill path
- test_compliance_explicit_optout_still_works — `mode: ""` in yaml
must disable compliance (the documented opt-out path)
23/23 tests pass locally (4 new + 19 existing).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
platform_auth.py validates WORKSPACE_ID at module load — EC2 user-data
sets it from cloud-init, but the CI smoke-test was missing it and
failed with 'WORKSPACE_ID is empty'. Set a placeholder UUID so the
import gate exercises only the dep-resolution path.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-fix: workspace-server's provision-timeout sweep was hardcoded
at 10 min for all runtimes. The CP-side bootstrap-watcher (cp#245)
correctly gives hermes 25 min for cold-boot (hermes installs
include apt + uv + Python venv + Node + hermes-agent — 13–25 min on
slow apt mirrors is normal). The two timeout systems disagreed:
the watcher would happily wait 25 min, but the workspace-server's
10-min sweep killed healthy hermes boots mid-install and marked
them failed.
Today's example: #2061's E2E run on 2026-04-26 at 08:06:34Z
created a hermes workspace, EC2 cloud-init was visibly making
progress on apt-installs (libcjson1, libmbedcrypto7t64) when the
sweep flipped status to 'failed' at 08:17:00Z (10:26 elapsed). The
test threw "Workspace failed: " (empty error from sql.NullString
serialization) and CI failed on a healthy boot.
Fix: provisioningTimeoutFor(runtime) — same shape as the CP's
bootstrapTimeoutFn:
- hermes: 30 min (watcher's 25 min + 5 min slack)
- others: 10 min (unchanged — claude-code/langgraph/etc. boot
in <5 min, 10 min is plenty)
PROVISION_TIMEOUT_SECONDS env override still works (applies to all
runtimes — operators who care about the runtime distinction
shouldn't use the override anyway).
Sweep query change: pulls (id, runtime, age_sec) per row instead
of pre-filtering by age in SQL. Per-row Go evaluation picks the
correct timeout. Slightly more rows scanned but bounded by the
status='provisioning' partial index — workspaces in flight, not
historical.
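A sketch of how the two pieces could fit together; the query text, helper names, and table columns are illustrative, only the hermes/30-min and default/10-min mapping comes from this change:

    package workspace

    import (
        "context"
        "database/sql"
        "os"
        "strconv"
        "time"
    )

    // provisioningTimeoutFor mirrors the CP's bootstrapTimeoutFn shape.
    // The PROVISION_TIMEOUT_SECONDS override still applies to every runtime.
    func provisioningTimeoutFor(runtime string) time.Duration {
        if s := os.Getenv("PROVISION_TIMEOUT_SECONDS"); s != "" {
            if secs, err := strconv.Atoi(s); err == nil && secs > 0 {
                return time.Duration(secs) * time.Second
            }
        }
        if runtime == "hermes" {
            return 30 * time.Minute // watcher's 25 min plus 5 min slack
        }
        return 10 * time.Minute // claude-code/langgraph/etc. boot in <5 min
    }

    // markProvisionFailed stands in for the real status flip (payload
    // includes the runtime, per the tests below).
    func markProvisionFailed(ctx context.Context, db *sql.DB, id, runtime string) {}

    // sweepStuckProvisioning pulls (id, runtime, age_sec) per row, bounded
    // by the status='provisioning' partial index, and applies the
    // per-runtime deadline in Go instead of pre-filtering by age in SQL.
    func sweepStuckProvisioning(ctx context.Context, db *sql.DB) error {
        rows, err := db.QueryContext(ctx, `
            SELECT id, runtime,
                   EXTRACT(EPOCH FROM now() - created_at)::bigint AS age_sec
            FROM workspaces
            WHERE status = 'provisioning'`)
        if err != nil {
            return err
        }
        defer rows.Close()
        for rows.Next() {
            var id, runtime string
            var ageSec int64
            if err := rows.Scan(&id, &runtime, &ageSec); err != nil {
                return err
            }
            if time.Duration(ageSec)*time.Second > provisioningTimeoutFor(runtime) {
                markProvisionFailed(ctx, db, id, runtime)
            }
        }
        return rows.Err()
    }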
Tests:
- TestProvisioningTimeout_RuntimeAware — locks in the per-runtime
mapping
- TestSweepStuckProvisioning_HermesGets30MinSlack — hermes at
11 min must NOT be flipped
- TestSweepStuckProvisioning_HermesPastDeadline — hermes at
31 min IS flipped, payload includes runtime
- Existing tests updated for the new query shape
Verified:
- go build ./... clean
- go vet ./... clean
- go test ./... all green
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Review of the runtime-pin-compat workflow:
- Add merge_group trigger so when this becomes a required check the
queue green-checks it (mirrors ci.yml convention).
- Cache pip on workspace/requirements.txt — actions/setup-python@v5
with cache: pip + cache-dependency-path. Saves ~30s per fire.
- Document the load-bearing install order: runtime FIRST so pip
honors the runtime's declared a2a-sdk constraint (the surface that
broke 2026-04-24); workspace/requirements.txt SECOND so a2a-sdk
is upgraded to the runtime image's pinned version. Import smoke
validates the upgraded combination.
Skipped: branch-protection wiring (separate ops decision, not in
scope here); ci.yml integration (the standalone schedule trigger
is the load-bearing reason to keep this workflow separate).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes Molecule-AI/molecule-controlplane#253.
Prevents recurrence of the 5-hour staging outage from 2026-04-24:
molecule-ai-workspace-runtime 0.1.13 declared `a2a-sdk<1.0` in its
metadata but actually imported `a2a.server.routes` (1.0+ only). pip
resolved successfully; every tenant workspace crashed at import. The
canary tenant ultimately caught it but only after 5 hours of degraded
staging. PR #249 fixed the version pin manually; nothing automated
catches the same class of bug for the next release.
This workflow:
- Installs molecule-ai-workspace-runtime fresh from PyPI in a Python
3.11 venv (mirrors EC2 user-data install pattern)
- Layers in workspace/requirements.txt (the runtime image's actual
dep set, including the a2a-sdk[http-server]>=1.0,<2.0 pin)
- Runs `from molecule_runtime.main import main_sync` — same import
the runtime entrypoint does
- Fails CI if pip resolution silently produced a combo that the
runtime can't actually import
Triggers:
- PR + push to main/staging touching workspace/requirements.txt or
this workflow (catches local pin changes)
- Daily 13:00 UTC schedule (catches upstream PyPI publishes that
break the pin combo without any change in our repo)
- workflow_dispatch (manual)
Concurrency cancels in-progress runs on the same ref.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`Delete`'s call to `h.provisioner.Stop()` was silently swallowing
errors — and on the SaaS/EC2 backend, Stop() is the call that
terminates the EC2 via the control plane. When Stop returned an
error (CP transient 5xx, network blip), the workspace was marked
'removed' in the DB but the EC2 stayed running with no row to
track it. The "14 orphan workspace EC2s on a 0-customer account"
incident in #1843 (40 vCPU on a 64 vCPU AWS limit) traced to this
silent-leak path.
This change aggregates Stop errors across both descendant and
self-stop calls and surfaces them as 500 to the client, matching
the loud-fail pattern from CP #262 (DeprovisionInstance) and the
DNS cleanup propagation (#269).
Idempotency:
- The DB row is already 'removed' before Stop runs (intentional,
per #73 — guards against register/heartbeat resurrection).
- `resolveInstanceID` reads instance_id without a status filter,
so a retry can replay Stop with the same instance_id.
- CP's TerminateInstance is idempotent on already-terminated EC2s.
- So a retry-after-500 either re-attempts the terminate (succeeds)
or finds the instance already gone (also succeeds).
Behaviour change at the API layer:
- Before: 200 `{"status":"removed","cascade_deleted":N}` regardless
  of Stop outcome.
- After: 500 `{"error":"...","removed_count":N,"stop_failures":K}`
  on Stop failure; 200 on success.
RemoveVolume errors stay log-and-continue — those are local
/var/data cleanup, not infra-leak class.
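A sketch of the aggregation tail, assuming a gin handler (per the c.Query usage elsewhere in this repo). The stopper interface and all names here exist only to keep the fragment compilable; the real field is the concrete type, as the test-debt note below explains:

    package handlers

    import (
        "context"
        "errors"
        "fmt"
        "net/http"

        "github.com/gin-gonic/gin"
    )

    // stopper narrows *provisioner.Provisioner to the one call this
    // sketch needs.
    type stopper interface {
        Stop(ctx context.Context, workspaceID string) error
    }

    type WorkspaceHandler struct{ provisioner stopper }

    // finishDelete sketches the tail of Delete: rows are already marked
    // 'removed', so a 500 here is retryable; a replay either re-attempts
    // the terminate or finds the instance already gone.
    func (h *WorkspaceHandler) finishDelete(c *gin.Context, ids []string, removed int) {
        var stopErrs []error
        for _, id := range ids { // descendants first, self last
            if err := h.provisioner.Stop(c.Request.Context(), id); err != nil {
                stopErrs = append(stopErrs, fmt.Errorf("stop %s: %w", id, err))
            }
        }
        if len(stopErrs) > 0 {
            c.JSON(http.StatusInternalServerError, gin.H{
                "error":         errors.Join(stopErrs...).Error(),
                "removed_count": removed,
                "stop_failures": len(stopErrs),
            })
            return
        }
        c.JSON(http.StatusOK, gin.H{"status": "removed", "cascade_deleted": removed})
    }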
Test debt acknowledged: the WorkspaceHandler's `provisioner` field
is the concrete `*provisioner.Provisioner` type, not an interface.
Adding a regression test for the new error-propagation path
requires either a refactor (introduce a Provisioner interface) or
a docker-backed integration test. Filing the refactor as a
follow-up; the change here is small and mirrors a proven pattern
(CP #262 + #269 both ship without exhaustive new test coverage
for the same reason).
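Once that seam exists, the deferred regression test needs little more than a fake whose Stop fails on demand; a sketch with hypothetical names:

    package handlers

    import (
        "context"
        "errors"
    )

    // fakeProvisioner satisfies the stopper seam sketched above; the
    // regression test injects it and asserts the handler returns 500
    // with stop_failures > 0.
    type fakeProvisioner struct{ err error }

    func (f *fakeProvisioner) Stop(ctx context.Context, id string) error { return f.err }

    var errCPDown = errors.New("cp: transient 503") // example failure to inject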
Verified:
- go build ./... clean
- go vet ./... clean
- go test ./... green across the whole module (existing TestDelete
cases unchanged behaviour for happy path)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Code-quality + efficiency review of PR #2081:
- Drop comma-ok on map type-asserts in filterPeersByQuery —
queryPeerMaps writes name/role unconditionally as string, so the
silent-empty-string fallback was cargo-culted defense that would
HIDE a real upstream shape change in tests rather than surface it.
Plain p["name"].(string) panics on violation, caught by tests.
- Trim filterPeersByQuery doc from 5 lines to 1 — function is 15
lines and self-evident.
- Refactor 6 separate Test functions into one table-driven
TestPeers_QFilter with 6 sub-tests. Net ~80 lines saved + naming
becomes readable subtest names instead of TestPeers_Q_Foo_Bar.
- Set-based peer-id comparison (peerIDSet) replaces fragile
peers[0]["id"] == "ws-alpha" asserts that would silently mask a
future sort/order regression on the production code.
- Fix the broken TestPeers_Q_NoMatches assertion: re-encoding an
unmarshalled []map collapses both null and [] to [], so the
previous json.Marshal(peers) == "[]" check was tautological. Move
the [] vs null distinction to a dedicated test
(TestPeers_Q_NoMatches_RawBodyIsArrayNotNull) that inspects the
recorder body BEFORE unmarshal.
runPeersWithQuery now returns both parsed peers and raw body so the
nil-guard test can use the bytes directly.
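The underlying gotcha, in miniature: a nil Go slice marshals to null while an empty one marshals to [], so the only trustworthy assertion is on the recorder's raw bytes before any unmarshal helper runs. Names below are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var nilPeers []map[string]any           // zero value: what an unguarded handler holds
        emptyPeers := make([]map[string]any, 0) // what the API contract promises

        a, _ := json.Marshal(nilPeers)
        b, _ := json.Marshal(emptyPeers)
        fmt.Println(string(a)) // null: a client doing `for p of peers` breaks
        fmt.Println(string(b)) // []

        // Hence the dedicated test asserts on rec.Body before unmarshalling:
        //     if body := rec.Body.String(); body != "[]" { t.Fatalf("got %s", body) }
    }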
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Peers handler at workspace-server/internal/handlers/discovery.go
ignored the ?q= query param entirely — every caller got the full peer
list regardless of what they searched for. The handler exposes peer
identities + URLs, so leaking the unfiltered set on a "filtered"
endpoint is an info-disclosure bug (CWE-862).
Fix: read c.Query("q") and post-filter the in-memory peers slice by
case-insensitive substring match against name OR role. Filtering is
done in Go after the existing 3 SQL reads — keeps the SQL bytes
identical to the no-filter path (no injection vector, no DB-driver
collation surprises) at a small cost. The peer set is bounded by a
single workspace's parent + children + siblings (typically <50
rows), so the in-memory pass is negligible.
Empty / whitespace-only q is a no-op — preserves the no-filter
allocation profile.
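A sketch of the post-filter shape; the real rows are maps built by the handler's SQL reads, a struct stands in here to keep the sketch simple:

    package handlers

    import "strings"

    // peer is a simplified stand-in for the map rows the handler builds
    // from its three SQL reads.
    type peer struct {
        Name string
        Role string
    }

    // filterPeersByQuery post-filters the in-memory peer slice by
    // case-insensitive substring match against name OR role. Empty or
    // whitespace-only q returns the input untouched, preserving the
    // no-filter allocation profile.
    func filterPeersByQuery(peers []peer, q string) []peer {
        q = strings.ToLower(strings.TrimSpace(q))
        if q == "" {
            return peers
        }
        out := make([]peer, 0, len(peers))
        for _, p := range peers {
            if strings.Contains(strings.ToLower(p.Name), q) ||
                strings.Contains(strings.ToLower(p.Role), q) {
                out = append(out, p)
            }
        }
        return out
    }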
Tests (6 new in discovery_test.go):
- TestPeers_NoQ_ReturnsAll — regression baseline (3 peers, no filter)
- TestPeers_Q_FiltersByName — q=alpha → ws-alpha only
- TestPeers_Q_CaseInsensitive — q=ALPHA → ws-alpha (locks in ToLower)
- TestPeers_Q_FiltersByRole — q=design → ws-beta (role-side match)
- TestPeers_Q_NoMatches — empty array, JSON [] not null
- TestPeers_Q_WhitespaceOnly — q=' ' treated as no-filter
Helpers peersFilterFixture + runPeersWithQuery + peerNames keep each
test scoped to the q-behaviour, not re-declaring SQL expectations.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a bot opens a PR against main and there's already another PR on
the same head branch targeting staging, GitHub's PATCH /pulls returns
422 with:
"A pull request already exists for base branch 'staging' and
head branch '<branch>'"
Pre-fix: the retarget Action exited 1 with no further action. The
target-main PR sat there as a duplicate, the workflow run showed
red, and someone had to manually close the duplicate. Today's case
(#1881 duplicate of #1820) had to be closed manually.
Fix: catch that specific 422 message and close the main-PR as
redundant instead of failing. Any OTHER 422 (or other error) still
fails loud — the grep matches the specific duplicate-base text, not
a blanket "any 422 means duplicate".
Behaviour matrix:
- PATCH succeeds → retargeted, explainer comment posted
- PATCH 422 "already exists for staging" → close main-PR with explainer (NEW)
- PATCH any other failure → workflow fails (preserves loud-fail for real bugs)
Tests: GitHub Actions offers no inline unit-test framework here.
The workflow YAML parses (validated locally) and the bash logic is
straightforward. Real verification will be the next duplicate-PR
scenario in production.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Code-quality + efficiency review of PR #2079:
- Hoist all_slugs = prod_slugs | staging_slugs out of decide() into the
caller (was rebuilt on every record — 1k records × ~50-slug union per
call). decide() signature now (r, all_slugs, ec2_names).
- Compile regexes at module scope (_WS_RE, _E2E_RE, _TENANT_RE) +
hoist platform-core literal set (_PLATFORM_CORE_NAMES). Same change
mirrored in the bash heredoc.
- Drop decorative # Rule N: comments (numbering was out of order, 3
before 2 — actively confusing).
- Move the "edits must mirror" reminder OUTSIDE the CANONICAL DECIDE
block in the .sh file, eliminating the .replace() comment-skip hack
in TestParityWithBashScript.
- Drop per-line .strip() in _slice_canonical (would mask a real
indentation bug; both blocks already at column 0).
- subTest() in TestPlatformCore loops so a single failure no longer
short-circuits the rest of the items.
- merge_group + concurrency on test-ops-scripts.yml (parity with
ci.yml gate behaviour).
- Fix the "don't" apostrophe in an inline comment that closed the
  python heredoc's single-quote and broke bash -n.
All 25 tests still pass. bash -n clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #2027.
The CF orphan sweep deletes DNS records — a misclassification could nuke
a live workspace's tunnel. The decision function had MAX_DELETE_PCT
percentage gating but no automated test of category → action mapping.
Approach: extract the decide() function to scripts/ops/sweep_cf_decide.py
as a verbatim copy bracketed by `# CANONICAL DECIDE BEGIN/END` markers.
The shell script keeps its inline heredoc (so the operational path is
untouched) but bracketed by the same markers. A parity test
(TestParityWithBashScript) reads both files and asserts the bracketed
blocks match line-for-line — drift fails CI loudly.
Coverage (25 tests, 1 file, stdlib unittest only):
- Rule 1 platform-core: apex, _vercel, _domainkey, www/api/app/doc/send/status/staging-api
- Rule 3 ws-*: live (matches EC2 prefix) on prod + staging; orphan on prod + staging
- Rule 4 e2e-*: live + orphan on staging; orphan on prod
- Rule 2 generic tenant: live prod + staging; unknown subdomain kept-for-safety
- Rule 5 fallthrough: external domain + unrelated apex
- Rule priority: api.moleculesai.app stays platform-core (not tenant); _vercel stays verification
- Safety gate: under/at/over default 50% threshold; zero-total no-divide; custom threshold
- Empty live-sets: documents that decide() alone classifies as orphan, gate is the defense
CI: new .github/workflows/test-ops-scripts.yml runs `python -m unittest
discover` against scripts/ops/ on every PR/push that touches the
directory. Lightweight — no requirements file, stdlib only.
Local: `cd scripts/ops && python -m unittest test_sweep_cf_decide -v` →
25 tests, all OK.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The actual cause-fix for the staging-tabs E2E saga (#2073/#2074/#2075).
Old behaviour: ANY 401 from any fetch on a SaaS tenant subdomain
called redirectToLogin → window.location.href = AuthKit. This is
wrong. Plenty of 401s don't mean "session is dead":
- workspace-scoped endpoints (/workspaces/:id/peers, /plugins)
require a workspace-scoped token, not the tenant admin bearer
- resource-permission mismatches (user has tenant access but not
this specific workspace)
- misconfigured proxies returning 401 spuriously
A single transient one of those yanked authenticated users back to
AuthKit. Same bug yanked the staging-tabs E2E off the tenant origin
mid-test for 6+ hours tonight, leading to the cascade of test-side
mocks (#2073/#2074/#2075) that worked around the symptom without
fixing the cause.
This PR fixes it at the source. The new logic:
- 401 on /cp/auth/* path → that IS the canonical session-dead
signal → redirect (unchanged)
- 401 on any other path with slug present → probe /cp/auth/me:
probe 401 → session genuinely dead → redirect
probe 200 → session fine, endpoint refused this token →
throw a real Error, caller renders error state
probe network err → assume session-fine (conservative) →
throw real Error
- slug empty (localhost / LAN / reserved subdomain) → throw
without redirect (unchanged)
The probe adds one extra fetch on a 401, only when slug is set
and the path isn't already auth-scoped. That's rare and
worthwhile — a transient probe round-trip is cheap; an unwanted
auth redirect is a UX disaster.
Tests:
- api-401.test.ts rewritten with the full matrix:
* /cp/auth/me 401 → redirect (no probe, that IS the signal)
* non-auth 401 + probe 401 → redirect
* non-auth 401 + probe 200 → throw, no redirect ← the fix
* non-auth 401 + probe network err → throw, no redirect
* empty slug paths (localhost/LAN/reserved) → throw, no probe
- 43 tests in canvas/src/lib/__tests__/api*.test.ts all pass
- tsc clean
The staging-tabs E2E spec's universal-401 route handler stays as
defense-in-depth (silences resource-load console noise + guards
against panels without try/catch), but the comment now describes
its role honestly: api.ts is the primary fix, the route is the
safety net.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After #2074, the staging-tabs spec stopped failing on the auth-redirect
locator timeout (good — the broadened 401-mock works) but started
failing on a different aggregate check:
Error: unexpected console errors:
Failed to load resource: the server responded with a status of 404
Failed to load resource: the server responded with a status of 404
Failed to load resource: the server responded with a status of 404
Browser console messages for resource-load failures omit the URL,
so the message is uninformative on its own — we can't filter
selectively (e.g. "is this a missing-CSS noise or a real broken
endpoint?"). The previous filter list (sentry/vercel/WebSocket/
favicon/molecule-icon) catches specific known-noisy strings but
this generic "Failed to load resource" doesn't contain any of them.
Two changes:
1. Add page.on('requestfailed') plus a page.on('response') listener
   for status >= 400, logging the URL of every failed request to test
   stdout (visible in the workflow log) — leaves a breadcrumb so a real
   bug isn't completely hidden when we filter the generic message.
2. Add "Failed to load resource" to the filter list. With (1) in
place we still see the URLs for diagnosis; the generic console
message is just noise.
Real JS exceptions (panel crash, undefined access, etc.) come with
a file path and stack trace and aren't matched by either filter,
so the gate still catches actual bugs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
#2073 caught workspace-scoped 401s but missed non-workspace paths.
SkillsTab.tsx alone fetches /plugins and /plugins/sources, both
outside the /workspaces/<id>/* tree. Either of those 401s with the
tenant admin bearer in SaaS mode → canvas/src/lib/api.ts:62-74
redirects to AuthKit → page navigates away mid-test → next locator
times out.
Same failure signature observed at 16:03Z post-#2073 merge:
e2e/staging-tabs.spec.ts:45:7 › tab: skills
TimeoutError: locator.scrollIntoViewIfNeeded: Timeout 5000ms
- navigated to "https://scenic-pumpkin-83.authkit.app/?..."
Broaden the route to "**" with `request.resourceType() !== "fetch"`
short-circuit (preserves HTML/JS/CSS pass-through) and a
/cp/auth/me skip (the dedicated mock above wins). Same 401 →
empty-body conversion logic; just a wider net.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The staging-tabs E2E has been failing for 6+ hours on the same
locator timeout — diagnosed earlier today as the canvas's
lib/api.ts:62-74 redirect-on-401 path firing mid-test:
e2e/staging-tabs.spec.ts:45:7 › tab: skills
TimeoutError: locator.scrollIntoViewIfNeeded: Timeout 5000ms
- navigated to "https://scenic-pumpkin-83.authkit.app/?..."
Several side-panel tabs (Peers, Skills, Channels, Memory, Audit,
and anything workspace-scoped) hit endpoints under
`/workspaces/<id>/*` that require a workspace-scoped token, NOT
the tenant admin bearer the test uses. The endpoints respond 401
in SaaS mode. canvas/src/lib/api.ts:62-74 reacts to ANY 401 by
setting `window.location.href` to AuthKit — yanking the page off
the tenant origin mid-test.
The test comment at line 18 already acknowledged the 401 class
("Peers tab: 401 without workspace-scoped token") but assumed
those would surface as "errored content" rather than a hard
navigation. The redirect logic in api.ts was added later and
breaks the assumption.
Fix: add a Playwright route handler that catches any 401 from
`/workspaces/<id>/*` paths and replaces with `200 + empty body`.
Body shape is best-effort by URL — list endpoints (paths not
ending in a UUID-shaped segment) get `[]`, single-resource
endpoints get `{}`. Both are valid JSON and well-written panels
render an empty state for either rather than crashing.
The two route patterns (`/workspaces/...` and `/cp/auth/me`)
don't overlap — the existing `/cp/auth/me` mock continues to
gate AuthGate's session check independently.
Verification:
- Type-check passes (tsc clean for the spec; pre-existing errors
in unrelated test files unchanged)
- Can't run staging E2E locally without CP admin token; CI will
exercise the real path against the freshly-provisioned tenant
- E2E Staging SaaS (full lifecycle) is currently green at 08:07Z,
confirming the underlying staging infra works — the failures
have been narrowly in this Playwright-tabs spec
Targets staging per molecule-core convention.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a janitor workflow that runs every hour and deletes any
e2e-prefixed staging org older than MAX_AGE_MINUTES (default 120).
Catches orgs left behind when per-test-run teardown didn't fire:
CI cancellation, runner crash, transient AWS error mid-cascade,
bash trap missed (signal 9), etc.
Why it exists despite per-run teardown:
- Per-run teardown is best-effort by definition. Any process death
after the test starts but before the trap fires leaves debris.
- GH Actions cancellation kills the runner with no grace period —
the workflow's `if: always()` step usually catches this but can
still fail on transient CP 5xx at the wrong moment.
- The CP cascade itself has best-effort branches today
(cascadeTerminateWorkspaces logs+continues on individual EC2
termination failures; DNS deletion same shape). Those need
cleanup-correctness work in the CP, but a safety net belongs in
CI either way — defense in depth.
Behaviour:
- Cron every hour. Manual workflow_dispatch with overrideable
max_age_minutes + dry_run inputs for one-off cleanups.
- Concurrency group prevents two sweeps fighting.
- SAFETY_CAP=50 — refuses to delete more than 50 orgs in a single
tick. If the CP admin endpoint goes weird and returns no
created_at (or returns no orgs at all), every e2e-* would look
stale; the cap catches the runaway-nuke case.
- DELETE is idempotent CP-side via org_purges.last_step, so a
half-deleted org from a prior sweep gets picked up cleanly on the
next tick.
- Per-org delete failures don't fail the workflow. Next hourly tick
retries. The workflow only fails loud at the safety-cap gate.
Tonight's specific motivation: ~10 canvas-tabs E2E retries in 2 hours
with various failure modes; each provisioned a fresh tenant + EC2 +
DNS + DB row. Some fraction leaked. Without this loop, ops has to
periodically run the manual sweep-cf-orphans.sh script. With it,
staging self-heals.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The canary workflow has been failing for ~30 consecutive runs (issue
#1500, opened 2026-04-21) on the same line:
[hermes-agent error 500] No LLM provider configured. Run `hermes
model` to select a provider, or run `hermes setup` for first-time
configuration.
Root cause: the canary's env block was missing E2E_OPENAI_API_KEY.
Without it, tests/e2e/test_staging_full_saas.sh provisions the workspace
with empty secrets; template-hermes start.sh seeds ~/.hermes/.env with
no provider keys; derive-provider.sh resolves the model slug
`openai/gpt-4o` to PROVIDER=openrouter (hermes has no native openai
provider in its registry); A2A request at step 8/11 fails with the
"No LLM provider configured" error from hermes-agent.
The full-lifecycle workflow (e2e-staging-saas.yml line 84) carries the
same secret correctly. Mirror its pattern + add a fail-fast preflight
so future regressions surface in <5s instead of after 8 min of
provision-then-die.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Seventh E2E bug, surfaced after the AuthGate mock from the previous
commit finally let the harness reach the tab-iteration loop:
Error: tab-skills button missing — TABS list may have drifted
Locator: locator('#tab-skills')
The TABS bar in SidePanel is `overflow-x-auto` (intentional — there
are 13 tabs and they don't all fit on smaller viewports; the
right-edge fade gradient signals the overflow). Tabs after position
~3 are clipped, and Playwright's `toBeVisible()` returns false for
clipped elements (it checks getBoundingClientRect against viewport).
Fix: `scrollIntoViewIfNeeded()` before the visibility assertion,
mirroring what SidePanel's own keyboard handler does on arrow-key
navigation. The tab is then in view and `toBeVisible()` passes.
This was the test's 7th and (probably) final harness bug. The
chain mapping all the way from "staging E2E timed out at 1200s"
this morning:
1. instance_status field name (#2066)
2. staging.moleculesai.app DNS zone (#2066)
3. X-Molecule-Org-Id TenantGuard header (#2066)
4. Hydration selector waited pre-click (#2066)
5. networkidle never settles (this PR's parent commits)
6. AuthGate /cp/auth/me redirect
7. Tab buttons clipped by overflow-x-auto
If THIS run still fails, the failure surfaces in actual product
behavior (a tab's panel content), not test mechanics.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Sixth E2E bug, surfaced after the page.goto-domcontentloaded fix
finally let the navigation complete. The harness now reaches the
canvas-root selector wait but still times out because the canvas
never renders:
TimeoutError: page.waitForSelector: Timeout 45000ms exceeded.
waiting for [aria-label="Molecule AI workspace canvas"]
Root cause: canvas/src/components/AuthGate.tsx wraps the page,
fetches /cp/auth/me on mount, and redirects to the login page when
the response is 401. The bearer header we set via
context.setExtraHTTPHeaders works for platform API calls but does
NOT satisfy /cp/auth/me — that endpoint is cookie-based (WorkOS
session). So:
1. AuthGate mounts
2. Calls fetchSession() → /cp/auth/me → 401 (no session cookie)
3. AuthGate transitions to anonymous → redirectToLogin()
4. Browser navigates away from tenant URL
5. The React Flow canvas root with the aria-label never mounts
6. waitForSelector times out at 45s
Fix: context.route() intercepts /cp/auth/me and returns a fake
Session JSON so AuthGate resolves to "authenticated" and renders
its children. The session contents are cosmetic — Session.org_id
and Session.user_id appear in a few canvas surfaces but never fail
on dummy values.
This is the cleanest fix path. Alternatives considered + rejected:
- Add a ?e2e=1 backdoor to AuthGate: production code shouldn't
have a "skip auth" flag, even gated.
- Real WorkOS login flow in Playwright: too much overhead per run.
- Skip the canvas UI test, test only API: defeats the point of
the staging E2E (which is to catch UI regressions before
promotion).
After this lands the harness should reach the workspace-node click
step and exercise tabs — only then can a real product bug (rather
than a test-harness bug) surface. The 6-bug chain mapped to:
1. instance_status field name (#2066)
2. staging.moleculesai.app DNS zone (#2066)
3. X-Molecule-Org-Id TenantGuard header (#2066)
4. Hydration selector waited pre-click (#2066)
5. networkidle never settles (this commit's parent)
6. AuthGate /cp/auth/me redirect (this commit)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Fifth E2E bug surfaced by the previous run. After the four setup-
phase fixes (instance_status, DNS zone, X-Molecule-Org-Id, hydration
selector) plus CP#259 ending the pq cache class, the harness finally
reached the actual page navigation step — and timed out there:
TimeoutError: page.goto: Timeout 45000ms exceeded.
navigating to "https://...staging.moleculesai.app/", waiting until "networkidle"
`waitUntil: "networkidle"` waits for 500ms of network silence. The
canvas keeps a WebSocket connection open + polls /events and
/workspaces every few seconds for status updates, so the network
is never idle — page.goto sits on it until the default 45s timeout
and throws.
Fix: switch to `waitUntil: "domcontentloaded"`. Returns as soon as
the HTML is parsed. React hydration plus the existing
`waitForSelector` line below is what actually gates ready-for-
interaction; the goto's job is just to land on the page.
This is a generally-applicable lesson — networkidle is broken for
any SPA with a heartbeat. Notably, our existing canvas unit tests
that mock @xyflow/react and don't open WebSockets DON'T hit this,
which is why this only surfaces against staging.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Fourth E2E bug in the staging→main chain. The previous three (#2066
setup-phase fixes) let the harness reach the actual Playwright spec.
This one is in staging-tabs.spec.ts itself.
The spec at L78 waits 45s for one of:
[role="tablist"], [data-testid="hydration-error"]
Both targets are wrong:
1. [role="tablist"] only appears AFTER the workspace node is
clicked (which happens 25 lines later at L100). Waiting for
it BEFORE the click can never resolve, so the wait always
times out at 45s regardless of whether the canvas actually
loaded.
2. [data-testid="hydration-error"] doesn't exist anywhere in
the canvas. The error banner at app/page.tsx:62 only had
role="alert" — which collides with toast notifications and
other alert-type elements, so a more-specific selector was
never wired.
Two-part fix:
- Test waits on `[aria-label="Molecule AI workspace canvas"]`
instead — that's the React Flow wrapper (Canvas.tsx:150),
always present once hydrated regardless of workspace count
or selection state. Hydration-error banner remains the
secondary OR target for the failure path.
- app/page.tsx hydration-error banner gets the missing
`data-testid="hydration-error"` attribute. role="alert"
stays for accessibility; the testid is for programmatic
detection without conflict.
After this lands, the staging-tabs spec should advance past the
initial wait, click the workspace node, and exercise each tab.
If a tab fails, we get a proper test failure rather than a 45s
timeout that obscures everything.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Third E2E bug in the staging→main chain, found while debugging the
`Workspace create 404` failure that surfaced after the previous two
E2E fixes (instance_status, staging.moleculesai.app DNS).
Root cause: workspace-server's `middleware/TenantGuard` returns 404
(not 401/403, intentionally — see the comment in `tenant_guard.go`:
"must not be inferable by probing other orgs'
machines") when a request to the tenant origin lacks one of:
- X-Molecule-Org-Id header matching MOLECULE_ORG_ID env on the tenant
- Fly-Replay-Src state from the CP router (production browser path)
- Same-origin Canvas (Referer == Host)
The E2E was a direct GitHub-Actions curl with neither — every non-
allowlisted route 404'd with the platform's ratelimit headers but
none of the security headers, which made it look like a missing
route in the platform.
The org UUID is already on the admin-orgs row alongside instance_status,
so capture it during the readiness poll and add it to the tenantAuth
header bag. Both /workspaces (POST) and /workspaces/:id (GET) now
carry it.
Allowlist still contains /health, /metrics, /registry/register,
/registry/heartbeat — so the TLS readiness step (which hits /health)
keeps working without the header.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Second related E2E bug, surfaced after #2066's instance_status fix
let the harness reach the TLS readiness step:
Error: tenant TLS: timed out after 180s
The CP provisioner writes staging tenant DNS as
<slug>.staging.moleculesai.app (with the staging. subdomain
prefix — visible in the EC2 provisioner DNS log line). The harness
was building https://<slug>.moleculesai.app (prod-zone shape),
so DNS literally didn't resolve, fetch threw NXDOMAIN inside the
silent catch, and waitFor saw null on every 5s poll until 180s
elapsed.
Fix: parameterize as STAGING_TENANT_DOMAIN env var, default
staging.moleculesai.app. Doc-comment example updated to match.
Override hatch is there only for ops running this harness against
a non-default zone.
Verified manually: a freshly-provisioned tenant
(e2e-canvas-20260425-sav9fe) was unreachable at the prod-shaped
URL (NXDOMAIN) but reached CF at the staging-shaped URL.
teardown.ts only hits CP, not the tenant URL — no fix needed there.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Staging Canvas Playwright E2E has been timing out at 1200s on every
recent run. Found via /code-review-and-quality on the staging→main
promotion chain.
The CP /cp/admin/orgs response shape is (handlers/admin.go:118):
    type adminOrgSummary struct {
        ...
        InstanceStatus string `json:"instance_status,omitempty"`
        ...
    }
There is NO top-level `status` field. The waitFor predicate compared
`row.status === "running"` against undefined on every poll — the
predicate could never resolve truthy. The harness invariably wedged
on the 20-min timeout regardless of whether the tenant was actually
provisioned.
This bug has been double-edged:
- It MASKED the #242 pq-cache-collision class for hours: the
tenants WERE provisioning fine, but the test couldn't tell.
- It survived #255, #257 (real CP fixes) — the test still timed
out, making us suspect more CP bugs that didn't exist.
Fix: poll `row.instance_status` instead. One-line change. Identical
fix for the failed-state branch one line below.
No new tests for the harness itself; the fix's correctness is
verified by the next E2E run on the affected branch passing
end-to-end. If it doesn't pass after this, there's a separate
bug we can hunt cleanly.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the "main merged but prod tenants still on old image" gap.
## Trigger chain
main merge
└─> publish-workspace-server-image (builds + pushes :latest + :<sha>)
└─> redeploy-tenants-on-main (this workflow)
└─> POST https://api.moleculesai.app/cp/admin/tenants/redeploy-fleet
└─> Canary hongmingwang + 60s soak, then batches of 3
with SSM Run Command redeploying each tenant EC2
## Features
- Auto-fires on every successful publish-workspace-server-image run.
- Manual dispatch with optional target_tag (for rollback to an older
SHA), canary_slug override, batch_size, dry_run.
- 30s delay before calling CP so GHCR edge cache serves the new
:latest consistently to every tenant's docker pull.
- Skips when publish job failed (workflow_run fires on any completion).
- Job summary renders per-tenant results as a markdown table so ops
can see which tenant, if any, broke the chain.
- Exits non-zero on HTTP != 200 or ok=false so a broken rollout marks
the commit status red.
## Secrets + vars required
- secret CP_ADMIN_API_TOKEN — Railway prod molecule-platform / CP_ADMIN_API_TOKEN
Mirrored into this repo's secrets.
- var CP_URL (optional) — defaults to https://api.moleculesai.app
## Paired with
- Molecule-AI/molecule-controlplane branch feat/tenant-auto-redeploy
which adds the /cp/admin/tenants/redeploy-fleet endpoint + the SSM
orchestration. This workflow is a no-op until that lands on prod CP.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The script's own help text documents `MAX_DELETE_PCT=62 ./sweep-cf-orphans.sh`
as the way to relax the safety gate, but the in-script assignment on line 35
was unconditional and overwrote any env value — so the override never worked.
During today's staging tenant-provision recovery (CP #255 context), hit the
57%-delete threshold and needed the documented override to clear 64 orphan
records. The one-line change to `${MAX_DELETE_PCT:-50}` honors the env
while keeping the 50% default when no caller overrides.
Ran with MAX_DELETE_PCT=62 after the fix — deleted 64 records, CF zone 111→47.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>