Two changes that close one of the leak classes from the
molecule-controlplane#420 vCPU audit:
1. sweep-stale-e2e-orgs.yml: cron */15 (was hourly), MAX_AGE_MINUTES
30 (was 120). E2E runs are 8-25 min wall clock; 30 min is safely
above the longest run while shrinking the worst-case leak window
from ~2h to ~45 min (15-min sweep cadence + 30-min threshold).
2. canary-staging.yml teardown: the per-slug DELETE used `>/dev/null
|| true`, which swallowed every failure. A 5xx or timeout from CP
looked identical to "successfully deleted" and the canary tenant
kept eating ~2 vCPU until the sweeper caught it. Now we capture
the response code and surface non-2xx as a workflow warning that
names the leaked slug.
The exit semantics stay unchanged: a cleanup miss on a single canary
shouldn't fail the canary run itself when the actual smoke check
passed. The sweeper remains the safety net for whatever slips past.
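A minimal sketch of the failure-surfacing shape described above. The
`CP_BASE_URL`/`CP_TOKEN` names, the `/admin/orgs/<slug>` path, and the
`delete_tenant` helper are illustrative stand-ins, not the real workflow
values:

```shell
# Hypothetical teardown step: capture the HTTP status instead of discarding it.
delete_tenant() {
  slug="$1"
  # -o /dev/null discards the body; -w prints only the HTTP status code.
  code=$(curl -s --max-time 30 -o /dev/null -w '%{http_code}' -X DELETE \
    "${CP_BASE_URL}/admin/orgs/${slug}" \
    -H "Authorization: Bearer ${CP_TOKEN}") || code=000
  case "$code" in
    2??) ;;  # success
    *) echo "::warning::teardown: DELETE ${slug} returned HTTP ${code}; tenant may be leaked until the sweeper runs" ;;
  esac
  return 0  # a cleanup miss alone never fails the canary
}
```

The unconditional `return 0` is what keeps the exit semantics unchanged:
the warning shows up in the run annotations, but only the smoke check
itself can redden the canary.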
Caught during the molecule-controlplane#420 audit on 2026-05-03:
three orphaned e2e canary tenants had been running for 24-95 min,
all under the previous 120-min sweep threshold, so they went unnoticed until
manual cleanup. Same `|| true` pattern exists in
e2e-staging-{canvas,external,saas,sanity}.yml; out of scope for
this PR (mechanical port; tracking separately) but the sweeper
tightening covers all of them by reducing the safety-net latency.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The redeploy-tenants-on-staging soft-warn filter and the
sweep-stale-e2e-orgs janitor both hardcoded `^e2e-` to identify
ephemeral test tenants. Runtime-test harness fixtures (RFC #2251)
mint slugs prefixed with `rt-e2e-`, which neither matcher recognized.
Concrete impact observed today:
- Two `rt-e2e-v{5,6}-*` tenants were left orphaned for ~8h on staging
(sweep-stale-e2e-orgs ignored them).
- On the next staging redeploy their phantom EC2s returned
`InvalidInstanceId: Instances not in a valid state for account`
from SSM SendCommand → CP returned HTTP 500 + ok=false.
- The redeploy soft-warn missed them too, so the workflow went
red, which broke the auto-promote-staging chain feeding the
canvas warm-paper rollout to prod.
Fix: switch both matchers to recognize the alternation
`^(e2e-|rt-e2e-)`. Long-lived prefixes (demo-prep, dryrun-*, dryrun2-*)
remain non-ephemeral and continue to hard-fail. Comment documents
the source-of-truth list and the cross-file invariant.
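The matcher change is small enough to sketch inline, assuming both files
factor the check into a helper like this (the function name is
hypothetical; each workflow may inline its own grep):

```shell
# Source-of-truth prefixes: e2e- and rt-e2e- are ephemeral test tenants.
# demo-prep, dryrun-*, dryrun2-* are long-lived and must NOT match.
is_ephemeral_slug() {
  printf '%s\n' "$1" | grep -Eq '^(e2e-|rt-e2e-)'
}
```

Anything the alternation doesn't match, including the long-lived
prefixes, keeps the existing hard-fail path.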
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a janitor workflow that runs every hour and deletes any
e2e-prefixed staging org older than MAX_AGE_MINUTES (default 120).
Catches orgs left behind when per-test-run teardown didn't fire:
CI cancellation, runner crash, transient AWS error mid-cascade,
a bash trap that never ran (SIGKILL can't be trapped), etc.
Why it exists despite per-run teardown:
- Per-run teardown is best-effort by definition. Any process death
after the test starts but before the trap fires leaves debris.
- GH Actions cancellation gives the runner little to no grace period.
The workflow's `if: always()` teardown step usually still runs, but
can itself fail on a transient CP 5xx at the wrong moment.
- The CP cascade itself has best-effort branches today
(cascadeTerminateWorkspaces logs+continues on individual EC2
termination failures; DNS deletion same shape). Those need
cleanup-correctness work in the CP, but a safety net belongs in
CI either way — defense in depth.
Behaviour:
- Cron every hour. Manual workflow_dispatch with overrideable
max_age_minutes + dry_run inputs for one-off cleanups.
- Concurrency group prevents two sweeps fighting.
- SAFETY_CAP=50: refuses to delete more than 50 orgs in a single
tick. If the CP admin endpoint misbehaves and returns no
created_at for any org, every e2e-* would look stale; the cap
catches the runaway-nuke case.
- DELETE is idempotent CP-side via org_purges.last_step, so a
half-deleted org from a prior sweep gets picked up cleanly on the
next tick.
- Per-org delete failures don't fail the workflow. Next hourly tick
retries. The workflow only fails loud at the safety-cap gate.
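The two gates in the list above can be sketched as follows. The helper
names are hypothetical (the real workflow may inline both checks), and
created_at is assumed already parsed to epoch seconds from the CP admin
listing:

```shell
MAX_AGE_MINUTES="${MAX_AGE_MINUTES:-120}"
SAFETY_CAP=50

# Stale iff the org is older than the threshold.
is_stale() {
  created="$1"  # epoch seconds
  now=$(date +%s)
  age_min=$(( (now - created) / 60 ))
  [ "$age_min" -gt "$MAX_AGE_MINUTES" ]
}

# The one loud-failure path: refuse a runaway tick before any DELETE fires.
enforce_cap() {
  # $1 = number of orgs selected for deletion this tick
  if [ "$1" -gt "$SAFETY_CAP" ]; then
    echo "::error::sweep selected $1 orgs (cap $SAFETY_CAP); refusing to delete; check the CP created_at payload"
    return 1
  fi
}
```

Per-org delete failures stay soft because is_stale will simply select
the same org again next tick; only enforce_cap fails the workflow.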
Tonight's specific motivation: ~10 canvas-tabs E2E retries in 2 hours
with various failure modes; each provisioned a fresh tenant + EC2 +
DNS + DB row. Some fraction leaked. Without this loop, ops has to
periodically run the manual sweep-cf-orphans.sh script. With it,
staging self-heals.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>