Commit Graph

253 Commits

Author SHA1 Message Date
Hongming Wang
6125700c39 test(e2e): plug /tmp scratch leaks in 3 shell E2E tests + add CI lint gate (RFC #2873 iter 2)
Three shell E2E tests created scratch files via `mktemp` but never
deleted them on early exit (assertion failure, SIGINT, errexit). Each
CI run leaked ~10-100 KB of /tmp into the runner; over ~200 runs/week
that's 20+ MB of accumulated cruft.

## Files

- **test_chat_attachments_e2e.sh** — was missing both trap and rm;
  added per-run TMPDIR_E2E with `trap rm -rf … EXIT INT TERM`.
- **test_notify_attachments_e2e.sh** — had a `cleanup()` for the
  workspace but didn't include the TMPF; only an unconditional
  `rm -f` at the bottom (line 233) which doesn't fire on early exit.
  Extended cleanup() to also rm the scratch + dropped the redundant
  trailing rm.
- **test_chat_attachments_multiruntime_e2e.sh** — `round_trip()`
  function had per-call `rm -f` only on the success path; failure
  paths leaked. Switched to script-level TMPDIR_E2E + trap; per-call
  rm dropped (the trap handles every return path including SIGINT).

Pattern: `mktemp -d -t prefix-XXX` for the dir, `mktemp <full-template>`
for files (portable across BSD/macOS + GNU coreutils — `-p` is
GNU-only and breaks Mac local-dev runs).
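
In sketch form (scratch names illustrative; the real tests follow the same shape):

  # per-run scratch dir, cleaned on every exit path
  TMPDIR_E2E="$(mktemp -d -t chat-e2e-XXX)"
  trap 'rm -rf "$TMPDIR_E2E"' EXIT INT TERM

  # scratch files live under the dir, so the one trap covers them all
  TMPF="$(mktemp "$TMPDIR_E2E/scratch.XXXXXX")"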

## Regression gate

New `tests/e2e/lint_cleanup_traps.sh` asserts every `*.sh` that calls
`mktemp` also has a `trap … EXIT` line in the file. Wired into the
existing Shellcheck (E2E scripts) CI step. Verified locally: passes
on the fixed state, fails-loud when one of the 3 fixes is reverted.
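
A minimal sketch of what the gate asserts (the real script may differ):

  status=0
  for f in tests/e2e/*.sh; do
    grep -q 'mktemp' "$f" || continue
    grep -qE 'trap .* EXIT' "$f" && continue
    echo "::error file=$f::mktemp without a 'trap … EXIT' cleanup"
    status=1
  done
  exit "$status"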

## Verification

  - shellcheck --severity=warning clean on all 4 touched files
  - lint_cleanup_traps.sh passes on the post-fix tree (6 mktemp users,
    all have EXIT trap)
  - Negative test: revert one fix → lint exits 1 with file:line +
    suggested fix pattern in the error message (CI-grokkable
    ::error file=… annotation)
  - Trap fires on SIGTERM mid-run (smoke-tested on macOS BSD mktemp)
  - Trap fires on `exit 1` (smoke-tested)

## Bars met (7-axis)

  - SSOT: trap pattern documented in lint message (one rule, one fix)
  - Cleanup: this IS the cleanup hygiene fix
  - 100% coverage: lint catches future regressions across all
    `tests/e2e/*.sh` files, not just the 3 fixed today
  - File-split: N/A (no files split)
  - Plugin / abstract / modular: N/A (test infra, not product code)

Iteration 2 of RFC #2873.
2026-05-05 04:21:26 -07:00
Hongming Wang
42f2ea3f4f fix(ci): include event_name in runtime-prbuild-compat concurrency group
Every staging push run for the last 4 SHAs was cancelled by the
matching pull_request run because both fired into the same
concurrency group:

  group: ${{ github.workflow }}-${{ ...sha }}

Same SHA → same group → cancel-in-progress=true means the second
arrival cancels the first. Empirically the push run lost the race;
staging branch-protection then saw a CANCELLED required check and
the auto-promote chain stalled.

Fix: include github.event_name in the group key. push and
pull_request runs for the same SHA now hash to different groups,
both complete, both report SUCCESS to branch protection.
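
i.e. the group key becomes (sha expression elided as above):

  group: ${{ github.workflow }}-${{ github.event_name }}-${{ ...sha }}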

Pattern of the bug:

  10:46 sha=1e8d7ae1 ev=pull_request conclusion=success
  10:46 sha=1e8d7ae1 ev=push        conclusion=cancelled
  10:45 sha=ecf5f6fb ev=pull_request conclusion=success
  10:45 sha=ecf5f6fb ev=push        conclusion=cancelled
  10:28 sha=471dff25 ev=pull_request conclusion=success
  10:28 sha=471dff25 ev=push        conclusion=cancelled
  10:12 sha=9e678ccd ev=pull_request conclusion=success
  10:12 sha=9e678ccd ev=push        conclusion=cancelled

Same drift class as the 2026-04-28 auto-promote-staging incident
(memory: feedback_concurrency_group_per_sha.md) — globally-scoped
groups silently cancel runs in matched-SHA scenarios.

This is the only workflow in .github/workflows/ that uses the
narrow per-sha shape without event_name. Others either don't use
concurrency at all, or use ${{ github.ref }} which is event-
neutral.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-05 04:01:20 -07:00
Hongming Wang
90d202c80a ci(handlers-pg): apply all migrations with skip-on-error + sanity check (#320)
Previous workflow applied only 049_delegations.up.sql — fragile to
future migrations that touch the delegations table or any other
handlers/-tested table. Operator would have to remember to update
the workflow's psql -f line per migration.

New behavior: loop every .up.sql in lexicographic order, apply each
with ON_ERROR_STOP=1 + per-migration result captured. Failed migrations
are SKIPPED rather than blocking the suite — handles the historical
migrations (017_memories_fts_namespace, 042_a2a_queue, etc.) that
depend on tables since renamed/dropped and can't replay from scratch.
Migrations that DO succeed land their tables, which is sufficient for
the integration tests in handlers/.

Sanity gate at the end: if the delegations table is missing after the
replay, hard-fail with a loud error. That catches a real regression
where 049 itself becomes broken (e.g., schema rename), separate from
the historical-broken-migration noise above.

Per-migration log line ("✓" or "⊘ skipped") makes it easy to spot
when a migration that SHOULD have replayed didn't.
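
Net shape of the replay step, sketched (DB URL and paths illustrative):

  for f in workspace-server/migrations/*.up.sql; do
    if psql "$DB_URL" -v ON_ERROR_STOP=1 -f "$f" >/dev/null 2>&1; then
      echo "✓ $f"
    else
      echo "⊘ skipped $f"   # historical migration that can't replay
    fi
  done
  # sanity gate: hard-fail if the table handlers/ actually needs is absent
  psql "$DB_URL" -c 'SELECT 1 FROM delegations LIMIT 1;' >/dev/null \
    || { echo '::error::delegations table missing after replay'; exit 1; }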

Verified locally: full migration chain runs, 049 lands, all 7
integration tests pass against the chained-migration DB.

Closes #320.
2026-05-05 03:48:43 -07:00
Hongming Wang
4c9f12258d fix(delegations): preserve result_preview through completion + add real-Postgres integration gate
Two-part PR:

## Fix: result_preview was lost on completion

Self-review of #2854 caught a real bug. SetStatus has a same-status
replay no-op; the order of calls in `executeDelegation` completion
+ `UpdateStatus` completed branch clobbered the preview field:

  1. updateDelegationStatus(completed, "")    fires
  2. inner recordLedgerStatus(completed, "", "")
       → SetStatus transitions dispatched → completed with preview=""
  3. outer recordLedgerStatus(completed, "", responseText)
       → SetStatus reads current=completed, status=completed
       → SAME-STATUS NO-OP, never writes responseText → preview lost

Confirmed against real Postgres (see integration test). Strict-sqlmock
unit tests passed because they pin SQL shape, not row state.

Fix: call the WITH-PREVIEW recordLedgerStatus FIRST, then
updateDelegationStatus. The inner call becomes the no-op (correctly
preserves the row written by the outer call).

Same gap fixed in UpdateStatus handler — body.ResponsePreview was
never landing in the ledger because updateDelegationStatus's nested
SetStatus(completed, "", "") fired first.

## Gate: real-Postgres integration tests + CI workflow

The unit-test-only workflow that shipped #2854 was the root cause.
Adding two layers of defense:

1. workspace-server/internal/handlers/delegation_ledger_integration_test.go
   — `//go:build integration` tag, requires INTEGRATION_DB_URL env var.
   4 tests:
     * ResultPreviewPreservedThroughCompletion (regression gate for the
       bug above — fires the production call sequence in fixed order
       and asserts row.result_preview matches)
     * ResultPreviewBuggyOrderIsLost (DIAGNOSTIC: confirms the
       same-status no-op contract works as designed; if SetStatus's
       semantics ever change, this test fires)
     * FailedTransitionCapturesErrorDetail (failure-path symmetry)
     * FullLifecycle_QueuedToDispatchedToCompleted (forward-only +
       happy path)

2. .github/workflows/handlers-postgres-integration.yml
   — required check on staging branch protection. Spins postgres:15
   service container, applies the delegations migration, runs
   `go test -tags=integration` against the live DB. Always-runs +
   per-step gating on path filter (handlers/wsauth/migrations) so
   the required-check name is satisfied on PRs that don't touch
   relevant code.

Local dev workflow (file header documents this):

  docker run --rm -d --name pg -e POSTGRES_PASSWORD=test -p 55432:5432 postgres:15-alpine
  psql ... < workspace-server/migrations/049_delegations.up.sql
  INTEGRATION_DB_URL="postgres://postgres:test@localhost:55432/molecule?sslmode=disable" \
    go test -tags=integration ./internal/handlers/ -run "^TestIntegration_"

## Why this matters

Per memory `feedback_mandatory_local_e2e_before_ship`: backend PRs
MUST verify against real Postgres before claiming done. sqlmock pins
SQL shape; only a real DB can verify row state. The workflow makes
this gate mandatory rather than optional.
2026-05-05 02:47:52 -07:00
Hongming Wang
c89f17a2aa fix(branch-protection-drift): hard-fail on schedule only, soft-skip + warn on PR
#2834 added a hard-fail when GH_TOKEN_FOR_ADMIN_API is missing on
schedule + pull_request + workflow_dispatch. The PR-trigger hard-fail
is now blocking every PR in the repo because the secret hasn't been
provisioned yet — including the staging→main auto-promote PR (#2831),
which has no path to set repo secrets itself.

Per feedback_schedule_vs_dispatch_secrets_hardening.md the original
concern is automated/silent triggers losing the gate without a human
to notice. That concern applies to **schedule** specifically:

- schedule: cron, no human, silent soft-skip = invisible regression →
  KEEP HARD-FAIL.
- pull_request: a human is reviewing the PR diff and will see workflow
  warnings inline. A PR cannot retroactively drift live state — drift
  happens *between* PRs (UI clicks, manual gh api PATCH), which the
  schedule canary catches. The PR-time gate would only catch typos in
  apply.sh, which the *_payload unit tests catch more directly.
  → SOFT-SKIP with a prominent warning.
- workflow_dispatch: operator override, may not have configured the
  secret yet. → SOFT-SKIP with warning.

The skip is explicit (SKIP_DRIFT_CHECK=1 surfaced to env, then a step
`if:` guard) so it's auditable in the workflow run UI, not silently
swallowed.

Unblocks #2831 (auto-promote staging→main) + every PR currently behind
this check.
2026-05-04 21:20:30 -07:00
Hongming Wang
2e505e7748 fix(branch-protection): apply.sh respects live state + full-payload drift
Multi-model review of #2827 caught: the script as-shipped would have
silently weakened branch protection on EVERY non-checks dimension
the moment anyone ran it. Live staging had

  enforce_admins=true, dismiss_stale_reviews=false, strict=true,
  allow_fork_syncing=false, bypass_pull_request_allowances={
    HongmingWang-Rabbit + molecule-ai app
  }

Script wrote the opposite for all five. Per memory
feedback_dismiss_stale_reviews_blocks_promote.md, the
dismiss_stale_reviews flip alone is the load-bearing one — would
silently re-block every auto-promote PR (cost user 2.5h once).

This PR:

1. apply.sh: per-branch payloads (build_staging_payload /
   build_main_payload) that codify the deliberate per-branch policy
   already on the repo, with the script's net contribution being
   ONLY the new check names (Canvas tabs E2E + E2E API Smoke on
   staging, Canvas tabs E2E on main).

2. apply.sh: R3 preflight that hits /commits/{sha}/check-runs and
   asserts every desired check name has at least one historical run
   on the branch tip (sketched after this list). Catches typos like
   "Canvas Tabs E2E" vs
   "Canvas tabs E2E" — pre-fix a typo would silently block every PR
   forever waiting for a context that never emits. Skip via
   --skip-preflight for genuinely-new workflows whose first run
   hasn't fired.

3. drift_check.sh: compares the FULL normalised payload (admin,
   review, lock, conversation, fork-syncing, deletion, force-push)
   not just the checks list. Pre-fix the drift gate would have
   missed a UI click that flipped enforce_admins or
   dismiss_stale_reviews. Drops app_id from the comparison since
   GH auto-resolves -1 to a specific app id post-write.

4. branch-protection-drift.yml: per memory
   feedback_schedule_vs_dispatch_secrets_hardening.md — schedule +
   pull_request triggers HARD-FAIL when GH_TOKEN_FOR_ADMIN_API is
   missing (silent skip masks the gate disappearing).
   workflow_dispatch keeps soft-skip for one-off operator runs.
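
A minimal sketch of the R3 preflight from item 2, assuming the gh CLI
and jq (check names and variables illustrative):

  for check in "E2E API Smoke Test" "Canvas tabs E2E"; do
    n=$(gh api "repos/$REPO/commits/$TIP_SHA/check-runs" --paginate \
        --jq "[.check_runs[] | select(.name == \"$check\")] | length")
    if [ "${n:-0}" -eq 0 ]; then
      echo "::error::no historical run named '$check' on $TIP_SHA — typo?"
      exit 1
    fi
  done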

Verified by running drift_check against live state: pre-fix would
have shown 5 destructive drifts on staging + 5 on main. Post-fix
shows ONLY the 2 intended additions on staging + 1 on main, which
go away after `apply.sh` runs.
2026-05-04 20:52:11 -07:00
Hongming Wang
7cc1c39c49 ci: e2e coverage matrix + branch-protection-as-code
Closes #9.

Three pieces, all small:

1. **docs/e2e-coverage.md** — source of truth for which E2E suites
   guard which surfaces. Today three were running but informational
   only on staging; that's how the org-import silent-drop bug shipped
   without a test catching it pre-merge. Now the matrix shows what's
   required where + a follow-up note for the two suites that need an
   always-emit refactor before they can be required.

2. **tools/branch-protection/apply.sh** — branch protection as code.
   Lets `staging` and `main` required-checks live in a reviewable
   shell script instead of UI clicks that get lost between admins.
   This PR's net change: add `E2E API Smoke Test` and `Canvas tabs E2E`
   as required on staging. Both already use the always-emit path-filter
   pattern (no-op step emits SUCCESS when the workflow's paths weren't
   touched), so making them required can't deadlock unrelated PRs.

3. **branch-protection-drift.yml** — daily cron + drift_check.sh
   that compares live protection against apply.sh's desired state.
   Catches out-of-band UI edits before they drift further. Fails the
   workflow on mismatch; ops re-runs apply.sh or updates the script.

Out of scope (filed as follow-ups):
- e2e-staging-saas + e2e-staging-external use plain `paths:` filters
  and never trigger when paths are unchanged. They need refactoring
  to the always-emit shape (same as e2e-api / e2e-staging-canvas)
  before they can be required.
- main branch protection mirrors staging here; if main wants the
  E2E SaaS / External added later, do it in apply.sh and rerun.

Operator must apply once after merge:
  bash tools/branch-protection/apply.sh
The drift check picks it up from there.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 20:21:59 -07:00
Hongming Wang
8df8487bbe fix(auto-promote): treat E2E completed/cancelled as defer, not failure
Bug: the case statement at line 189 grouped completed/failure |
completed/cancelled | completed/timed_out into the same "abort
+ exit 1" branch. cancelled ≠ failure — when per-SHA concurrency
(memory: feedback_concurrency_group_per_sha) cancels an older E2E
run because a newer push landed, the workflow blocked the whole
auto-promote chain on a non-failure.

Caught 2026-05-05 02:03 on sha 31f9a5e: E2E got cancelled by
concurrency, auto-promote :latest aborted with exit 1, the next
auto-promote-staging cycle had to manually clean up.

Split: failure/timed_out keep the abort path. cancelled gets its
own clean-defer branch (same shape as in_progress) — proceed=false
without exit 1, with a step-summary explaining likely concurrency
supersession and pointing operators at manual dispatch if they
need that specific SHA promoted.
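
Resulting case shape, sketched (variable names illustrative):

  case "$STATUS/$CONCLUSION" in
    completed/success) PROCEED=true ;;
    completed/failure|completed/timed_out)
      # real failures keep the hard abort
      echo "::error::E2E concluded $CONCLUSION for $SHA"; exit 1 ;;
    completed/cancelled|in_progress/*)
      # clean defer — likely superseded by per-SHA concurrency
      PROCEED=false ;;
  esac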

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 19:26:29 -07:00
Hongming Wang
c5dd14d8db fix(workflows): preserve curl stderr in 8 status-capture sites
Self-review of PR #2810 caught a regression: my mass-fix added
`2>/dev/null` to every curl invocation, suppressing stderr. The
original `|| echo "000"` shape only swallowed exit codes — stderr
(curl's `-sS`-shown dial errors, timeouts, DNS failures) still went
to the runner log so operators could see WHY a connection failed.

After PR #2810 the next deploy failure would log only the bare
HTTP code with no context. That's exactly the kind of diagnostic
loss that makes outages take longer to triage.

Drop `2>/dev/null` from each curl line — keep it on the `cat`
fallback (which legitimately suppresses "no such file" when curl
crashed before -w ran). The `>tempfile` redirect alone captures
curl's stdout (where -w writes) without touching stderr.
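
Net shape after #2810 plus this fix, sketched (paths illustrative):

  set +e
  curl -sS -o /tmp/body.json -w '%{http_code}' "$URL" > /tmp/http_code
  set -e
  # curl's stderr still reaches the runner log; the cat fallback quietly
  # covers the case where curl crashed before -w ever ran
  HTTP_CODE=$(cat /tmp/http_code 2>/dev/null || echo "000")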

Same 8 files as #2810: redeploy-tenants-on-{main,staging},
sweep-stale-e2e-orgs, e2e-staging-{sanity,saas,external,canvas},
canary-staging.

Tests:
- All 8 files pass the lint
- YAML valid

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 18:54:50 -07:00
Hongming Wang
463316772b fix(workflows): rewrite curl status-capture to prevent exit-code pollution
The 2026-05-04 redeploy-tenants-on-main run for sha 2b862f6 emitted
"HTTP 000000" and failed the deploy. Root cause: when curl exits non-
zero (connection reset → 56, --fail-with-body 4xx/5xx → 22), the
`-w '%{http_code}'` already wrote a status to stdout; the inline
`|| echo "000"` then fires AND appends "000" to the captured
substitution stdout. Result: HTTP_CODE="<actual><000>" — fails string
comparisons against "200" while looking superficially right.

Same class of bug the synth-E2E §7c gate hit twice (PRs #2779/#2783
+ #2797). Memory feedback_curl_status_capture_pollution.md.

Mass fix in 8 workflows: route -w into a tempfile so curl's exit
code can't pollute stdout. Wrap with set +e/-e so the non-zero
curl exit doesn't trip the outer pipeline.

  redeploy-tenants-on-main.yml      (production-critical, caught the bug)
  redeploy-tenants-on-staging.yml   (sibling)
  sweep-stale-e2e-orgs.yml          (cleanup loop)
  e2e-staging-sanity.yml             (E2E safety-net teardown)
  e2e-staging-saas.yml
  e2e-staging-external.yml
  e2e-staging-canvas.yml
  canary-staging.yml

Plus a new lint workflow `lint-curl-status-capture.yml` that runs on
every PR/push touching `.github/workflows/**`. Multi-line aware:
collapses bash `\` continuations, then matches the buggy
$(curl ... -w '%{http_code}' ... || echo "000") subshell shape.
Distinguishes from the SAFE $(cat tempfile || echo "000") shape
(cat with missing file emits empty stdout, no pollution).
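
The two shapes side by side, sketched:

  # flagged — curl's non-zero exit appends "000" after the -w output:
  HTTP_CODE=$(curl -sS -o /tmp/body -w '%{http_code}' "$URL" || echo "000")

  # allowed — cat on a missing file emits empty stdout, no pollution:
  HTTP_CODE=$(cat /tmp/http_code 2>/dev/null || echo "000")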

Verified:
- All 8 workflows pass the lint locally
- A known-bad injection is caught
- A known-safe cat-fallback passes through
- yaml.safe_load clean on all changed files

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 18:29:38 -07:00
Hongming Wang
26fa220bef ci(coverage): per-file 75% floor for MCP/inbox/auth Python critical paths
Closes part of #2790 (Phase A). The Python total floor at 86% (set in
workspace/pytest.ini, issue #1817) averages over ~6000 lines, so a
single MCP-critical file could regress to ~50% with no CI complaint as
long as other modules compensate. This is the same distribution gap
that #1823 closed Go-side: total floor passes while a critical handler
sits at 0%.

Added gates for these five files (per-file floor 75%):
- workspace/a2a_mcp_server.py — MCP dispatcher (PR #2766 / #2771)
- workspace/mcp_cli.py — molecule-mcp standalone CLI entry
- workspace/a2a_tools.py — workspace-scoped tool implementations
- workspace/inbox.py — multi-workspace inbox + per-workspace cursors
- workspace/platform_auth.py — per-workspace token resolver

These handle multi-tenant routing, auth tokens, and inbox dispatch.
Risk shape mirrors Go-side tokens*/secrets* — a 0%/50% file here is
exactly where the PR #2766 dispatcher bug class slips through without
a structural test.

Floor 75% is strictly additive — current actuals 80-96% (measured
2026-05-04). No existing PR fails. Ratchet plan in COVERAGE_FLOOR.md
target 90% by 2026-08-04.

Implementation: pytest already writes .coverage; new step emits a JSON
view scoped to the critical files via `coverage json --include="*name"`,
then jq extracts each file's percent_covered. Keys are matched on the
exact path rather than the bare basename, so
workspace/builtin_tools/a2a_tools.py (a different 100% file) doesn't
shadow workspace/a2a_tools.py.

Verified locally with the actual coverage data:
- floor=75 → 0 failures (matches current state)
- floor=81 → 1 failure (a2a_tools.py at 80%) — proves the gate trips

Pairs with PR #2791 (Phase B — schema↔dispatcher AST drift gate). Phase
C (molecule-mcp e2e harness) remains the largest piece in #2790.

YAML validated locally before commit per
feedback_validate_yaml_before_commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 16:35:21 -07:00
Hongming Wang
ff1003e5f6 ci(canary): bump timeout-minutes 12 → 20 to absorb apt tail latency
Today's 4 cancelled canaries (25319625186 / 25320942822 / 25321618230 /
25322499952) were all blown by the workflow timeout despite the
underlying tenant boot completing successfully (PR molecule-controlplane#455
fix verified — boot events all reach `boot_script_finished/ok`).

Why the budget was wrong:

The tenant user-data install phase runs apt-get update + install of
docker.io / jq / awscli / caddy / amazon-ssm-agent FROM RAW UBUNTU on
every tenant boot — none of it is pre-baked into the tenant AMI
(EC2_AMI=ami-0ea3c35c5c3284d82, raw Jammy 22.04). Empirical
fetch_secrets/ok timing across today's canaries:

  51s   debug-mm-1777888039 (09:47Z)
  82s   25319625186          (12:42Z)
  143s  25320942822          (13:11Z)
  625s  25322499952          (13:43Z)

Same EC2_AMI, same instance type (t3.small), same user-data install
sequence — variance is entirely apt-mirror tail latency. A 12-min job
budget leaves only ~2 min for the workspace on slow-apt days; the
workspace itself needs ~3.5 min for claude-code cold boot, so the
budget is structurally too tight whenever apt is slow.

20 min absorbs even the 10+ min boot worst-case and still leaves the
workspace its full ~7 min budget. Cap stays well under the runner's
6-hour ubuntu-latest job ceiling.

Real fix: pre-bake caddy + ssm-agent into the tenant AMI so the boot
phase is no-ops on cached pkgs (will file controlplane#TBD as
follow-up — packer/install-base.sh today only bakes the WORKSPACE thin
AMI, not the tenant AMI; tenants always boot from raw Ubuntu).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 07:02:12 -07:00
Hongming Wang
032c011b37 ci: bump continuous-synth-e2e cadence 3→6 fires/hour, all clean slots
Change cron from '10,30,50' (3 fires/hour) to '2,12,22,32,42,52'
(6 fires/hour). All new slots are 1-3 min away from any other
cron, avoiding both the cf-sweep collisions (:15, :45) and the
:30 heavy slot (canary-staging /30, sweep-aws-secrets,
sweep-stale-e2e-orgs every :15).

Why: empirically 2026-05-04 the canary fired only once per hour
on the 10,30,50 schedule (see #2726). Bumping fires-per-hour
gives more chances to land a survived fire under GH's load-
related drop ratio, and keeping all slots in clean lanes
minimizes the per-fire drop probability.

At empirically-observed ~67% drop ratio, 6 attempts/hour yields
~2 effective fires = ~30 min cadence; closer to the 20-min
target than the current shape and provides a real degradation
alarm if drops get worse.

Cost: ~$0.50/day → ~$1/day. Negligible.

Closes #2726.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-04 05:10:48 -07:00
Hongming Wang
98f883cb99 e2e: add direct-Anthropic LLM-key path alongside MiniMax + OpenAI
Adds a third secrets-injection branch in test_staging_full_saas.sh
behind a new E2E_ANTHROPIC_API_KEY env var, wired into all three
auto-running E2E workflows (canary-staging, e2e-staging-saas,
continuous-synth-e2e) via a new MOLECULE_STAGING_ANTHROPIC_API_KEY
repo secret slot.

Operator motivation: after #2578 (the staging OpenAI key went over
quota and stayed dead 36+ hours) we shipped #2710 to migrate the
canary + full-lifecycle E2E to claude-code+MiniMax. Discovered post-
merge that MOLECULE_STAGING_MINIMAX_API_KEY had never been set after
the synth-E2E migration on 2026-05-03 either — synth has been red the
whole time, not just OpenAI quota.

Setting up a MiniMax billing account from scratch is non-trivial
(needs platform-specific signup, KYC, top-up). Operators who already
have an Anthropic API key for their own Claude Code session can now
just set MOLECULE_STAGING_ANTHROPIC_API_KEY and have all three
auto-running E2E gates green within one cron firing.

Priority chain in test_staging_full_saas.sh (first non-empty wins):
  1. E2E_MINIMAX_API_KEY      → MiniMax (cheapest)
  2. E2E_ANTHROPIC_API_KEY    → direct Anthropic (cheaper than gpt-4o,
                                lower setup friction than MiniMax)
  3. E2E_OPENAI_API_KEY       → langgraph/hermes paths
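
In sketch form (payload shapes abbreviated — the real script builds
fuller JSON per branch):

  if [ -n "${E2E_MINIMAX_API_KEY:-}" ]; then
    SECRETS_JSON=$(jq -n --arg k "$E2E_MINIMAX_API_KEY" '{MINIMAX_API_KEY: $k}')
  elif [ -n "${E2E_ANTHROPIC_API_KEY:-}" ]; then
    SECRETS_JSON=$(jq -n --arg k "$E2E_ANTHROPIC_API_KEY" '{ANTHROPIC_API_KEY: $k}')
  elif [ -n "${E2E_OPENAI_API_KEY:-}" ]; then
    SECRETS_JSON=$(jq -n --arg k "$E2E_OPENAI_API_KEY" '{OPENAI_API_KEY: $k}')
  else
    echo "no LLM key set" >&2; exit 1
  fi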

Verify-key case-statement in all three workflows accepts EITHER
MiniMax OR Anthropic for runtime=claude-code; error message names
both options so operators know they don't have to register a MiniMax
account if they already have an Anthropic key.
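
Guard shape, sketched (messages illustrative):

  case "$E2E_RUNTIME" in
    claude-code)
      [ -n "${E2E_MINIMAX_API_KEY:-}" ] || [ -n "${E2E_ANTHROPIC_API_KEY:-}" ] \
        || { echo "::error::claude-code needs a MiniMax OR Anthropic key"; exit 1; } ;;
    hermes|langgraph)
      [ -n "${E2E_OPENAI_API_KEY:-}" ] \
        || { echo "::error::$E2E_RUNTIME needs an OpenAI key"; exit 1; } ;;
  esac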

Pinned to runtime=claude-code — hermes/langgraph use OpenAI-shaped
envs and won't honour ANTHROPIC_API_KEY without further wiring.

After this lands + secret is set, the dispatched canary verifies the
new path:
  gh workflow run canary-staging.yml --repo Molecule-AI/molecule-core --ref staging
2026-05-04 00:51:14 -07:00
Hongming Wang
eaee113416 e2e-staging-saas: same migration off OpenAI default to claude-code+MiniMax
Bundles the same hermes+OpenAI → claude-code+MiniMax migration onto
the full-lifecycle E2E that's been red on every provisioning-critical
push since 2026-05-01. Same root cause as the canary fix in the prior
commit: MOLECULE_STAGING_OPENAI_KEY hit insufficient_quota and there's
no SLA on operator billing top-up.

Same shape as canary commit: claude-code as default runtime + MiniMax
as primary key + hermes/langgraph kept as workflow_dispatch options
with OpenAI fallback. Per-runtime verify-key case-statement matches
canary-staging.yml + continuous-synth-e2e.yml byte-for-byte.

Two extra wrinkles vs canary:
- Dispatch input `runtime` default flipped from "hermes" to "claude-code"
  so operators dispatching from the UI get the safe path by default.
  They can still pick hermes/langgraph from the dropdown when they
  specifically want to exercise OpenAI.
- E2E_MODEL_SLUG is dispatch-aware: MiniMax-M2.7-highspeed for
  claude-code, openai/gpt-4o for hermes (slash-form per
  derive-provider.sh), openai:gpt-4o for langgraph (colon-form per
  init_chat_model). The branch comment in lib/model_slug.sh covers
  the rationale; pinning the slug here keeps the dispatch UX stable
  even when operators don't override.

After this lands + the canary commit lands, the only OpenAI-dependent
E2E surface is the operator-dispatch fallback. The cron canary, the
synth E2E, AND the full-lifecycle gate are all on MiniMax — separate
billing account, no OpenAI quota dependency on auto-runs.
2026-05-04 00:20:36 -07:00
Hongming Wang
6f8f978975 canary-staging: migrate from hermes+OpenAI to claude-code+MiniMax
Mirror the migration continuous-synth-e2e.yml made on 2026-05-03 (#265).
Both workflows hit the same MOLECULE_STAGING_OPENAI_KEY which went over
quota on 2026-05-01 (#2578) and stayed dead — the canary has been red
for 36+ hours waiting on operator billing top-up.

This switch breaks the canary's dependency on OpenAI billing entirely:
claude-code template's `minimax` provider routes ANTHROPIC_BASE_URL to
api.minimax.io/anthropic and reads MINIMAX_API_KEY at boot. MiniMax is
~5-10x cheaper per token than gpt-4.1-mini AND on a separate billing
account, so a future OpenAI quota collapse no longer wedges the
canary's "is staging alive?" signal.

Changes:
- E2E_RUNTIME: hermes → claude-code
- Add E2E_MODEL_SLUG: MiniMax-M2.7-highspeed (pin to MiniMax — the
  per-runtime claude-code default is "sonnet" which routes to direct
  Anthropic and would defeat the cost saving)
- Add E2E_MINIMAX_API_KEY env wired to MOLECULE_STAGING_MINIMAX_API_KEY
- Keep E2E_OPENAI_API_KEY as fallback for operator-dispatched runs that
  set E2E_RUNTIME=hermes via workflow_dispatch
- "Verify OpenAI key present" → per-runtime "Verify LLM key present"
  case statement matching synth E2E's exact shape (claude-code requires
  MiniMax, langgraph/hermes require OpenAI). Hard-fail on missing
  required key per #2578's lesson — soft-skip silently fell through to
  the wrong SECRETS_JSON branch and produced a confusing auth error
  5 min later instead of the clean "secret missing" message at the top.

Verifies #2578 root cause won't recur on the canary path. The synth
E2E and the manual e2e-staging-saas dispatch can still hit OpenAI when
explicitly chosen — only the cron canary moves off it.
2026-05-04 00:18:03 -07:00
Hongming Wang
9689c6f6d5 fix(synth-e2e): verify-secrets step must hard-fail (exit 0 only ends step)
The previous soft-skip-on-dispatch path used `exit 0`, which only
ends the STEP — the rest of the workflow continued with empty
secrets. Caught 2026-05-04 by dispatched run 25296530706:
  - E2E_MINIMAX_API_KEY: empty
  - verify-secrets printed warning + exit 0
  - Install required tools: ran
  - Run synthetic E2E: ran with empty MiniMax key
  - SECRETS_JSON branched to OpenAI shape (MINIMAX empty → fall through)
  - But model slug stayed MiniMax-M2.7-highspeed (workflow env)
  - Workspace booted with OpenAI keys + MiniMax model
  - 5 min later: "Agent error (Exception)" — claude SDK 401'd
    against api.minimax.io with the OpenAI key

The confusing failure mode silently masked the real problem (missing
secret) under a runtime-error label. Fix: drop both soft-skip paths
and exit 1 always. Operators who want to verify a YAML change without
setting up secrets can read the verify-secrets step's stderr — the
failure IS the verification signal.

Pure visibility fix; preserves the cron hard-fail path (now also the
dispatch hard-fail path). No mechanism change beyond the exit code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 18:32:26 -07:00
Hongming Wang
a306a97dd3 ci(synth-e2e): move cron off :00 to dodge GH scheduler drops
GitHub Actions scheduler de-prioritises :00 cron firings under load.
Empirical 2026-05-03: the canary's cron was '0,20,40 * * * *' but
actual firings landed at :08, :03, :01, :03 — :20 and :40 silently
dropped. Detection latency degraded from claimed 20 min to actual
~60 min worst case.

Move to '10,30,50 * * * *':
- :10/:30/:50 sit 10 min off the top-of-hour load peak
- Still 5 min from :15 sweep-cf-orphans and :45 sweep-cf-tunnels
  (the original constraint that kept us off :15/:45)
- Same 20-min cadence; only the phase changes

No code change beyond the cron expression + comment refresh.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 17:28:45 -07:00
Hongming Wang
8b9e7e6d59 ci: port DELETE-verify pattern to remaining staging e2e workflows
Follow-up to #2648 — same `>/dev/null || true` swallow-on-error
pattern existed in:

  e2e-staging-canvas.yml   (single-slug)
  e2e-staging-saas.yml     (loop)
  e2e-staging-sanity.yml   (loop)
  e2e-staging-external.yml (loop, was `>/dev/null 2>&1` variant)

All four now capture the HTTP code, log a "[teardown] deleted $slug
(HTTP $code)" line on success, and emit a workflow warning naming
the slug + body excerpt on non-2xx. Loop bodies also tally + summarise
total leaks at the end.

Exit semantics unchanged: a single cleanup miss still doesn't fail-flag
the test (sweep-stale-e2e-orgs is the safety net within ~45 min). The
behavior change is purely surfacing — failures that were silent are
now visible on the workflow run page.

Pairs with #2648's tightened sweeper. Together: per-run cleanup
failures are visible AND the safety net catches them quickly.

Closes the per-workflow port noted as out-of-scope in #2648.
See molecule-controlplane#420.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 16:24:43 -07:00
Hongming Wang
3cd8c53de0 ci: tighten e2e cleanup race window 120m -> ~45m worst case
Two changes that close one of the leak classes from the
molecule-controlplane#420 vCPU audit:

1. sweep-stale-e2e-orgs.yml: cron */15 (was hourly), MAX_AGE_MINUTES
   30 (was 120). E2E runs are 8-25 min wall clock; 30 min is safely
   above the longest run while shrinking the worst-case leak window
   from ~2h to ~45 min (15-min sweep cadence + 30-min threshold).

2. canary-staging.yml teardown: the per-slug DELETE used `>/dev/null
   || true`, which swallowed every failure. A 5xx or timeout from CP
   looked identical to "successfully deleted" and the canary tenant
   kept eating ~2 vCPU until the sweeper caught it. Now we capture
   the response code and surface non-2xx as a workflow warning that
   names the leaked slug.

The exit semantics stay unchanged — a single-canary cleanup miss
shouldn't fail-flag the canary itself when the actual smoke check
passed. The sweeper is the safety net for whatever slips past.

Caught during the molecule-controlplane#420 audit on 2026-05-03 —
3 e2e canary tenant orphans were running for 24-95 min, all under
the previous 120-min sweep threshold so they went unnoticed until
manual cleanup. Same `|| true` pattern exists in
e2e-staging-{canvas,external,saas,sanity}.yml; out of scope for
this PR (mechanical port; tracking separately) but the sweeper
tightening covers all of them by reducing the safety-net latency.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 16:08:40 -07:00
Hongming Wang
79a0203798 feat(synth-e2e): switch canary to claude-code + MiniMax-M2.7-highspeed
Cuts the per-run LLM cost ~10x (MiniMax M2.7 vs gpt-4.1-mini) and
removes the recurring OpenAI-quota-exhaustion failure mode that took
the canary down on 2026-05-03 (#265 — staging quota burnt for ~16h).

Path:
  E2E_RUNTIME=claude-code (default)
  → workspace-configs-templates/claude-code-default/config.yaml's
    `minimax` provider (lines 64-69)
  → ANTHROPIC_BASE_URL auto-set to api.minimax.io/anthropic
  → reads MINIMAX_API_KEY (per-vendor env, no collision with
    GLM/Z.ai etc.)

Workflow changes (continuous-synth-e2e.yml):
- Default runtime: langgraph → claude-code
- New env: E2E_MODEL_SLUG (defaults to MiniMax-M2.7-highspeed,
  overridable via workflow_dispatch)
- New secret wire: E2E_MINIMAX_API_KEY ←
  secrets.MOLECULE_STAGING_MINIMAX_API_KEY
- Per-runtime missing-secret guard: claude-code requires MINIMAX,
  langgraph/hermes require OPENAI. Cron firing hard-fails on missing
  key for the active runtime; dispatch soft-skips so operators can
  ad-hoc test without setting up the secret first
- Operators can still pick langgraph/hermes via workflow_dispatch;
  the OpenAI fallback path stays wired

Script changes (tests/e2e/test_staging_full_saas.sh):
- SECRETS_JSON branches on which key is set:
    E2E_MINIMAX_API_KEY → {MINIMAX_API_KEY: <key>}  (claude-code path)
    E2E_OPENAI_API_KEY  → {OPENAI_API_KEY, HERMES_*, MODEL_PROVIDER}  (legacy)
  MiniMax wins when both are present — claude-code default canary
  must not accidentally consume the OpenAI key

Tests (new tests/e2e/test_secrets_dispatch.sh):
- 10 cases pinning the precedence + payload shape per branch
- Discipline check verified: 5 of 10 FAIL on a swapped if/elif
  (precedence inversion), all 10 PASS on the fix
- Anchors on the section-comment header so a structural refactor
  fails loudly rather than silently sourcing nothing

The model_slug dispatcher (lib/model_slug.sh) needs no change:
E2E_MODEL_SLUG override path is already wired (line 41), and
claude-code template's `minimax-` prefix matcher catches
"MiniMax-M2.7-highspeed" via lowercase-on-lookup.

Operator action required to land green:
- Set MOLECULE_STAGING_MINIMAX_API_KEY in repo secrets
  (Settings → Secrets and Variables → Actions). Use
  `gh secret set MOLECULE_STAGING_MINIMAX_API_KEY -R Molecule-AI/molecule-core`
  to avoid leaking the value into shell history.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 15:35:14 -07:00
Hongming Wang
ac6f65ab5e test(e2e): pin pick_model_slug behavior with bash unit tests
PR #2571 fixed synth-E2E by branching MODEL_SLUG per runtime, but only
the langgraph branch was verified at runtime — hermes / claude-code /
override / fallback had zero automated coverage. A future regression
(e.g. dropping the langgraph case) would silently revert and only
surface as "Could not resolve authentication method" mid-E2E.

This PR:
- Extracts the dispatch into tests/e2e/lib/model_slug.sh as a sourceable
  pick_model_slug() function. No behavior change.
- Adds tests/e2e/test_model_slug.sh — 9 assertions across all 5 dispatch
  branches plus the override path. Verified to FAIL when any branch is
  flipped (manually regressed langgraph slash-form to confirm the test
  catches it; restored before commit).
- Wires the unit test into ci.yml's existing shellcheck job (only runs
  when tests/e2e/ or scripts/ change). Pure-bash, no live infra.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 12:04:12 -07:00
Hongming Wang
80b38900de fix(auto-promote): skip empty-tree promotes to break perpetual cycle
The auto-promote ↔ auto-sync chain has been generating empty PRs
indefinitely since the staging merge_queue ruleset uses MERGE
strategy:

1. Auto-promote merges PR via queue → main = merge commit M2 not in staging
2. Auto-sync opens sync-back PR. Workflow's local `git merge --ff-only`
   succeeds (PR title even says "ff to ..."), but the queue lands the
   PR via MERGE → staging = merge commit S2 not in main
3. Auto-promote sees staging ahead by 1 → opens new promote PR. Tree
   diff vs main = 0 (S2's tree == main's tree). But the gate logic
   only checks "all required workflows green", not "actual code to
   ship" → opens an empty promote PR
4. ... repeat indefinitely

Each round costs ~30-40 min wallclock, ~2 manual approvals (the queue
requires 1 review and the bot can't self-approve without admin
bypass), and one full CodeQL Go run (~15 min).

Observed today (2026-05-03) across PRs #2592, #2594, #2595, #2596, and
#2597 — 5 PRs, ~3 hours, all empty content.

Fix: before opening the promote PR, check that staging's tree
actually differs from main's tree. If they're identical (the
empty-merge-commit cycle), skip cleanly and let the cycle terminate.

Implementation:
- New step `Skip if staging tree == main tree` runs before the
  existing gate check.
- `git diff --quiet origin/main $HEAD_SHA` exits 0 iff trees match.
- On match: emits a step summary explaining the skip + sets
  `skip=true`; subsequent gate-check + promote steps are gated on
  `skip != 'true'` so they short-circuit.
- Fail-open: if `git fetch` errors, fall through to gate check
  (preserve existing behavior). Only skip when diff is DEFINITIVELY
  empty.
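
Step body in sketch form (output wiring illustrative):

  # fail-open: if the fetch errors, end the step without setting skip
  git fetch origin main || exit 0
  if git diff --quiet origin/main "$HEAD_SHA"; then
    echo "skip=true" >> "$GITHUB_OUTPUT"
    echo "staging tree == main tree — skipping empty promote" >> "$GITHUB_STEP_SUMMARY"
  fi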

Long-term, the cleaner fix is to switch the merge_queue ruleset's
merge_method away from MERGE so FF-able PRs land cleanly without a
new commit — but that's a broader change affecting every staging
PR's commit shape. This guard is the surgical one-step break.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 08:56:44 -07:00
Hongming Wang
2b3f44c3c8 fix(retarget): skip PRs whose head is staging (auto-promote PRs)
The retarget-main-to-staging workflow tries to PATCH base=staging on
every bot-authored PR opened against main. Auto-promote staging→main
PRs have head=staging, base=main — retargeting them sets head AND
base to staging, which GitHub rejects with HTTP 422 "no new commits
between base 'staging' and head 'staging'".

This started surfacing on PR #2588 (2026-05-03 14:30) once #2586
switched the auto-promote workflow to an App token. Before #2586
the auto-promote PR was authored by github-actions[bot], which the
retarget filter happened to skip; now it's molecule-ai[bot], which
passes the bot filter and triggers the broken retarget attempt.

Add a head-ref != 'staging' guard so auto-promote PRs short-circuit
before the PATCH. The existing 422 "duplicate base" detector is
left alone — it covers a different operational case.
2026-05-03 07:34:24 -07:00
Hongming Wang
bc11ed8a2b fix(auto-promote): use App token for auto-merge to fire downstream cascade (#2357)
GITHUB_TOKEN-initiated merges suppress the downstream `push` event on
main per GitHub's documented limitation:
  https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow

Result before this fix: every staging→main promote landed silently —
publish-workspace-server-image, canary-verify, and redeploy-tenants-on-main
all stayed dark. The polling tail was the SOLE cascade trigger; if it
ever hit its 30-min timeout, the chain deadlocked invisibly.

Symptom (from the issue body, 2026-04-30):

| Time     | Event                                            | Triggered? |
|----------|--------------------------------------------------|-----------|
| 05:48:04 | Promote PR #2352 merged (c140ad28)               | Not fired |
| 06:07:29 | Promote PR #2356 merged (5973c9bd)               | Not fired |

Fix: mint the molecule-ai App token BEFORE the promote-PR step and
hand it to the auto-merge call. App-token-initiated merges DO trigger
downstream workflow_run cascades.

The polling tail stays as defense-in-depth (with comments updated):
once we've observed >=10 successful natural cascades it can be dropped.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 07:13:26 -07:00
Hongming Wang
8f48a38550 fix(publish-runtime): re-add 5 templates wrongly removed from cascade (#2566)
The PR #2536 cascade prune ('deprecated, no shipping images') was
empirically wrong. Re-confirmed 2026-05-03:

- continuous-synth-e2e.yml defaults to langgraph as its primary canary
- All 5 'deprecated' templates have successful publish-image runs in
  the past 24h: langgraph, crewai, autogen, deepagents, gemini-cli

Symptom this fixes — issue #2566 (priority-high, failing 36+h):

  Synthetic E2E (staging): langgraph adapter A2A failure
  'Received Message object in task mode' — failing for >36h

Today at 11:06 commit e1628c4 fixed the underlying a2a-sdk strict-mode
issue in workspace/a2a_executor.py. publish-runtime fired at 11:13 and
cascaded — but only to claude-code, hermes, openclaw, codex. langgraph
was excluded by the prune, so its image stayed on the broken runtime
and the synth E2E (which defaults to langgraph) kept failing despite
the fix being live in PyPI.

After this lands + the next runtime publish fires, langgraph image
re-bakes with the fix and synth-E2E goes green.

Test plan:

- [x] yaml-validate the workflow
- [ ] After merge, watch publish-runtime cascade to all 9 templates
- [ ] Confirm langgraph publish-image fires + succeeds
- [ ] Confirm next continuous-synth-e2e run goes green

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 05:41:53 -07:00
Hongming Wang
60a516bc8d ci(redeploy): fix stale canary_slug default 'hongmingwang' → 'hongming'
The workflow_dispatch input default and the workflow_run env fallback
both pointed at 'hongmingwang', which doesn't match any current prod
tenant (slugs are: hongming, chloe-dong, reno-stars). CP silently
skipped the missing canary and put every tenant in batch-1 in parallel,
defeating the canary-first soak gate that exists to catch image-boot
regressions before they hit the whole fleet.

Concrete example from today's c0838d6 redeploy at 11:53Z (run 25278434388):
the dispatched body was `{"target_tag":"staging-c0838d6","canary_slug":"hongmingwang",...}`
and the CP response showed all 3 tenants in `"phase":"batch-1"` — no
soak, no canary. The deploy happened to be safe, but a broken image
would have hit hongming + chloe-dong + reno-stars simultaneously.

Fixed in three places: the runtime ordering comment, the
workflow_dispatch default, and the env fallback used by the
workflow_run trigger. Comment documents the rationale so the next
slug rename doesn't silently regress this again.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 05:06:01 -07:00
Hongming Wang
5e46ea70d6 ci(synth-e2e): wire MOLECULE_STAGING_OPENAI_KEY into provisioned tenant
The synth-E2E (#2342) provisions a langgraph tenant whose default
model `openai:gpt-4.1-mini` requires OPENAI_API_KEY for the first LLM
call. Sibling workflows already wire this:
- e2e-staging-saas.yml:89
- canary-staging.yml:63

continuous-synth-e2e.yml just forgot. Result: tenant boots, accepts
a2a messages, then returns:

  Agent error: "Could not resolve authentication method. Expected
  either api_key or auth_token to be set."

This was masked since 2026-04-29 (workflow creation) by a2a-sdk v0→v1
contract violations — PR #2558 (Task-enqueue) and #2563
(TaskUpdater.complete/failed terminal events) cleared those, exposing
the underlying auth gap on the synth-E2E firing at 11:39 UTC today.

The script tests/e2e/test_staging_full_saas.sh:325 already reads
E2E_OPENAI_API_KEY and persists it as a workspace_secret on tenant
create — only the workflow wiring was missing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 04:43:07 -07:00
Hongming Wang
596e797dca ci(deploy): broaden ephemeral-prefix matchers to cover rt-e2e-*
The redeploy-tenants-on-staging soft-warn filter and the
sweep-stale-e2e-orgs janitor both hardcoded `^e2e-` to identify
ephemeral test tenants. Runtime-test harness fixtures (RFC #2251)
mint slugs prefixed with `rt-e2e-`, which neither matcher recognized.

Concrete impact observed today:
  - Two `rt-e2e-v{5,6}-*` tenants left orphaned 8h on staging
    (sweep-stale-e2e-orgs ignored them).
  - On the next staging redeploy their phantom EC2s returned
    `InvalidInstanceId: Instances not in a valid state for account`
    from SSM SendCommand → CP returned HTTP 500 + ok=false.
  - The redeploy soft-warn missed them too, so the workflow went
    red, which broke the auto-promote-staging chain feeding the
    canvas warm-paper rollout to prod.

Fix: switch both matchers to recognize the alternation
`^(e2e-|rt-e2e-)`. Long-lived prefixes (demo-prep, dryrun-*, dryrun2-*)
remain non-ephemeral and continue to hard-fail. Comment documents
the source-of-truth list and the cross-file invariant.
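
Both call sites share the shape below (handling around the match is
illustrative):

  if [[ "$slug" =~ ^(e2e-|rt-e2e-) ]]; then
    echo "::warning::ephemeral tenant $slug failed — tolerating"
  else
    echo "::error::non-ephemeral tenant $slug failed"; exit 1
  fi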

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 04:28:29 -07:00
Hongming Wang
09010212a0 feat(ci): structural drift gate for cascade list vs manifest (RFC #388 PR-3)
Closes the recurrence path of PR #2556. The data fix realigned 8→4
templates in publish-runtime.yml's TEMPLATES variable, but the
underlying drift hazard was unguarded — the next manifest change
could silently leave cascade out of sync again.

This gate fails any PR that changes manifest.json or
publish-runtime.yml in a way that makes the cascade list diverge
from manifest workspace_templates (suffix-stripped). Either
direction is caught:

  missing-from-cascade  templates that won't auto-rebuild on a new
                       wheel publish (the codex-stuck-on-stale-runtime
                       bug class — PR #2512 added codex to manifest,
                       cascade wasn't updated, codex stayed pinned to
                       its last-built runtime version for weeks).

  extra-in-cascade     cascade dispatches to deprecated templates
                       (the wasted-API-calls + dead-CI-noise class —
                       PR #2536 pruned 5 templates from manifest;
                       cascade kept dispatching to all 8 until
                       PR #2556).

Triggers narrowly: only on PRs that touch manifest.json,
publish-runtime.yml, or the script itself. Fast (single grep+sed+comm
pipeline, no Go build).
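
The comparison in sketch form (manifest key, "-default" suffix rule,
and $TEMPLATES tokenization assumed):

  jq -r '.workspace_templates[]' manifest.json \
    | sed 's/-default$//' | sort > /tmp/manifest.list
  tr ' ' '\n' <<<"$TEMPLATES" | sed '/^$/d' | sort > /tmp/cascade.list
  drift=$( { comm -13 /tmp/cascade.list /tmp/manifest.list | sed 's/^/MISSING: /'
             comm -23 /tmp/cascade.list /tmp/manifest.list | sed 's/^/EXTRA: /'; } )
  [ -z "$drift" ] || { echo "$drift"; exit 1; }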

Surfaced during the RFC #388 prior-art audit; folded in as the
structural follow-up to the data fix #2556 promised.

Self-tested both failure modes locally before commit:
  - Drop codex from cascade → script fails with "MISSING: codex"
  - Add langgraph to cascade → script fails with "EXTRA: langgraph"

Refs: https://github.com/Molecule-AI/molecule-controlplane/issues/388

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 03:52:39 -07:00
Hongming Wang
e014d22ee9
Merge pull request #2557 from Molecule-AI/feat/sweep-aws-secrets-orphans
feat(ops): sweep orphan AWS Secrets Manager secrets
2026-05-03 09:48:59 +00:00
Hongming Wang
6f8f7932d2 feat(ops): add sweep-aws-secrets janitor — orphan tenant bootstrap secrets
CP's deprovision flow calls Secrets.DeleteSecret() (provisioner/ec2.go:806)
but only when the deprovision runs to completion. Crashed provisions and
incomplete teardowns leak the per-tenant `molecule/tenant/<org_id>/bootstrap`
secret. At ~$0.40/secret/month, ~45 leaked secrets surfaced as ~$19/month
on the AWS cost dashboard.

The tenant_resources audit table (mig 024) tracks four kinds today —
CloudflareTunnel, CloudflareDNS, EC2Instance, SecurityGroup — and the
existing reconciler doesn't catch Secrets Manager orphans. The proper fix
(KindSecretsManagerSecret + recorder hook + reconciler enumerator) is filed
as a follow-up controlplane issue. This sweeper is the immediate stopgap.

Parallel-shape to sweep-cf-tunnels.sh:
  - Hourly schedule offset (:30, between sweep-cf-orphans :15 and
    sweep-cf-tunnels :45) so the three janitors don't burst CP admin
    at the same minute.
  - 24h grace window — never deletes a secret younger than the
    provisioning roundtrip, so an in-flight provision can't be racemurdered.
  - MAX_DELETE_PCT=50 default (mirrors sweep-cf-orphans for durable
    resources; tenant secrets should track 1:1 with live tenants).
  - Same schedule-vs-dispatch hardening as the other janitors:
    schedule → hard-fail on missing secrets, dispatch → soft-skip.
  - 8-way xargs parallelism, dry-run by default, --execute to delete.

Requires a dedicated AWS_JANITOR_* IAM principal — the prod molecule-cp
principal lacks secretsmanager:ListSecrets (it only has scoped
Get/Create/Update/Delete). The workflow's verify-secrets step will hard-fail
on the first scheduled run until those secrets are configured, surfacing
the missing setup loudly rather than silently no-op'ing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 02:38:08 -07:00
Hongming Wang
24276b9458 fix(publish-runtime): align cascade list to 4 supported runtimes
The cascade `TEMPLATES` list in publish-runtime.yml had drifted from
manifest.json:

  Currently dispatches to: claude-code, langgraph, crewai, autogen,
                           deepagents, hermes, gemini-cli, openclaw
  manifest.json supports:  claude-code, hermes, openclaw, codex (after
                           PR #2536 pruned to 4 actively-supported)

Two consequences of the drift:

1. `codex` (added in PR #2512, supported in manifest) was never in the
   cascade — fresh runtime publishes did NOT trigger a codex template
   rebuild. Codex stayed pinned to whatever runtime version it last saw
   at its own image-build time.

2. langgraph/crewai/autogen/deepagents/gemini-cli — deprecated, no
   shipping images, no working A2A — were still receiving cascade
   dispatches. Wasted API calls and (worse) green CI on dead repos
   masks "this template is dead, stop maintaining it."

Now matches manifest.json workspace_templates exactly. Surfaced during
RFC #388 (fast workspace provision) prior-art audit.

Long-term fix is to derive TEMPLATES from manifest.json so this can't
drift again — captured as a Phase-1 invariant in RFC #388. This commit
is the data fix only; structural fix lands with the bake pipeline.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-03 02:28:15 -07:00
Hongming Wang
fd5fe34f69
Merge pull request #2523 from Molecule-AI/dependabot/github_actions/actions/github-script-9.0.0
chore(deps)(deps): bump actions/github-script from 7.1.0 to 9.0.0
2026-05-03 01:37:00 +00:00
Hongming Wang
0d8b0c37a6
Merge pull request #2521 from Molecule-AI/dependabot/github_actions/actions/checkout-6
chore(deps)(deps): bump actions/checkout from 4 to 6
2026-05-03 01:36:57 +00:00
Hongming Wang
252e126207
Merge pull request #2522 from Molecule-AI/dependabot/github_actions/docker/setup-buildx-action-4.0.0
chore(deps)(deps): bump docker/setup-buildx-action from 3.12.0 to 4.0.0
2026-05-03 01:27:03 +00:00
Hongming Wang
e84df73e96
Merge pull request #2528 from Molecule-AI/dependabot/github_actions/docker/build-push-action-7.1.0
chore(deps)(deps): bump docker/build-push-action from 6.19.2 to 7.1.0
2026-05-03 01:27:00 +00:00
dependabot[bot]
c46db97ac6
chore(deps)(deps): bump docker/build-push-action from 6.19.2 to 7.1.0
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 6.19.2 to 7.1.0.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](10e90e3645...bcafcacb16)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-version: 7.1.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-02 19:23:17 +00:00
dependabot[bot]
6c6c6eb1e8
chore(deps)(deps): bump imjasonh/setup-crane from 0.4 to 0.5
Bumps [imjasonh/setup-crane](https://github.com/imjasonh/setup-crane) from 0.4 to 0.5.
- [Release notes](https://github.com/imjasonh/setup-crane/releases)
- [Commits](31b88efe9d...6da1ae0188)

---
updated-dependencies:
- dependency-name: imjasonh/setup-crane
  dependency-version: '0.5'
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-02 19:23:13 +00:00
dependabot[bot]
e1f7d49575
chore(deps)(deps): bump actions/github-script from 7.1.0 to 9.0.0
Bumps [actions/github-script](https://github.com/actions/github-script) from 7.1.0 to 9.0.0.
- [Release notes](https://github.com/actions/github-script/releases)
- [Commits](f28e40c7f3...3a2844b7e9)

---
updated-dependencies:
- dependency-name: actions/github-script
  dependency-version: 9.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-02 19:23:09 +00:00
dependabot[bot]
ab7ac2e103
chore(deps)(deps): bump docker/setup-buildx-action from 3.12.0 to 4.0.0
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 3.12.0 to 4.0.0.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](8d2750c68a...4d04d5d948)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-02 19:23:05 +00:00
dependabot[bot]
3598eb41d1
chore(deps)(deps): bump actions/checkout from 4 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Commits](https://github.com/actions/checkout/compare/v4...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-02 19:23:01 +00:00
Hongming Wang
8bf29b7d0e fix(sweep-cf-tunnels): parallelize deletes + raise workflow timeout
The hourly Sweep stale Cloudflare Tunnels job got cancelled mid-cleanup
on 2026-05-02 (run 25248788312, killed at 5min after deleting 424/672
stale tunnels). A second manual dispatch finished the remaining 254
fine, so the immediate backlog cleared, but two underlying bugs would
re-trip on the next big cleanup.

Bug 1: serial delete loop. The execute branch was a `while read; do
curl -X DELETE; done` pipeline at ~0.7s/tunnel — fine for the
steady-state cleanup of a handful, but a 600+ backlog needs ~7-8min.
This commit fans out to $SWEEP_CONCURRENCY (default 8) workers via
`xargs -P 8 -L 1 -I {} bash -c '...' _ {} < "$DELETE_PLAN"`. With 8x
parallelism the same 600+ list drains in ~60s. Notes:

  - We use stdin (`<`) not GNU's `xargs -a FILE` so the script stays
    portable to BSD xargs (matters for local-runner testing on macOS).
  - We pass ONLY the tunnel id on argv. xargs tokenizes on whitespace
    by default; tab-separating id+name on argv risks mangling. The
    name is kept in a side-channel id->name map ($NAME_MAP) and looked
    up by the worker only on failure, for FAIL_LOG readability.
  - Workers print exactly `OK` or `FAIL` on stdout; tally with
    `grep -c '^OK$' / '^FAIL$'`.
  - On non-zero FAILED, log the first 20 lines of $FAIL_LOG as
    "Failure detail (first 20):" — same diagnostic surface as before
    but consolidated so we don't spam logs on a flaky CF API.
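
The fan-out in sketch form (CF endpoint simplified; CF_API, CF_API_TOKEN
and FAIL_LOG must be exported for the workers; the real worker also
consults $NAME_MAP on failure):

  xargs -P "${SWEEP_CONCURRENCY:-8}" -L 1 -I {} bash -c '
    id="$1"
    if curl -fsS -X DELETE "$CF_API/cfd_tunnel/$id" \
         -H "Authorization: Bearer $CF_API_TOKEN" >/dev/null 2>&1; then
      echo OK
    else
      echo FAIL
      echo "$id" >> "$FAIL_LOG"
    fi
  ' _ {} < "$DELETE_PLAN" > "$RESULT_LOG"
  DELETED=$(grep -c '^OK$' "$RESULT_LOG" || true)
  FAILED=$(grep -c '^FAIL$' "$RESULT_LOG" || true)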

Bug 2: workflow's 5-min cap was set as a hangs-detector but turned out
to be a real-job-too-slow detector. Raised to 30 min — generous
headroom for the ~60s steady-state run while still surfacing genuine
hangs (and in line with the sweep-cf-orphans companion job).

Bug 3 (drive-by): the existing trap was `trap 'rm -rf "$PAGES_DIR"'
EXIT`, which would have been silently overwritten by any later trap
registration. Replaced with a single `cleanup()` function that wipes
PAGES_DIR + all four new tempfiles (DELETE_PLAN, NAME_MAP, FAIL_LOG,
RESULT_LOG), called once via `trap cleanup EXIT`.
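
The consolidated shape (names all from this commit):

  cleanup() {
    rm -rf "$PAGES_DIR"
    rm -f "$DELETE_PLAN" "$NAME_MAP" "$FAIL_LOG" "$RESULT_LOG"
  }
  trap cleanup EXIT   # later additions extend cleanup(); never re-trap EXIT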

Verification:
  - bash -n scripts/ops/sweep-cf-tunnels.sh: clean
  - shellcheck -S warning scripts/ops/sweep-cf-tunnels.sh: clean
  - python3 yaml.safe_load on the workflow: clean
  - Synthetic 30-line delete plan with every 7th id sentinel'd to
    return {"success":false}: TEST PASS, DELETED=26 FAILED=4, FAIL_LOG
    side-channel name lookup verified.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 02:35:46 -07:00
Hongming Wang
6e0eb2ddc9 fix(redeploy-staging): tolerate e2e-* teardown race in fleet HTTP 500
Recurring failure pattern in redeploy-tenants-on-staging:

  ##[error]redeploy-fleet returned HTTP 500
  ##[error]Process completed with exit code 1.

with the per-tenant breakdown in the response body showing the failures
were on ephemeral e2e-* tenants (saas/canvas/ext) whose parent E2E run
tore them down mid-redeploy — SSM exit=2 because the EC2 was already
terminating, or healthz timeout because the CF tunnel was already gone.
The actual operator-facing tenants (dryrun-98407, demo-prep, etc) all
rolled fine in the same call.

This shape repeats every staging push that overlaps an active E2E run.
The downstream `Verify each staging tenant /buildinfo matches published
SHA` step ALREADY distinguishes STALE vs UNREACHABLE for exactly this
reason (per #2402); only the top-level `if HTTP_CODE != 200; exit 1`
gate misclassifies the race.

Filter: HTTP 500 + every failed slug matches `^e2e-` → soft-warn and
fall through to verify. Any non-e2e-* failure or non-500 HTTP remains
a hard fail, with the failed non-e2e slugs surfaced in the error so
the operator doesn't have to dig the response body out of CI.
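
Gate sketch (the response-body shape, a failures[].slug array saved to
response.json, is an assumption for illustration, not the CP's actual
schema):

  if [ "$HTTP_CODE" -ne 200 ]; then
    non_e2e=$(jq -r '.failures[]?.slug // empty' response.json | grep -v '^e2e-' || true)
    if [ "$HTTP_CODE" -eq 500 ] && [ -z "$non_e2e" ]; then
      echo "::warning::redeploy-fleet 500 confined to e2e-* teardown race; continuing to verify"
    else
      echo "::error::redeploy-fleet HTTP $HTTP_CODE; failed non-e2e slugs: ${non_e2e:-none}"
      exit 1
    fi
  fi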

Verified the gate logic with 6 synthetic CP responses (happy / e2e-only
race / mixed real+e2e fail / non-200 / 200+ok=false / all-real-fail) —
all behave correctly.

prod's redeploy-tenants-on-main is intentionally NOT touched: prod CP
serves no e2e-* tenants, so the race can't occur there and the strict
gate is the right behavior.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 02:17:36 -07:00
Hongming Wang
43c234df35 secret-scan: align local pre-commit + extend drift lint (closes #1569 root)
#1569 Phase 1 discovery (2026-05-02) found six historical credential
exposures in molecule-core git history. All confirmed dead — but the
reason they got committed in the first place was that the local
pre-commit hook had two gaps that the canonical CI gate (and the
runtime's hook) didn't:

  1. **Pattern set was incomplete.** Local hook checked
     `sk-ant-|sk-proj-|ghp_|gho_|AKIA|mol_pk_|cfut_` — missing
     `ghs_*`, `ghu_*`, `ghr_*`, `github_pat_*`, `sk-svcacct-`,
     `sk-cp-`, `xox[baprs]-`, `ASIA*`. The historical leaks were 5×
     `ghs_*` (App installation tokens) + 1× `github_pat_*` — none of
     which the local hook would have caught even if it ran.
  2. **`*.md` and `docs/` were skip-listed.** The leaked tokens lived
     in `tick-reflections-temp.md`, `qa-audit-2026-04-21.md`, and
     `docs/incidents/INCIDENT_LOG.md` — exactly the file types the
     skip-list excluded. The hook ran and silently passed.

This commit:

- Replaces the local hook's hard-coded inline regex with the canonical
  13-pattern array (byte-aligned with `.github/workflows/secret-scan.yml`
  and the workspace runtime's `pre-commit-checks.sh`).
- Removes the `\.md$|docs/` skip — keeps only binary, lockfile, and
  hook-self exclusions.
- Adds the local hook to `lint_secret_pattern_drift.py` as an in-repo
  consumer (read-from-disk, no network — the hook lives in the same
  checkout the lint runs against). Drift now fails the lint when
  canonical changes without the local hook updating in lockstep.
- Adds `.githooks/pre-commit` to the drift-lint workflow's path
  filter so consumer-side edits also trigger the lint.
- Adopts the canonical's "don't echo the matched value" defense (the
  prior version would have round-tripped a leaked credential into
  scrollback / CI logs); a sketch of the aligned core follows.
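
Sketch of the aligned core (pattern list abridged from the gap analysis
above, not the exact 13-entry canonical array; the staged-file plumbing
is likewise an assumption):

  PATTERNS='sk-ant-|sk-proj-|sk-svcacct-|sk-cp-|ghp_|gho_|ghs_|ghu_|ghr_|github_pat_|AKIA|ASIA|xox[baprs]-'
  while read -r f; do
    case "$f" in
      *.png|*.lock|.githooks/pre-commit) continue ;;  # binary/lockfile/self only; no *.md skip
    esac
    # grep -q: report the file and nothing else; never echo the matched value
    if git show ":$f" | grep -Eq "$PATTERNS"; then
      echo "pre-commit: credential-shaped string in $f (see secret-scan.yml patterns)" >&2
      exit 1
    fi
  done < <(git diff --cached --name-only --diff-filter=ACM)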

Verified: `python3 .github/scripts/lint_secret_pattern_drift.py`
reports both consumers aligned at 13 patterns. The hook's existing
six other gates (canvas 'use client', dark theme, SQL injection,
go-build, etc.) are untouched.

Companion change (already applied via API, no diff here):
`Scan diff for credential-shaped strings` is now in the required-checks
list on both `staging` and `main` branch protection — was previously a
soft gate (workflow ran, exited 1, but didn't block merge).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 23:47:56 -07:00
Hongming Wang
115f1f5e64 fix(redeploy-main): pull staging-<head_sha> instead of stale :latest
Auto-trigger from publish-workspace-server-image now resolves
target_tag to the just-published `staging-<short_head_sha>` digest
instead of `:latest`. Bypasses the dead retag path that was leaving
prod tenants on a 4-day-old image.

The chain pre-fix:
  publish-image  → pushes :staging-<sha> + :staging-latest (NOT :latest)
  canary-verify  → soft-skips (CANARY_TENANT_URLS unset, fleet not stood up)
  promote-latest → manual workflow_dispatch only, last run 2026-04-28
  redeploy-main  → pulls :latest → 2026-04-28 digest → all 3 tenants STALE

Today's incident:
  e7375348 (main) → publish-image green → redeploy fired → tenants
  pulled :latest (76c604fb digest from prior canary-verified state) →
  hongming /buildinfo returned 76c604fb instead of e7375348 → verify
  step correctly flagged 3/3 STALE → workflow failed.

Today's PRs (#2473 smoke wedge, #2487 panic recovery, #2496 sweeper
followups) shipped to GHCR as :staging-<sha> but never reached prod.

Fix (compute step sketched after this list):
  - workflow_dispatch input default '' (was 'latest'); empty input
    triggers auto-compute path
  - new "Compute target tag" step resolves:
    1. operator-supplied input → verbatim (rollback / pin)
    2. else → staging-<short_head_sha> (auto)
  - verify step's operator-pin detection now allows
    staging-<short_head_sha> as a non-pin (verification still runs)
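
Compute-step sketch (variable names assumed; resolution order as above):

  if [ -n "${INPUT_TARGET_TAG:-}" ]; then
    TARGET_TAG="$INPUT_TARGET_TAG"                      # 1. operator pin / rollback, verbatim
  else
    TARGET_TAG="staging-$(git rev-parse --short HEAD)"  # 2. auto: just-published tag
  fi
  echo "target_tag=$TARGET_TAG" >>"$GITHUB_OUTPUT"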

When canary fleet is real, this workflow should chain on
canary-verify completion (workflow_run from canary-verify, gated on
promote-to-latest success) instead of publish-image — separate,
smaller PR. Today's fix unblocks prod deploys without that
prerequisite.

Companion: promote-latest.yml dispatched 2026-05-02 against
e7375348 to unstick existing prod tenants. This PR prevents
recurrence.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 23:17:59 -07:00
Hongming Wang
3d8a0a58fa ci(auto-sync): App-token dispatch + ubuntu-latest + workflow_dispatch
auto-sync-main-to-staging.yml hasn't fired since 2026-04-29 despite
multiple staging→main promotes since. The promote PR #2442 (Phase 2)
has been wedged on `mergeStateStatus: BEHIND` for hours because
staging is missing the merge commit from PR #2437.

Three compounding bugs, all fixed here:

1. **GitHub no-recursion suppresses the `on: push` trigger.**
   When the merge queue lands a staging→main promote, the resulting
   push to main is "by GITHUB_TOKEN", and per
   https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow
   that push event does NOT fire any downstream workflows. Verified
   empirically against SHA 76c604fb (PR #2437): exactly ONE workflow
   fired on that push — `publish-workspace-server-image`, dispatched
   explicitly by auto-promote-staging.yml's polling tail with an App
   token (the documented #2357 workaround). Every other `on: push`
   workflow on main, including auto-sync, was silently suppressed.

   Same fix extended here: auto-promote-staging.yml's polling tail
   now ALSO dispatches `auto-sync-main-to-staging.yml --ref main`
   via the App token after the merge lands (sketched after this list). App-initiated dispatch
   propagates `workflow_run` cascades, which is what the publish
   tail relies on too. Failure path: emits `::error::` with the
   recovery command — operator runs it once and the next promote
   self-heals.

   auto-sync.yml gains `workflow_dispatch:` so it can be invoked
   from the dispatch above + manually if a future promote also
   misses (defense in depth).

2. **`runs-on: [self-hosted, macos, arm64]` was wrong for this repo.**
   Comment claimed "matches the rest of this repo's workflows" — false:
   this is the ONLY workflow in molecule-core/.github/workflows/ with
   a non-ubuntu runs-on. Copy-paste artefact from molecule-controlplane
   (which IS private and has a Mac runner). molecule-core has no Mac
   runner registered, so even when the trigger DID fire (the 3 historic
   manual-UI merges), the job would have sat unassigned if the runner
   were offline. Switched to `ubuntu-latest` to match every other
   workflow in this repo.

3. **The `on: push` trigger remains** as a defense-in-depth path for
   the rare case of a manual UI merge by a real user (which uses
   their PAT and DOES fire downstream workflows — confirmed via the
   2026-04-29 d35a2420 run with `triggering_actor=HongmingWang-Rabbit`
   that fired 16 workflows including auto-sync). Belt-and-suspenders.
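
Dispatch sketch for point 1 (the APP_TOKEN plumbing is assumed; gh
honors the GH_TOKEN env var):

  # App-initiated, so downstream workflow_run cascades still fire;
  # GITHUB_TOKEN-initiated events are suppressed by the no-recursion rule
  if ! GH_TOKEN="$APP_TOKEN" gh workflow run auto-sync-main-to-staging.yml --ref main; then
    echo "::error::auto-sync dispatch failed; recover with:" \
         "gh workflow run auto-sync-main-to-staging.yml --ref main"
  fi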

Long-term: switching auto-promote's `gh pr merge --auto` call to use
the App token (instead of GITHUB_TOKEN) would let `on: push` triggers
fire naturally and obviate the need for the explicit dispatches in
the polling tail. Tracked in #2357 — out of scope here.

Operator recovery for the current Phase 2 wedge: after this lands on
staging, dispatch auto-sync once via
`gh workflow run auto-sync-main-to-staging.yml --ref main` to
backfill the missed sync from 76c604fb. PR #2442 will go from
BEHIND → CLEAN and auto-merge.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 22:28:35 -07:00
Hongming Wang
c275716005 harness(phase-2): multi-tenant compose + cross-tenant isolation replays
Brings the local harness from "single tenant covering the request path"
to "two tenants covering both the request path AND the per-tenant
isolation boundary" — the same shape production runs (one EC2 + one
Postgres + one MOLECULE_ORG_ID per tenant).

Why this matters: the four prior replays exercise the SaaS request
path against one tenant. They cannot prove that TenantGuard rejects
a misrouted request (production CF tunnel + AWS LB are the failure
surface), nor that two tenants doing legitimate work in parallel
keep their `activity_logs` / `workspaces` / connection-pool state
partitioned. Both are real bug classes — TenantGuard allowlist drift
shipped in #2398, and lib/pq prepared-statement cache collision is documented
as an org-wide hazard.

What changed:

1. compose.yml — split into two tenants.
   tenant-alpha + postgres-alpha + tenant-beta + postgres-beta + the
   shared cp-stub, redis, cf-proxy. Each tenant gets a distinct
   ADMIN_TOKEN + MOLECULE_ORG_ID and its own Postgres database. cf-proxy
   depends on both tenants becoming healthy.

2. cf-proxy/nginx.conf — Host-header → tenant routing.
   `map $host $tenant_upstream` resolves the right backend per request.
   Required `resolver 127.0.0.11 valid=30s ipv6=off;` because nginx
   needs an explicit DNS resolver to use a variable in `proxy_pass`
   (literal hostnames resolve once at startup; variables resolve per
   request — without the resolver nginx fails closed with 502).
   `server_name` lists both tenants + the legacy alias so unknown Host
   headers don't silently route to a default and mask routing bugs.

3. _curl.sh — per-tenant + cross-tenant-negative helpers.
   `curl_alpha_admin` / `curl_beta_admin` set the right
   Host + Authorization + X-Molecule-Org-Id triple.
   `curl_alpha_creds_at_beta` / `curl_beta_creds_at_alpha` exist
   precisely to make WRONG requests (replays use them to assert
   TenantGuard rejects). `psql_exec_alpha` / `psql_exec_beta` shell out
   per-tenant Postgres exec. Legacy aliases (`curl_admin`, `psql_exec`)
   keep the four pre-Phase-2 replays working without edits (helper triple sketched after this list).

4. seed.sh — registers parent+child workspaces in BOTH tenants.
   Captures server-generated IDs via `jq -r '.id'` (POST /workspaces
   ignores body.id, so the older client-side mint silently desynced
   from the workspaces table and broke FK-dependent replays). Stashes
   `ALPHA_PARENT_ID` / `ALPHA_CHILD_ID` / `BETA_PARENT_ID` /
   `BETA_CHILD_ID` to .seed.env, plus legacy `ALPHA_ID` / `BETA_ID`
   aliases for backwards compat with chat-history / channel-envelope.

5. New replays.

   tenant-isolation.sh (13 assertions) — TenantGuard 404s any request
   whose X-Molecule-Org-Id doesn't match the container's
   MOLECULE_ORG_ID. Asserts the 404 body has zero
   tenant/org/forbidden/denied keywords (existence of a tenant must
   not be probeable from the outside). Covers cross-tenant routing
   misconfigure + allowlist drift + missing-org-header.

   per-tenant-independence.sh (12 assertions) — both tenants seed
   activity_logs in parallel with distinct row counts (3 vs 5) and
   confirm each tenant's history endpoint returns exactly its own
   counts. Then a concurrent INSERT race (10 rows per tenant in
   parallel via `&` + wait) catches shared-pool corruption +
   prepared-statement cache poisoning + redis cross-keyspace bleed (race sketched after this list).

6. Bug fix: down.sh + dump-logs SECRETS_ENCRYPTION_KEY validation.
   `docker compose down -v` validates the entire compose file even
   though it doesn't read the env. up.sh generates a per-run key into
   its own shell — down.sh runs in a fresh shell that wouldn't see it,
   so without a placeholder `compose down` exited non-zero before
   removing volumes. Workspaces silently leaked into the next
   ./up.sh + seed.sh boot. Caught when tenant-isolation.sh F1/F2 saw
   3× duplicate alpha-parent rows accumulated across three prior runs.
   Same fix applied to the workflow's dump-logs step.

7. requirements.txt — pin molecule-ai-workspace-runtime>=0.1.78.
   channel-envelope-trust-boundary.sh imports from `molecule_runtime.*`
   (the wheel-rewritten path) so it catches the failure mode where
   the wheel build silently strips a fix that unit tests on local
   source still pass. CI was failing this replay because the wheel
   wasn't installed — caught in the staging push run from #2492.

8. .github/workflows/harness-replays.yml — Phase 2 plumbing.
   * Removed /etc/hosts step (Host-header path eliminated the need;
     scripts already source _curl.sh).
   * Updated dump-logs to reference the new service names
     (tenant-alpha + tenant-beta + postgres-alpha + postgres-beta).
   * Added SECRETS_ENCRYPTION_KEY placeholder env on the dump step.
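
Helper + race sketch for items 3 and 5 (hostnames, env names, $PROXY,
and the activity endpoint are assumptions; the header triple is item 3's):

  curl_alpha_admin() {
    curl -fsS -H 'Host: tenant-alpha.localhost' \
         -H "Authorization: Bearer $ALPHA_ADMIN_TOKEN" \
         -H "X-Molecule-Org-Id: $ALPHA_ORG_ID" "$@"
  }
  curl_beta_admin() {   # symmetric for beta
    curl -fsS -H 'Host: tenant-beta.localhost' \
         -H "Authorization: Bearer $BETA_ADMIN_TOKEN" \
         -H "X-Molecule-Org-Id: $BETA_ORG_ID" "$@"
  }
  curl_alpha_creds_at_beta() {   # deliberately WRONG pairing; replays assert 404
    curl -sS -o /dev/null -w '%{http_code}' -H 'Host: tenant-beta.localhost' \
         -H "Authorization: Bearer $ALPHA_ADMIN_TOKEN" \
         -H "X-Molecule-Org-Id: $ALPHA_ORG_ID" "$@"
  }

  # per-tenant-independence core: concurrent INSERTs, then exact-count asserts
  for i in $(seq 1 10); do curl_alpha_admin -X POST "$PROXY/activity" -d "{\"n\":$i}" & done
  for i in $(seq 1 10); do curl_beta_admin  -X POST "$PROXY/activity" -d "{\"n\":$i}" & done
  wait   # each tenant's history endpoint must then return exactly its own rows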

Verified: ./run-all-replays.sh from a clean state — 6/6 passed
(buildinfo-stale-image, channel-envelope-trust-boundary, chat-history,
peer-discovery-404, per-tenant-independence, tenant-isolation).

Roadmap section updated: Phase 2 marked shipped. Phase 3 promoted to
"replace cp-stub with real molecule-controlplane Docker build + env
coherence lint."

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 21:36:40 -07:00
Hongming Wang
e58e446444 docs(ci): correct test-ops-scripts.yml header — discover does NOT recurse
The previous header said `unittest discover from the scripts/ root
walks recursively`, contradicting the workflow body which runs two
passes precisely because discover does NOT recurse without
__init__.py. Fixed self-review feedback on PR #2440.
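
For reference, the two-pass shape (the second root is illustrative):

  # discover does not descend into directories lacking __init__.py,
  # so each test root gets its own pass
  python3 -m unittest discover -s scripts -p 'test_*.py'
  python3 -m unittest discover -s scripts/ops -p 'test_*.py'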

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 20:52:58 -07:00
Hongming Wang
f2545fcb57
Merge pull request #2440 from Molecule-AI/chore/wheel-rewriter-tests-and-noqa-cleanup
chore: rewriter unit tests + drop misleading noqa on import inbox
2026-05-01 03:48:33 +00:00