Commit Graph

3754 Commits

dependabot[bot]
3598eb41d1
chore(deps): bump actions/checkout from 4 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Commits](https://github.com/actions/checkout/compare/v4...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-05-02 19:23:01 +00:00
Hongming Wang
ea967d5787
Merge pull request #2518 from Molecule-AI/docs/hermes-plugin-status-update
docs(integrations): hermes plugin path status post-PR #32 merge
2026-05-02 16:01:47 +00:00
Hongming Wang
2dd5684e73 docs(integrations): update hermes plugin path status to post-merge
PR #32 (workspace template) merged 2026-05-02; image rebuild
succeeded. Plugin baked in. Local full-chain E2E green; caught + fixed
a real KeyError in upstream hermes_cli/tools_config.py. Upstream PR
#18775 still OPEN/CONFLICTING — not on critical path.

Also rewrites hermes-platform-plugins-upstream-pr.md to reflect the
final landing shape (existing hermes_cli/plugins.py, not a new
plugins/platforms/ system).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 04:42:00 -07:00
Hongming Wang
2552779d97
Merge pull request #2517 from Molecule-AI/test/all-runtimes-a2a-e2e-harness
test(e2e): unified A2A round-trip parity harness across all 4 runtimes
2026-05-02 11:40:14 +00:00
Hongming Wang
d88c160e56 test(e2e): wire SaaS auth headers (TENANT_ADMIN_TOKEN + TENANT_ORG_ID)
The harness needs Authorization + X-Molecule-Org-Id (per-tenant, NOT
CP_ADMIN_API_TOKEN) when targeting *.moleculesai.app subdomains.
The existing single-Origin-header form silently failed with a 404
against staging tenants because the SaaS edge WAF rewrites
unauthenticated /workspaces calls to Next.js (per
reference_saas_waf_origin_header.md).

Switch to a headers array so multiple -H flags compose cleanly with
curl arg-quoting, and document the env var contract at the top of
the script.
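As a rough illustration, the headers-array form looks like this (header names are from the commit; the token/org values and endpoint here are placeholders):

```shell
# Placeholder values — the real harness reads these from the environment.
TENANT_ADMIN_TOKEN='tok_example'
TENANT_ORG_ID='org_example'
PLATFORM='https://demo-tenant.staging.moleculesai.app'

# An array of -H flags composes cleanly no matter how many headers
# the target (SaaS vs self-hosted) needs — no fragile manual quoting.
HDRS=(
  -H "Authorization: Bearer $TENANT_ADMIN_TOKEN"
  -H "X-Molecule-Org-Id: $TENANT_ORG_ID"
)

# Real call: curl -fsS "${HDRS[@]}" "$PLATFORM/workspaces"
printf '%s\n' "${HDRS[@]}"
```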

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 04:36:23 -07:00
Hongming Wang
5aaac7d2d9 test(e2e): unified A2A round-trip parity harness across all 4 runtimes
Adds two scripts:

  scripts/test-all-runtimes-a2a-e2e.sh
    Provisions one workspace per runtime (claude-code, hermes, codex,
    openclaw), sets provider keys, waits online, sends two A2A messages
    per workspace. First message validates round-trip; second message
    validates session continuity. Cleans up via trap on EXIT.

  scripts/test-hermes-plugin-e2e.sh
    Hermes-only variant focused on the plugin /a2a/inbound path.
    Proof-point: session continuity between turns (the plugin path's
    deliverable; old chat-completions path lost context per turn).

Both honor SKIP_<runtime> env vars for incremental testing and tolerate
the SaaS edge WAF Origin header requirement (per
reference_saas_waf_origin_header.md).
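The SKIP_<runtime> gating can be sketched like this (runtime names are from the commit; the loop body is a stand-in for the real provision + round-trip steps):

```shell
run_matrix() {
  local rt skip_var
  for rt in claude-code hermes codex openclaw; do
    skip_var="SKIP_${rt//-/_}"; skip_var="${skip_var^^}"  # e.g. SKIP_CLAUDE_CODE
    if [ "${!skip_var:-}" = "1" ]; then
      echo "SKIP $rt"
      continue
    fi
    echo "RUN $rt"   # stand-in: provision, wait online, two A2A messages
  done
}

SKIP_HERMES=1 run_matrix
```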

Run:
  PLATFORM=https://demo-tenant.staging.moleculesai.app \
      ./scripts/test-all-runtimes-a2a-e2e.sh

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 04:36:23 -07:00
Hongming Wang
15dd1f26c3
Merge pull request #2513 from Molecule-AI/auto-sync/main-35cb6ba0
chore: sync main → staging (auto, merge 35cb6ba0)
2026-05-02 10:53:27 +00:00
Hongming Wang
8083fd8b7d
Merge branch 'staging' into auto-sync/main-35cb6ba0 2026-05-02 03:39:00 -07:00
Hongming Wang
1f77f41a80
Merge pull request #2514 from Molecule-AI/fix/honest-v1-tolerance-comments
docs(a2a): correct misleading v1-tolerance comments
2026-05-02 09:46:33 +00:00
Hongming Wang
119518a612
Merge pull request #2515 from Molecule-AI/fix/sweep-cf-tunnels-parallelize-deletes
fix(sweep-cf-tunnels): parallelize deletes + raise workflow timeout
2026-05-02 09:38:31 +00:00
Hongming Wang
8bf29b7d0e fix(sweep-cf-tunnels): parallelize deletes + raise workflow timeout
The hourly Sweep stale Cloudflare Tunnels job got cancelled mid-cleanup
on 2026-05-02 (run 25248788312, killed at 5min after deleting 424/672
stale tunnels). A second manual dispatch finished the remaining 254
fine, so the immediate backlog cleared, but two underlying bugs would
re-trip on the next big cleanup.

Bug 1: serial delete loop. The execute branch was a `while read; do
curl -X DELETE; done` pipeline at ~0.7s/tunnel — fine for the
steady-state cleanup of a handful, but a 600+ backlog needs ~7-8min.
This commit fans out to $SWEEP_CONCURRENCY (default 8) workers via
`xargs -P 8 -L 1 -I {} bash -c '...' _ {} < "$DELETE_PLAN"`. With 8x
parallelism the same 600+ list drains in ~60s. Notes:

  - We use stdin (`<`) not GNU's `xargs -a FILE` so the script stays
    portable to BSD xargs (matters for local-runner testing on macOS).
  - We pass ONLY the tunnel id on argv. xargs tokenizes on whitespace
    by default; tab-separating id+name on argv risks mangling. The
    name is kept in a side-channel id->name map ($NAME_MAP) and looked
    up by the worker only on failure, for FAIL_LOG readability.
  - Workers print exactly `OK` or `FAIL` on stdout; tally with
    `grep -c '^OK$' / '^FAIL$'`.
  - On non-zero FAILED, log the first 20 lines of $FAIL_LOG as
    "Failure detail (first 20):" — same diagnostic surface as before
    but consolidated so we don't spam logs on a flaky CF API.
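As a minimal sketch of the fan-out shape (the worker is stubbed with a fake delete; variable names follow the commit but the stub logic is hypothetical):

```shell
#!/usr/bin/env bash
set -euo pipefail

DELETE_PLAN=$(mktemp); RESULT_LOG=$(mktemp)
trap 'rm -f "$DELETE_PLAN" "$RESULT_LOG"' EXIT

# Stand-in plan; the real script fills this from the CF API listing.
printf 'tunnel-%02d\n' $(seq 1 12) > "$DELETE_PLAN"

delete_one() {
  # Real worker: curl -fsS -X DELETE "$CF_API/tunnels/$1" ...
  # Stub: every id ending in 0 or 5 "fails".
  case "$1" in *0|*5) echo FAIL ;; *) echo OK ;; esac
}
export -f delete_one

# Fan out over stdin (<), not `xargs -a`, so BSD xargs works too.
xargs -P 8 -I {} bash -c 'delete_one "$@"' _ {} < "$DELETE_PLAN" > "$RESULT_LOG"

DELETED=$(grep -c '^OK$' "$RESULT_LOG" || true)
FAILED=$(grep -c '^FAIL$' "$RESULT_LOG" || true)
echo "DELETED=$DELETED FAILED=$FAILED"
```

With this stub the tally prints DELETED=10 FAILED=2; the real worker does the curl DELETE and the side-channel name lookup on failure.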

Bug 2: workflow's 5-min cap was set as a hangs-detector but turned out
to be a real-job-too-slow detector. Raised to 30 min — generous
headroom for the ~60s steady-state run while still surfacing genuine
hangs (and in line with the sweep-cf-orphans companion job).

Bug 3 (drive-by): the existing trap was `trap 'rm -rf "$PAGES_DIR"'
EXIT`, which would have been silently overwritten by any later trap
registration. Replaced with a single `cleanup()` function that wipes
PAGES_DIR + all four new tempfiles (DELETE_PLAN, NAME_MAP, FAIL_LOG,
RESULT_LOG), called once via `trap cleanup EXIT`.

Verification:
  - bash -n scripts/ops/sweep-cf-tunnels.sh: clean
  - shellcheck -S warning scripts/ops/sweep-cf-tunnels.sh: clean
  - python3 yaml.safe_load on the workflow: clean
  - Synthetic 30-line delete plan with every 7th id sentinel'd to
    return {"success":false}: TEST PASS, DELETED=26 FAILED=4, FAIL_LOG
    side-channel name lookup verified.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 02:35:46 -07:00
Hongming Wang
fc33cf1131 docs(a2a): correct misleading v1-tolerance comments
Follow-up to PR #2509/#2510. The defensive v1-detection branches in
extract_attached_files (Python) and extractFilesFromTask (TypeScript)
were merged with comments claiming they fix a "v0→v1 silent-drop"
bug that surfaced as the 2026-05-01 hongming "no text content"
incident. Live test disproved that hypothesis: a2a-sdk's JSON-RPC
layer validates inbound requests against the v0 Pydantic union, so
v1 shapes are rejected at the request boundary — the v1 detection
branch is unreachable on the JSON-RPC ingress path. The actual root
cause of the hongming incident was the missing /workspace chown
fixed by CP PR #381 + test #382.

Update the comments to honestly describe these branches as
defensive future-proofing (kept against an eventual SDK schema
migration or in-process callers that construct Parts directly from
protobuf), not as fixes for an observed bug. Also trims
ChatTab.tsx's outbound-shape comment block from ~21 lines to a
3-line pointer to the SDK union.

Comment-only change. No behavior change. 86 workspace tests + 91
canvas tests still pass.
2026-05-02 02:33:00 -07:00
github-actions[bot]
03c1cbf12b chore: sync main → staging (auto) 2026-05-02 09:27:17 +00:00
Hongming Wang
35cb6ba089
Merge pull request #2512 from Molecule-AI/feat/register-codex-runtime
feat: register codex runtime + runtime native-MCP design docs
2026-05-02 02:26:56 -07:00
Hongming Wang
ce0188d5b4
Merge pull request #2499 from Molecule-AI/auto-sync/main-e7375348
chore: sync main → staging (auto, ff to e7375348)
2026-05-02 09:22:51 +00:00
7224276de0 feat: register codex runtime + runtime native-MCP design docs
Adds the OpenAI Codex CLI as a Molecule workspace runtime and lands
the design docs that drove the runtime native-MCP push parity work
across claude-code, hermes, openclaw, and codex.

manifest.json:
- Adds `codex` workspace_template entry pointing at the new
  Molecule-AI/molecule-ai-workspace-template-codex repo (initial
  commit landed there in parallel; 14 files / 1411 LOC). The
  workspace-server runtime registry already had `codex` in its
  fallback set — this entry makes it manifest-reachable in prod.

docs/integrations/:
- runtime-native-mcp-status.md — index across all four runtime streams
- codex-app-server-adapter-design.md — full design including v2 RPC
  sequence, executor skeleton, schema-vs-runtime drift findings
  (real codex 0.72 returns thread.id, schema says thread.threadId)
- hermes-platform-plugins-upstream-pr.md — pre-submission draft of
  the hermes-agent upstream PR

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 02:21:11 -07:00
Hongming Wang
3d7b4b70ff
Merge pull request #2511 from Molecule-AI/fix/redeploy-tolerate-e2e-teardown-race
fix(redeploy-staging): tolerate e2e-* teardown race in fleet HTTP 500
2026-05-02 09:19:45 +00:00
Hongming Wang
6e0eb2ddc9 fix(redeploy-staging): tolerate e2e-* teardown race in fleet HTTP 500
Recurring failure pattern in redeploy-tenants-on-staging:

  ##[error]redeploy-fleet returned HTTP 500
  ##[error]Process completed with exit code 1.

with the per-tenant breakdown in the response body showing the failures
were on ephemeral e2e-* tenants (saas/canvas/ext) whose parent E2E run
tore them down mid-redeploy — SSM exit=2 because the EC2 was already
terminating, or healthz timeout because the CF tunnel was already gone.
The actual operator-facing tenants (dryrun-98407, demo-prep, etc) all
rolled fine in the same call.

This shape repeats every staging push that overlaps an active E2E run.
The downstream `Verify each staging tenant /buildinfo matches published
SHA` step ALREADY distinguishes STALE vs UNREACHABLE for exactly this
reason (per #2402); only the top-level `if HTTP_CODE != 200; exit 1`
gate misclassifies the race.

Filter: HTTP 500 + every failed slug matches `^e2e-` → soft-warn and
fall through to verify. Any non-e2e-* failure or non-500 HTTP remains
a hard fail, with the failed non-e2e slugs surfaced in the error so
the operator doesn't have to dig the response body out of CI.

Verified the gate logic with 6 synthetic CP responses (happy / e2e-only
race / mixed real+e2e fail / non-200 / 200+ok=false / all-real-fail) —
all behave correctly.

prod's redeploy-tenants-on-main is intentionally NOT touched: prod CP
serves no e2e-* tenants, so the race can't occur there and the strict
gate is the right behavior.
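A minimal sketch of the gate logic (response parsing simplified to a slug list; the real step reads the failed slugs out of the CP JSON body):

```shell
# Returns 0 (pass / soft-warn) or 1 (hard fail).
# Arguments: HTTP code, then the slugs of the failed tenants, if any.
gate() {
  local http_code="$1"; shift
  local slug real=""
  [ "$http_code" = "200" ] && { echo "PASS"; return 0; }
  if [ "$http_code" = "500" ]; then
    for slug in "$@"; do
      case "$slug" in e2e-*) ;; *) real="$real $slug" ;; esac
    done
    if [ -z "$real" ]; then
      echo "SOFT-WARN: only e2e-* teardown races failed"
      return 0
    fi
    echo "FAIL: non-e2e tenants failed:$real"   # surfaced for the operator
    return 1
  fi
  echo "FAIL: HTTP $http_code"
  return 1
}

gate 500 e2e-canvas-1 e2e-saas-2          # soft-warn, falls through to verify
gate 500 e2e-canvas-1 demo-prep || true   # hard fail, non-e2e slug surfaced
```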

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 02:17:36 -07:00
Hongming Wang
1ce9b7f716
Merge pull request #2510 from Molecule-AI/fix/revert-canvas-v1-outbound
fix(canvas): revert v1 outbound file part shape — JSON-RPC layer rejects it
2026-05-02 08:43:53 +00:00
Hongming Wang
3ce7c11a13 fix(canvas): revert v1 outbound file part shape
The previous PR (#2509) flipped canvas outbound file parts to the v1
flat shape `{url, filename, mediaType}` based on a hypothesis that
a2a-sdk's JSON-RPC parser silently dropped v0 `{kind:"file", file:{...}}`
shapes. Live test shows the opposite: a2a-sdk's JSON-RPC layer
validates against the v0 Pydantic discriminated union (TextPart |
FilePart | DataPart), so v1 flat shape is rejected with:

    Invalid Request:
      params.message.parts.0.TextPart.text — Field required
      params.message.parts.0.FilePart.file — Field required
      params.message.parts.0.DataPart.data — Field required

The actual root cause of the user-visible "Error: message contained
no text content" was the missing `/workspace` chown (CP PR #381 +
test pin #382), not a wire-shape mismatch. Verified end-to-end by
sending a v0 image-only message after PR #381 + workspace re-provision
— agent receives the file, reads its bytes, and replies normally.

Reverting only the canvas outbound shape. Defensive v1-tolerance
stays in:
  - workspace/executor_helpers.py — extract_attached_files still
    accepts v1 protobuf parts in case a future client emits them or
    a future SDK release flips internal representation. Harmless on
    the v0 hot path.
  - canvas/message-parser.ts — extractFilesFromTask still tolerates
    v1 shape on incoming agent responses. Some agents may emit v1
    when their internal serializer round-trips through protobuf.

Tests stay green (91 canvas, 86 workspace).
2026-05-02 01:31:56 -07:00
Hongming Wang
bf83af0960
Merge pull request #2509 from Molecule-AI/fix/a2a-v1-file-part-shape
fix(a2a): send v1 file Part shape; tolerate v1 server-side
2026-05-02 08:13:52 +00:00
Hongming Wang
02a8841402 fix(a2a): send v1 file Part shape; tolerate v1 server-side
Image-only chats surface "Error: message contained no text content"
because canvas posts v0 `{kind:"file", file:{uri,name,mimeType}}` shapes
that the workspace runtime's a2a-sdk v1 protobuf parser silently drops:
v1 `Part` has fields `[text, raw, url, data, metadata, filename,
media_type]` and `ignore_unknown_fields=True` discards `kind`+`file`,
producing a fully-empty Part. With no text and no extracted file
attachments, the executor's "no text content" guard fires.

Three coordinated changes close the gap:

1. canvas/ChatTab.tsx — outbound file parts now carry the v1 flat
   shape `{url, filename, mediaType}` so the v1 protobuf parser
   populates Part fields instead of dropping them.
2. workspace/executor_helpers.py — extract_attached_files learns the
   v1 detection branch (non-empty `part.url` + `filename` +
   `media_type`) alongside the existing v0 RootModel and flat-file
   shapes. Defends every runtime that mounts the OSS wheel against
   the same drop, including any pre-fix client still on the wire.
3. canvas/message-parser.ts — extractFilesFromTask tolerates the v1
   shape on incoming agent responses too, so file chips render in
   chat history regardless of which Part shape the runtime emits.

Test pins:
- workspace/tests/test_executor_helpers.py:
  + v1 protobuf shape extraction
  + empty-Part defense (v0→v1 silent-drop fall-through returns [])
- canvas message-parser test:
  + v1 protobuf flat parts
  + filename fallback to URL basename for v1
2026-05-02 00:58:05 -07:00
Hongming Wang
b36eed97f6
Merge pull request #2508 from Molecule-AI/fix/sweep-cf-tunnels-arg-too-long
fix(sweep-cf-tunnels): buffer pages to disk to avoid argv ARG_MAX
2026-05-02 07:45:01 +00:00
Hongming Wang
a117a60eed fix(sweep-cf-tunnels): buffer pages to disk to avoid argv ARG_MAX
The page-merge loop passed the entire accumulating tunnel JSON to
python3 -c via argv on every iteration. On a busy account (verified
2026-05-02: 672 tunnels, 14 pages on Hongmingwangrabbit account) this
exceeds the GH Ubuntu runner's combined argv+envp limit (~128 KB) and
dies with `python3: Argument list too long` at exit 126 — the workflow
has been silently failing this way since the very first run that hit a
real account, masked earlier by a missing-CF_ACCOUNT_ID secret check.

Buffer each page response to a file under a temp dir, merge from disk
at the end. Also bumps the page cap from 20 to 40 (1000 → 2000 tunnel
ceiling) so the existing soft-cap warning has headroom; the disk-merge
shape is O(n) in tunnel count rather than the previous O(n^2) so the
larger ceiling is cheap.
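The disk-buffered merge shape, roughly (page contents and file layout are illustrative; the real script pages the Cloudflare API):

```shell
set -euo pipefail
PAGES_DIR=$(mktemp -d)
trap 'rm -rf "$PAGES_DIR"' EXIT

# Real loop: curl each API page straight to its own file instead of
# accumulating JSON in a shell variable that rides argv every iteration.
for page in 1 2 3; do
  printf '{"result":[{"id":"t-%s"}]}' "$page" > "$PAGES_DIR/page-$page.json"
done

# Merge from disk: python3 opens the files itself, so argv stays tiny
# regardless of tunnel count — O(n) total I/O, no ARG_MAX exposure.
COUNT=$(python3 - "$PAGES_DIR" <<'PY'
import json, pathlib, sys
pages = sorted(pathlib.Path(sys.argv[1]).glob("page-*.json"))
tunnels = [t for p in pages for t in json.loads(p.read_text())["result"]]
print(len(tunnels))
PY
)
echo "merged $COUNT tunnels"
```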

Verified locally against the live account (672 tunnels): script now
runs cleanly to the existing MAX_DELETE_PCT safety gate, which trips
at 99% > 90% as designed and surfaces the actual orphan backlog for
operator-driven cleanup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 00:42:25 -07:00
Hongming Wang
cdbf54beed
Merge pull request #2507 from Molecule-AI/fix/canary-prompt-explicit-echo
fix(canary): reframe smoke prompt to give GPT-4o explicit permission to echo
2026-05-02 06:55:39 +00:00
Hongming Wang
fa9e29f2f5 fix(canary): reframe smoke prompt to give GPT-4o explicit permission to echo
Canary started flaking 2026-05-01 22:11 with model-refusal replies:
  - "I'm unable to do that."
  - "I'm unable to fulfill that request. Can I assist you with anything else?"
  - "I'm unable to reply with responses that don't allow me to fulfill tasks…"
3 fails / 10 recent runs ≈ 30% flake.

Trigger: 2026-04-30's Platform Capabilities preamble (#2332) added the
directive "Use them proactively" to the top of every system prompt.
Combined with the heavy A2A + HMA tool docs further down, the model
reads the contrived bare-echo prompt ("Reply with exactly: PONG") as
out-of-role and intermittently refuses.

Real user prompts don't hit this — only the synthetic smoke prompt does,
so the right fix is in the canary's prompt phrasing, not the platform's
system prompt (which is correctly priming agents toward tool use). New
phrasing explicitly tells the model "this is a smoke test" and "no
tools or memory are needed" so it has permission to comply.

Also updates the child workspace's CHILD_PONG prompt with the same
framing — same failure mode would have hit it once full-mode runs again.

No code change to system prompt, no test infra change. Just two prompt
strings + a load-bearing comment so future readers don't trim back to
the brittle phrasing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 23:53:24 -07:00
Hongming Wang
12807962d2
Merge pull request #2506 from Molecule-AI/ci/secret-scan-required-and-precommit
secret-scan: align local pre-commit + extend drift lint (closes #1569 root)
2026-05-02 06:52:28 +00:00
Hongming Wang
0d25922f91
Merge branch 'staging' into ci/secret-scan-required-and-precommit 2026-05-01 23:48:50 -07:00
Hongming Wang
43c234df35 secret-scan: align local pre-commit + extend drift lint (closes #1569 root)
#1569 Phase 1 discovery (2026-05-02) found six historical credential
exposures in molecule-core git history. All confirmed dead — but the
reason they got committed in the first place was that the local
pre-commit hook had two gaps that the canonical CI gate (and the
runtime's hook) didn't:

  1. **Pattern set was incomplete.** Local hook checked
     `sk-ant-|sk-proj-|ghp_|gho_|AKIA|mol_pk_|cfut_` — missing
     `ghs_*`, `ghu_*`, `ghr_*`, `github_pat_*`, `sk-svcacct-`,
     `sk-cp-`, `xox[baprs]-`, `ASIA*`. The historical leaks were 5×
     `ghs_*` (App installation tokens) + 1× `github_pat_*` — none of
     which the local hook would have caught even if it ran.
  2. **`*.md` and `docs/` were skip-listed.** The leaked tokens lived
     in `tick-reflections-temp.md`, `qa-audit-2026-04-21.md`, and
     `docs/incidents/INCIDENT_LOG.md` — exactly the file types the
     skip-list excluded. The hook ran and silently passed.

This commit:

- Replaces the local hook's hard-coded inline regex with the canonical
  13-pattern array (byte-aligned with `.github/workflows/secret-scan.yml`
  and the workspace runtime's `pre-commit-checks.sh`).
- Removes the `\.md$|docs/` skip — keeps only binary, lockfile, and
  hook-self exclusions.
- Adds the local hook to `lint_secret_pattern_drift.py` as an in-repo
  consumer (read-from-disk, no network — the hook lives in the same
  checkout the lint runs against). Drift now fails the lint when
  canonical changes without the local hook updating in lockstep.
- Adds `.githooks/pre-commit` to the drift-lint workflow's path
  filter so consumer-side edits also trigger the lint.
- Adopts the canonical's "don't echo the matched value" defense (the
  prior version would have round-tripped a leaked credential into
  scrollback / CI logs).

Verified: `python3 .github/scripts/lint_secret_pattern_drift.py`
reports both consumers aligned at 13 patterns. The hook's existing
six other gates (canvas 'use client', dark theme, SQL injection,
go-build, etc.) are untouched.

Companion change (already applied via API, no diff here):
`Scan diff for credential-shaped strings` is now in the required-checks
list on both `staging` and `main` branch protection — was previously a
soft gate (workflow ran, exited 1, but didn't block merge).
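The shared pattern-array shape, sketched (patterns abridged from the 13 canonical ones; the helper name is hypothetical):

```shell
# Abridged pattern set — the canonical array has 13 entries and must stay
# byte-aligned across CI, the runtime hook, and the local hook.
SECRET_PATTERNS=(
  'sk-ant-[A-Za-z0-9_-]{20,}'
  'gh[posur]_[A-Za-z0-9]{20,}'
  'github_pat_[A-Za-z0-9_]{20,}'
  'A(KIA|SIA)[0-9A-Z]{16}'
)

scan_text() {
  local hits=0 pat
  for pat in "${SECRET_PATTERNS[@]}"; do
    # -q: never echo the matched value into scrollback or CI logs.
    grep -Eq "$pat" <<<"$1" && hits=$((hits + 1))
  done
  echo "$hits"
}

HITS=$(scan_text 'deploy key ghs_ABCDEFGHIJKLMNOPQRSTUV noted in incident log')
echo "patterns matched: $HITS"
```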

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 23:47:56 -07:00
Hongming Wang
435e13e57e
Merge pull request #2504 from Molecule-AI/fix/restart-stop-retry-then-flag
fix(restart): retry cpProv.Stop with backoff + flag exhaustion as LEAK-SUSPECT
2026-05-02 06:40:58 +00:00
Hongming Wang
f18ee8598a fix(restart): retry cpProv.Stop with backoff + flag exhaustion as LEAK-SUSPECT
Both restart paths (interactive Restart handler + auto-restart's
stopForRestart) used to log-and-continue on cpProv.Stop failure. After
PR #2500 made CPProvisioner.Stop surface CP non-2xx as an error, those
paths became the actual leak generator: every transient CP/AWS hiccup =
one orphan EC2 alongside the freshly provisioned one. The 13 zombie
workspace EC2s on demo-prep staging traced to this exact path.

Adds cpStopWithRetry helper with bounded exponential backoff (3 attempts,
1s/2s/4s). Different policy from workspace_crud.go's Delete handler:
Delete returns 500 to the client on Stop failure (loud-fail-and-block —
user asked to destroy, silent leak unacceptable), whereas Restart's
contract is "make the workspace alive again" — refusing to reprovision
strands the user with a dead workspace. So this helper retries to absorb
transient failures, then on exhaustion emits a structured `LEAK-SUSPECT`
log line for the (forthcoming) CP-side workspace orphan reconciler to
correlate. Caller proceeds to reprovision regardless.

ctx-cancel exits the retry early without sleeping the backoff (matters
during shutdown drain); the cancel path emits a distinct log line and
deliberately does NOT emit LEAK-SUSPECT — operator-cancel and
retry-exhaustion are different signals and conflating them would noise
up the orphan-reconciler queue with workspaces we never had a chance to
retry.

Tests: 5 behavior tests covering every branch (no-op, first-try success,
eventual success, exhaustion, ctx-cancel) + 1 AST gate that pins the
helper-only invariant (any future inline `h.cpProv.Stop(...)` in
workspace_restart.go fires the gate, mutation-tested).
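The real helper is Go; purely as a shape sketch in shell (Stop call stubbed, sleeps zeroed so the sketch runs instantly — the real backoff is 1s/2s/4s):

```shell
ATTEMPTS_MADE=0
stop_workspace() {                    # stub: fails twice, then succeeds
  ATTEMPTS_MADE=$((ATTEMPTS_MADE + 1))
  [ "$ATTEMPTS_MADE" -ge 3 ]
}

stop_with_retry() {
  local attempt delay=0               # real helper sleeps 1s, 2s, 4s
  for attempt in 1 2 3; do
    stop_workspace && return 0
    sleep "$delay"                    # ctx-cancel would exit here early
  done
  # Exhaustion: structured breadcrumb for the orphan reconciler;
  # the caller still proceeds to reprovision.
  echo "LEAK-SUSPECT: cpProv.Stop exhausted retries" >&2
  return 1
}

stop_with_retry && echo "stopped after $ATTEMPTS_MADE attempts"
```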

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 23:36:38 -07:00
Hongming Wang
d64570a665
Merge pull request #2502 from Molecule-AI/fix/redeploy-main-use-staging-sha-tag
fix(redeploy-main): pull staging-<head_sha> instead of stale :latest
2026-05-02 06:30:32 +00:00
github-actions[bot]
2447c3da11
Merge pull request #2501 from Molecule-AI/staging
staging → main: auto-promote 23ee9b5
2026-05-01 23:27:52 -07:00
Hongming Wang
115f1f5e64 fix(redeploy-main): pull staging-<head_sha> instead of stale :latest
Auto-trigger from publish-workspace-server-image now resolves
target_tag to the just-published `staging-<short_head_sha>` digest
instead of `:latest`. Bypasses the dead retag path that was leaving
prod tenants on a 4-day-old image.

The chain pre-fix:
  publish-image  → pushes :staging-<sha> + :staging-latest (NOT :latest)
  canary-verify  → soft-skips (CANARY_TENANT_URLS unset, fleet not stood up)
  promote-latest → manual workflow_dispatch only, last run 2026-04-28
  redeploy-main  → pulls :latest → 2026-04-28 digest → all 3 tenants STALE

Today's incident:
  e7375348 (main) → publish-image green → redeploy fired → tenants
  pulled :latest (76c604fb digest from prior canary-verified state) →
  hongming /buildinfo returned 76c604fb instead of e7375348 → verify
  step correctly flagged 3/3 STALE → workflow failed.

Today's PRs (#2473 smoke wedge, #2487 panic recovery, #2496 sweeper
follow-ups) shipped to GHCR as :staging-<sha> but never reached prod.

Fix:
  - workflow_dispatch input default '' (was 'latest'); empty input
    triggers auto-compute path
  - new "Compute target tag" step resolves:
    1. operator-supplied input → verbatim (rollback / pin)
    2. else → staging-<short_head_sha> (auto)
  - verify step's operator-pin detection now allows
    staging-<short_head_sha> as a non-pin (verification still runs)

When canary fleet is real, this workflow should chain on
canary-verify completion (workflow_run from canary-verify, gated on
promote-to-latest success) instead of publish-image — separate,
smaller PR. Today's fix unblocks prod deploys without that
prerequisite.

Companion: promote-latest.yml dispatched 2026-05-02 against
e7375348 to unstick existing prod tenants. This PR prevents
recurrence.
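The resolution order, sketched (function name hypothetical; the real logic lives in the workflow's "Compute target tag" step, and the 8-char short SHA is assumed to match the staging-<sha> tags cited above):

```shell
compute_target_tag() {
  local operator_input="$1" head_sha="$2"
  if [ -n "$operator_input" ]; then
    echo "$operator_input"               # 1. operator pin: verbatim (rollback)
  else
    echo "staging-${head_sha:0:8}"       # 2. auto: just-published tag
  fi
}

compute_target_tag ""       e7375348e2deadbeef   # → staging-e7375348
compute_target_tag "latest" e7375348e2deadbeef   # → latest
```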

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 23:17:59 -07:00
Hongming Wang
23ee9b5e53
Merge pull request #2500 from Molecule-AI/fix/cp-provisioner-stop-status-check
fix(cp-provisioner): surface CP non-2xx on Stop to plug EC2 leak
2026-05-02 06:02:08 +00:00
Hongming Wang
5167e482d0 fix(cp-provisioner): surface CP non-2xx on Stop to plug EC2 leak
http.Client.Do only errors on transport failure — a CP 5xx (AWS
hiccup, missing IAM, transient outage) was silently treated as
success. Workspace row then flipped to status='removed' and the EC2
stayed alive forever with no DB pointer (the "orphan EC2 on a
0-customer account" scenario flagged in workspace_crud.go #1843).
Found while triaging 13 zombie workspace EC2s on demo-prep staging.

Adds a status-code check that returns an error tagged with the
workspace ID + status + bounded body excerpt, so the existing
loud-fail path in workspace_crud.go's Delete handler can populate
stop_failures and surface a 500. Body read is io.LimitReader-capped
at 512 bytes to keep error logs sane during a CP outage.

Tests: 4 new (5xx surfaces, 4xx surfaces, 2xx variants 200/202/204
all succeed, long body is truncated). Test-first verified — the
first three fail on the buggy code and all four pass on the fix.
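A curl-flavoured analogue of the check (the actual fix is Go; `http.Client.Do`, like a bare curl, treats a 5xx response as transport-level success):

```shell
# Classify a CP Stop response: transport success is NOT HTTP success.
check_stop_response() {
  local code="$1" body="$2"
  case "$code" in
    2??) return 0 ;;
    *)
      # Bound the body excerpt (cf. io.LimitReader at 512 bytes) so a
      # CP outage can't flood the error logs.
      printf 'stop failed: status=%s body=%.512s\n' "$code" "$body" >&2
      return 1 ;;
  esac
}

check_stop_response 202 ''           && echo "202: success"
check_stop_response 500 'AWS hiccup' || echo "500: surfaced as error"
```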

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 22:59:01 -07:00
Hongming Wang
e7375348e2
Merge pull request #2442 from Molecule-AI/staging
staging → main: auto-promote 5b70204
2026-05-01 22:52:03 -07:00
Hongming Wang
81ee0cbd55
Merge pull request #2498 from Molecule-AI/auto-sync/main-76c604fb
chore: sync main → staging (manual, merge 76c604fb)
2026-05-02 05:34:16 +00:00
Hongming Wang
dca442e87a Merge main into staging — backfill 76c604fb merge commit
Mirrors what auto-sync-main-to-staging.yml would have produced if its
on:push trigger had fired for the GITHUB_TOKEN-initiated merge of PR
#2437 (staging→main) on 2026-05-01. Per the diagnosis in PR #2497,
that push was suppressed by GitHub's no-recursion rule, leaving
staging missing main's merge commit and dead-locking PR #2442
(Phase 2 promote) on mergeStateStatus: BEHIND.

This sync absorbs only the merge commit 76c604fb (no code-change
diff — it's a merge of staging back to itself from a prior round).
The proper fix (PR #2497) makes this self-healing for future rounds.
2026-05-01 22:31:53 -07:00
Hongming Wang
bae34039e2
Merge pull request #2497 from Molecule-AI/ci/fix-auto-sync-no-recursion
ci(auto-sync): App-token dispatch + ubuntu-latest + workflow_dispatch
2026-05-02 05:30:52 +00:00
Hongming Wang
3d8a0a58fa ci(auto-sync): App-token dispatch + ubuntu-latest + workflow_dispatch
auto-sync-main-to-staging.yml hasn't fired since 2026-04-29 despite
multiple staging→main promotes since. The promote PR #2442 (Phase 2)
has been wedged on `mergeStateStatus: BEHIND` for hours because
staging is missing the merge commit from PR #2437.

Three compounding bugs, all fixed here:

1. **GitHub no-recursion suppresses the `on: push` trigger.**
   When the merge queue lands a staging→main promote, the resulting
   push to main is "by GITHUB_TOKEN", and per
   https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow
   that push event does NOT fire any downstream workflows. Verified
   empirically against SHA 76c604fb (PR #2437): exactly ONE workflow
   fired on that push — `publish-workspace-server-image`, dispatched
   explicitly by auto-promote-staging.yml's polling tail with an App
   token (the documented #2357 workaround). Every other `on: push`
   workflow on main, including auto-sync, was silently suppressed.

   Same fix extended here: auto-promote-staging.yml's polling tail
   now ALSO dispatches `auto-sync-main-to-staging.yml --ref main`
   via the App token after the merge lands. App-initiated dispatch
   propagates `workflow_run` cascades, which is what the publish
   tail relies on too. Failure path: emits `::error::` with the
   recovery command — operator runs it once and the next promote
   self-heals.

   auto-sync.yml gains `workflow_dispatch:` so it can be invoked
   from the dispatch above + manually if a future promote also
   misses (defense in depth).

2. **`runs-on: [self-hosted, macos, arm64]` was wrong for this repo.**
   Comment claimed "matches the rest of this repo's workflows" — false:
   this is the ONLY workflow in molecule-core/.github/workflows/ with
   a non-ubuntu runs-on. Copy-paste artefact from molecule-controlplane
   (which IS private and has a Mac runner). molecule-core has no Mac
   runner registered, so even when the trigger DID fire (the 3 historic
   manual-UI merges), the job would have sat unassigned if the runner
   were offline. Switched to `ubuntu-latest` to match every other
   workflow in this repo.

3. **The `on: push` trigger remains** as a defense-in-depth path for
   the rare case of a manual UI merge by a real user (which uses
   their PAT and DOES fire downstream workflows — confirmed via the
   2026-04-29 d35a2420 run with `triggering_actor=HongmingWang-Rabbit`
   that fired 16 workflows including auto-sync). Belt-and-suspenders.

Long-term: switching auto-promote's `gh pr merge --auto` call to use
the App token (instead of GITHUB_TOKEN) would let `on: push` triggers
fire naturally and obviate the need for the explicit dispatches in
the polling tail. Tracked in #2357 — out of scope here.

Operator recovery for the current Phase 2 wedge: after this lands on
staging, dispatch auto-sync once via
`gh workflow run auto-sync-main-to-staging.yml --ref main` to
backfill the missed sync from 76c604fb. PR #2442 will go from
BEHIND → CLEAN and auto-merge.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 22:28:35 -07:00
Hongming Wang
91766e68e7
Merge pull request #2496 from Molecule-AI/followup/sweeper-cleanup
test(sweeper): integration coverage + accessor consolidation (#2494 follow-ups)
2026-05-02 05:03:41 +00:00
Hongming Wang
77882c920e
Merge pull request #2495 from Molecule-AI/harness/phase-2-followup-review-nits
harness(phase-2-followup): fix assert_status mislabel + honest race comment
2026-05-02 05:02:15 +00:00
Hongming Wang
0064f02c00 test(sweeper): integration coverage for manifest-override + accessor consolidation
Two follow-ups from PR #2494's review:

1. Two new sweep tests exercise the lookup path through
   sweepStuckProvisioning end-to-end:
     - ManifestOverrideSparesRow: claude-code 11min old, manifest=20min
       → no UPDATE, no broadcast (sparing works through the sweeper)
     - ManifestOverrideStillFlipsPastDeadline: claude-code 21min old,
       manifest=20min → flipped + payload.timeout_secs=1200
   Closes the gap that the unit-test on provisioningTimeoutFor alone
   left open: a future refactor could drop the lookup arg from the
   sweeper's call and only the unit test caught it. Verified by
   regression-injecting `lookup→nil` in sweepStuckProvisioning — both
   new tests fail, the old ones still pass.

2. addProvisionTimeoutMs now goes through ProvisionTimeoutSecondsForRuntime
   instead of calling provisionTimeouts.get directly. Single accessor
   path for the same data — the canvas response and the sweeper now
   resolve identically by construction.

No production behavior change; tests + accessor cleanup only.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 22:00:36 -07:00
Hongming Wang
a15972066b harness(phase-2-followup): fix assert_status mislabel + honest race comment
Two review nits from PR #2493 that don't affect correctness but matter
for honesty in the harness's own self-documentation:

1. tenant-isolation.sh F3/F4 used assert_status for non-HTTP values.
   LEAKED_INTO_ALPHA/BETA are jq-derived counts, not HTTP codes — but
   the assertion ran through assert_status, which formats the result
   as "(HTTP 0)". Anyone reading the test output would believe these
   assertions involved an HTTP call. Adds a plain `assert` helper
   matching per-tenant-independence.sh's pattern, and uses it on the
   two count comparisons.

2. per-tenant-independence.sh Phase F over-claimed coverage.
   The comment said the concurrent-INSERT race catches "shared-pool
   corruption" + "lib/pq prepared-statement cache collision". Both
   are real failure modes — but neither can fire across tenants in
   THIS topology, because each tenant owns its own DATABASE_URL and
   its own postgres-{alpha,beta} container. The comment now lists
   only what the test actually catches (redis cross-keyspace bleed,
   shared cp-stub state corruption, cf-proxy buffer mixup) and notes
   that a future shared-Postgres variant is the right place for the
   lib/pq cache assertion.

No behavioural change — both replays still pass 13/13 + 12/12, all six
replays pass on a clean run-all-replays.sh boot.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 22:00:04 -07:00
Hongming Wang
40e09508b6
Merge pull request #2494 from Molecule-AI/fix/sweeper-honor-template-timeout
fix(sweeper): honour template-manifest provision_timeout_seconds
2026-05-02 04:47:53 +00:00
Hongming Wang
18edf88d59 fix(sweeper): honour template-manifest provision_timeout_seconds
Real wiring gap discovered while investigating the issue #2486 cluster
of prod claude-code workspaces that failed at exactly 10m. The
runtimeProvisionTimeoutsCache (#2054 phase 2) reads
runtime_config.provision_timeout_seconds from each template's
config.yaml so the **canvas** spinner respects per-template timeouts —
but the **sweeper** in registry/provisiontimeout.go hardcoded 10 min
(claude-code) / 30 min (hermes) and never consulted the manifest. So a
template that declared a longer window had a UI that waited correctly
but a sweeper that killed the row at the hardcoded floor anyway.

Resolution order pinned by new TestProvisioningTimeout_ManifestOverride:

  1. PROVISION_TIMEOUT_SECONDS env (ops-debug global override)
  2. Template manifest lookup (per-runtime, beats hermes default too)
  3. Hermes default (30 min — CP bootstrap-watcher 25 min + 5 min slack)
  4. DefaultProvisioningTimeout (10 min)

Wiring:
  - registry: new RuntimeTimeoutLookup function type, threaded through
    StartProvisioningTimeoutSweep + sweepStuckProvisioning + the
    pre-existing provisioningTimeoutFor.
  - handlers: ProvisionTimeoutSecondsForRuntime exposes the cache's
    lookup as a method so main.go can pass it without breaking the
    handlers→registry import direction.
  - cmd/server/main.go: wire wh.ProvisionTimeoutSecondsForRuntime into
    the sweep boot.

Verified:
  - go test -race ./... passes (every workspace-server package).
  - Regression-injected the lookup arm: 3 manifest-override subcases
    fail with the actual-vs-expected gap, confirming the new test is
    load-bearing.
  - The original two timeout tests (env-override, hermes default) keep
    passing — `lookup=nil` argument preserves their semantics.

Operator action enabled: a template wanting a 15-min window can now
just set `runtime_config.provision_timeout_seconds: 900` in its
config.yaml and the sweeper honours it on the next workspace-server
restart.

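The operator-facing knob, as a hedged sketch — only the
`runtime_config.provision_timeout_seconds` key path is confirmed by the
commit; any surrounding manifest keys would be template-specific:

```yaml
# template config.yaml fragment (illustrative)
runtime_config:
  provision_timeout_seconds: 900   # 15-min window; sweeper honours it after restart
```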
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 21:44:42 -07:00
Hongming Wang
3ca2f40e16
Merge pull request #2493 from Molecule-AI/harness/phase-2-multi-tenant
harness(phase-2): multi-tenant compose + cross-tenant isolation replays
2026-05-02 04:39:09 +00:00
Hongming Wang
c275716005 harness(phase-2): multi-tenant compose + cross-tenant isolation replays
Brings the local harness from "single tenant covering the request path"
to "two tenants covering both the request path AND the per-tenant
isolation boundary" — the same shape production runs (one EC2 + one
Postgres + one MOLECULE_ORG_ID per tenant).

Why this matters: the four prior replays exercise the SaaS request
path against one tenant. They cannot prove that TenantGuard rejects
a misrouted request (production CF tunnel + AWS LB are the failure
surface), nor that two tenants doing legitimate work in parallel
keep their `activity_logs` / `workspaces` / connection-pool state
partitioned. Both are real bug classes — TenantGuard allowlist drift
shipped #2398, lib/pq prepared-statement cache collision is documented
as an org-wide hazard.

What changed:

1. compose.yml — split into two tenants.
   tenant-alpha + postgres-alpha + tenant-beta + postgres-beta + the
   shared cp-stub, redis, cf-proxy. Each tenant gets a distinct
   ADMIN_TOKEN + MOLECULE_ORG_ID and its own Postgres database. cf-proxy
   depends on both tenants becoming healthy.

2. cf-proxy/nginx.conf — Host-header → tenant routing.
   `map $host $tenant_upstream` resolves the right backend per request.
   Required `resolver 127.0.0.11 valid=30s ipv6=off;` because nginx
   needs an explicit DNS resolver to use a variable in `proxy_pass`
   (literal hostnames resolve once at startup; variables resolve per
   request — without the resolver nginx fails closed with 502).
   `server_name` lists both tenants + the legacy alias so unknown Host
   headers don't silently route to a default and mask routing bugs.

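The Host-header → tenant routing described above can be sketched as an
nginx fragment. The upstream names match the compose services, but the
hostnames, port, and error handling here are assumptions — the actual
nginx.conf will differ in detail:

```nginx
# Docker's embedded DNS; a variable in proxy_pass is resolved per
# request, while a literal hostname is resolved only once at startup.
resolver 127.0.0.11 valid=30s ipv6=off;

map $host $tenant_upstream {
    alpha.localtest.me  http://tenant-alpha:8080;  # assumed hostnames/ports
    beta.localtest.me   http://tenant-beta:8080;
    default             "";                        # unknown Host → no upstream
}

server {
    listen 80;
    # List every tenant plus the legacy alias so an unknown Host header
    # cannot silently route to a default and mask routing bugs.
    server_name alpha.localtest.me beta.localtest.me legacy.localtest.me;

    location / {
        if ($tenant_upstream = "") { return 421; }  # fail closed, loudly
        proxy_pass $tenant_upstream;
        proxy_set_header Host $host;
    }
}
```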
3. _curl.sh — per-tenant + cross-tenant-negative helpers.
   `curl_alpha_admin` / `curl_beta_admin` set the right
   Host + Authorization + X-Molecule-Org-Id triple.
   `curl_alpha_creds_at_beta` / `curl_beta_creds_at_alpha` exist
   precisely to make WRONG requests (replays use them to assert
   TenantGuard rejects). `psql_exec_alpha` / `psql_exec_beta` shell out
   per-tenant Postgres exec. Legacy aliases (`curl_admin`, `psql_exec`)
   keep the four pre-Phase-2 replays working without edits.

4. seed.sh — registers parent+child workspaces in BOTH tenants.
   Captures server-generated IDs via `jq -r '.id'` (POST /workspaces
   ignores body.id, so the older client-side mint silently desynced
   from the workspaces table and broke FK-dependent replays). Stashes
   `ALPHA_PARENT_ID` / `ALPHA_CHILD_ID` / `BETA_PARENT_ID` /
   `BETA_CHILD_ID` to .seed.env, plus legacy `ALPHA_ID` / `BETA_ID`
   aliases for backwards compat with chat-history / channel-envelope.

5. New replays.

   tenant-isolation.sh (13 assertions) — TenantGuard 404s any request
   whose X-Molecule-Org-Id doesn't match the container's
   MOLECULE_ORG_ID. Asserts the 404 body has zero
   tenant/org/forbidden/denied keywords (existence of a tenant must
   not be probable from the outside). Covers cross-tenant routing
   misconfigure + allowlist drift + missing-org-header.

   per-tenant-independence.sh (12 assertions) — both tenants seed
   activity_logs in parallel with distinct row counts (3 vs 5) and
   confirm each tenant's history endpoint returns exactly its own
   counts. Then a concurrent INSERT race (10 rows per tenant in
   parallel via `&` + wait) catches shared-pool corruption +
   prepared-statement cache poisoning + redis cross-keyspace bleed.

6. Bug fix: down.sh + dump-logs SECRETS_ENCRYPTION_KEY validation.
   `docker compose down -v` validates the entire compose file even
   though it doesn't read the env. up.sh generates a per-run key into
   its own shell — down.sh runs in a fresh shell that wouldn't see it,
   so without a placeholder `compose down` exited non-zero before
   removing volumes. Workspaces silently leaked into the next
   ./up.sh + seed.sh boot. Caught when tenant-isolation.sh F1/F2 saw
   3× duplicate alpha-parent rows accumulated across three prior runs.
   Same fix applied to the workflow's dump-logs step.

7. requirements.txt — pin molecule-ai-workspace-runtime>=0.1.78.
   channel-envelope-trust-boundary.sh imports from `molecule_runtime.*`
   (the wheel-rewritten path) so it catches the failure mode where
   the wheel build silently strips a fix that unit tests on local
   source still pass. CI was failing this replay because the wheel
   wasn't installed — caught in the staging push run from #2492.

8. .github/workflows/harness-replays.yml — Phase 2 plumbing.
   * Removed /etc/hosts step (Host-header path eliminated the need;
     scripts already source _curl.sh).
   * Updated dump-logs to reference the new service names
     (tenant-alpha + tenant-beta + postgres-alpha + postgres-beta).
   * Added SECRETS_ENCRYPTION_KEY placeholder env on the dump step.

Verified: ./run-all-replays.sh from a clean state — 6/6 passed
(buildinfo-stale-image, channel-envelope-trust-boundary, chat-history,
peer-discovery-404, per-tenant-independence, tenant-isolation).

Roadmap section updated: Phase 2 marked shipped. Phase 3 promoted to
"replace cp-stub with real molecule-controlplane Docker build + env
coherence lint."

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 21:36:40 -07:00
Hongming Wang
093e5038d2
Merge pull request #2491 from Molecule-AI/followup/provision-panic-test-hardening
test(provision): harden panic tests with re-raise guard + broadcast count
2026-05-02 03:18:03 +00:00