sweep(internal#219 §1): PR#386
Some checks failed
E2E Staging SaaS (full lifecycle) / E2E Staging SaaS (push) Failing after 4m45s
CI / Detect changes (push) Waiting to run
Harness Replays / detect-changes (push) Failing after 12s
Harness Replays / Harness Replays (push) Has been skipped
CI / Platform (Go) (push) Blocked by required conditions
CI / Canvas (Next.js) (push) Blocked by required conditions
CI / Shellcheck (E2E scripts) (push) Blocked by required conditions
CI / Canvas Deploy Reminder (push) Blocked by required conditions
CI / Python Lint & Test (push) Blocked by required conditions
E2E API Smoke Test / detect-changes (push) Waiting to run
E2E API Smoke Test / E2E API Smoke Test (push) Blocked by required conditions
E2E Staging Canvas (Playwright) / detect-changes (push) Waiting to run
E2E Staging Canvas (Playwright) / Canvas tabs E2E (push) Blocked by required conditions
Handlers Postgres Integration / detect-changes (push) Waiting to run
E2E Staging External Runtime / E2E Staging External Runtime (push) Successful in 5m18s
Handlers Postgres Integration / Handlers Postgres Integration (push) Blocked by required conditions
Secret scan / Scan diff for credential-shaped strings (push) Waiting to run
Commit 03b27adeab

.gitea/workflows/canary-staging.yml (new file, 310 lines added)
@@ -0,0 +1,310 @@
name: Canary — staging SaaS smoke (every 30 min)
|
||||
|
||||
# Ported from .github/workflows/canary-staging.yml on 2026-05-11 per RFC
|
||||
# internal#219 §1 sweep. Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
|
||||
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
|
||||
# - Dropped `merge_group:` (no Gitea merge queue).
|
||||
# - Dropped `environment:` blocks (Gitea has no environments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL pinned per
|
||||
# feedback_act_runner_github_server_url.
|
||||
# - `continue-on-error: true` on each job (RFC §1 contract).
|
||||
#
|
||||
|
||||
# Minimum viable health check: provisions one workspace on a fresh
|
||||
# staging org, sends one A2A message, verifies PONG, tears down. ~8 min
|
||||
# wall clock. Pages on failure by opening a Gitea issue; auto-closes the
|
||||
# issue on the next green run.
|
||||
#
|
||||
# The full-SaaS workflow (e2e-staging-saas.yml) covers the broader surface
|
||||
# but runs only on provisioning-critical pushes + nightly — this one
|
||||
# catches drift in the 30-min window between those runs (AMI health, CF
|
||||
# cert rotation, WorkOS session stability, etc.).
|
||||
#
|
||||
# Lean mode: E2E_MODE=canary skips the child workspace + HMA memory +
|
||||
# peers/activity checks. One parent workspace + one A2A turn is enough
|
||||
# to signal "SaaS stack end-to-end is alive."
|
||||
|
||||
on:
|
||||
schedule:
|
||||
# Every 30 min. Cron on GitHub-hosted runners has a known drift of
|
||||
# a few minutes under load — that's fine for a canary.
|
||||
- cron: '*/30 * * * *'
|
||||
# Serialise with the full-SaaS workflow so they don't contend for the
|
||||
# same org-create quota on staging. Different group key from
|
||||
# e2e-staging-saas since we don't mind queueing canaries behind one
|
||||
# full run, but two canaries SHOULD queue against each other.
|
||||
concurrency:
|
||||
group: canary-staging
|
||||
cancel-in-progress: false
|
||||
|
||||
permissions:
|
||||
# Needed to open / close the alerting issue.
|
||||
issues: write
|
||||
contents: read
|
||||
|
||||
env:
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
jobs:
|
||||
canary:
|
||||
name: Canary smoke
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
# 25-min timeout gives headroom over the 15-min TLS-readiness deadline in
|
||||
# tests/e2e/test_staging_full_saas.sh (#2107). Without the buffer
|
||||
# the job is killed at the wall-clock 15:00 mark BEFORE the bash
|
||||
# `fail` + diagnostic burst can fire, leaving every cancellation
|
||||
# silent. Sibling staging E2E jobs run at 20-45 min — keeping
|
||||
# canary tighter than them so a true wedge still surfaces here
|
||||
# first.
|
||||
timeout-minutes: 25
|
||||
|
||||
env:
|
||||
MOLECULE_CP_URL: https://staging-api.moleculesai.app
|
||||
MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
# MiniMax is the canary's PRIMARY LLM auth path post-2026-05-04.
|
||||
# Switched from hermes+OpenAI after #2578 (the staging OpenAI key
|
||||
# account went over quota and stayed dead for 36+ hours, taking
|
||||
# the canary red the entire time). claude-code template's
|
||||
# `minimax` provider routes ANTHROPIC_BASE_URL to
|
||||
# api.minimax.io/anthropic and reads MINIMAX_API_KEY at boot —
|
||||
# ~5-10x cheaper per token than gpt-4.1-mini AND on a separate
|
||||
# billing account, so OpenAI quota collapse no longer wedges the
|
||||
# canary. Mirrors the migration continuous-synth-e2e.yml made on
|
||||
# 2026-05-03 (#265) for the same reason. tests/e2e/test_staging_
|
||||
# full_saas.sh branches SECRETS_JSON on which key is present —
|
||||
# MiniMax wins when set.
|
||||
E2E_MINIMAX_API_KEY: ${{ secrets.MOLECULE_STAGING_MINIMAX_API_KEY }}
|
||||
# Direct-Anthropic alternative for operators who don't want to
|
||||
# set up a MiniMax account (priority below MiniMax — first
|
||||
# non-empty wins in test_staging_full_saas.sh's secrets-injection
|
||||
# block). See #2578 PR comment for the rationale.
|
||||
E2E_ANTHROPIC_API_KEY: ${{ secrets.MOLECULE_STAGING_ANTHROPIC_API_KEY }}
|
||||
# OpenAI fallback — kept wired so an operator-dispatched run with
|
||||
# E2E_RUNTIME=hermes overridden via workflow_dispatch can still
|
||||
# exercise the OpenAI path without re-editing the workflow.
|
||||
E2E_OPENAI_API_KEY: ${{ secrets.MOLECULE_STAGING_OPENAI_KEY }}
|
||||
E2E_MODE: canary
|
||||
E2E_RUNTIME: claude-code
|
||||
# Pin the canary to a specific MiniMax model rather than relying
|
||||
# on the per-runtime default (which could resolve to "sonnet" →
|
||||
# direct Anthropic and defeat the cost saving). M2.7-highspeed
|
||||
# is "Token Plan only" but cheap-per-token and fast.
|
||||
E2E_MODEL_SLUG: MiniMax-M2.7-highspeed
|
||||
E2E_RUN_ID: "canary-${{ github.run_id }}"
|
||||
# Debug-only: when an operator dispatches with keep_on_failure=true,
|
||||
# the canary script's E2E_KEEP_ORG=1 path skips teardown so the
|
||||
# tenant org + EC2 stay alive for SSM-based log capture. Cron runs
|
||||
# never set this (the input only exists on workflow_dispatch) so
|
||||
# unattended cron always tears down. See molecule-core#129
|
||||
# failure mode #1 — capturing the actual exception requires
|
||||
# docker logs from the live container.
|
||||
E2E_KEEP_ORG: ${{ github.event.inputs.keep_on_failure == 'true' && '1' || '0' }}
|
||||
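When keep_on_failure is used, the follow-up log capture the comment above refers to is a manual session; a rough sketch of that flow (the instance id and container name are placeholders, neither is emitted by this workflow):

    # open an SSM session to the kept tenant's EC2 instance (id taken from CP admin or the run log)
    aws ssm start-session --target i-0123456789abcdef0
    # on the instance: find the workspace container and pull its recent logs
    docker ps --format '{{.Names}}\t{{.Status}}'
    docker logs --tail 200 <workspace-container-name>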
|
||||
steps:
|
||||
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
|
||||
- name: Verify admin token present
|
||||
run: |
|
||||
if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then
|
||||
echo "::error::MOLECULE_STAGING_ADMIN_TOKEN not set"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
- name: Verify LLM key present
|
||||
run: |
|
||||
# Per-runtime key check — claude-code uses MiniMax; hermes /
|
||||
# langgraph (operator-dispatched only) use OpenAI. Hard-fail
|
||||
# rather than soft-skip per the lesson from synth E2E #2578:
|
||||
# an empty key silently falls through to the wrong
|
||||
# SECRETS_JSON branch and the canary fails 5 min later with
|
||||
# a confusing auth error instead of the clean "secret
|
||||
# missing" message at the top.
|
||||
case "${E2E_RUNTIME}" in
|
||||
claude-code)
|
||||
# Either MiniMax OR direct-Anthropic works — first
|
||||
# non-empty wins in the test script's secrets-injection
|
||||
# priority chain. Operators only need to set ONE of these
|
||||
# secrets; we don't force a choice between them.
|
||||
if [ -n "${E2E_MINIMAX_API_KEY:-}" ]; then
|
||||
required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY"
|
||||
required_secret_value="${E2E_MINIMAX_API_KEY}"
|
||||
elif [ -n "${E2E_ANTHROPIC_API_KEY:-}" ]; then
|
||||
required_secret_name="MOLECULE_STAGING_ANTHROPIC_API_KEY"
|
||||
required_secret_value="${E2E_ANTHROPIC_API_KEY}"
|
||||
else
|
||||
required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY or MOLECULE_STAGING_ANTHROPIC_API_KEY"
|
||||
required_secret_value=""
|
||||
fi
|
||||
;;
|
||||
langgraph|hermes)
|
||||
required_secret_name="MOLECULE_STAGING_OPENAI_KEY"
|
||||
required_secret_value="${E2E_OPENAI_API_KEY:-}"
|
||||
;;
|
||||
*)
|
||||
echo "::warning::Unknown E2E_RUNTIME='${E2E_RUNTIME}' — skipping LLM-key check"
|
||||
required_secret_name=""
|
||||
required_secret_value="present"
|
||||
;;
|
||||
esac
|
||||
if [ -n "$required_secret_name" ] && [ -z "$required_secret_value" ]; then
|
||||
echo "::error::${required_secret_name} secret not set for runtime=${E2E_RUNTIME} — A2A will fail at request time with 'No LLM provider configured'"
|
||||
exit 2
|
||||
fi
|
||||
echo "LLM key present ✓ (runtime=${E2E_RUNTIME}, key=${required_secret_name}, len=${#required_secret_value})"
|
||||
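The secrets-injection block these comments reference lives in test_staging_full_saas.sh and is not shown in this diff; as an illustration only, the first-non-empty-wins chain amounts to roughly the following (the SECRETS_JSON key names are an assumption; the env names mirror this job's env block):

    if   [ -n "${E2E_MINIMAX_API_KEY:-}" ];   then SECRETS_JSON=$(jq -nc --arg k "$E2E_MINIMAX_API_KEY"   '{MINIMAX_API_KEY: $k}')
    elif [ -n "${E2E_ANTHROPIC_API_KEY:-}" ]; then SECRETS_JSON=$(jq -nc --arg k "$E2E_ANTHROPIC_API_KEY" '{ANTHROPIC_API_KEY: $k}')
    elif [ -n "${E2E_OPENAI_API_KEY:-}" ];    then SECRETS_JSON=$(jq -nc --arg k "$E2E_OPENAI_API_KEY"    '{OPENAI_API_KEY: $k}')
    fi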
|
||||
- name: Canary run
|
||||
id: canary
|
||||
run: bash tests/e2e/test_staging_full_saas.sh
|
||||
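For local reproduction, the same smoke can be run from a workstation by exporting the env this job sets and invoking the script directly; a minimal sketch, assuming a staging admin token and one LLM key are at hand (the "..." values are placeholders):

    export MOLECULE_CP_URL=https://staging-api.moleculesai.app
    export MOLECULE_ADMIN_TOKEN=...            # staging admin token
    export E2E_MINIMAX_API_KEY=...             # or E2E_ANTHROPIC_API_KEY / E2E_OPENAI_API_KEY
    export E2E_MODE=canary E2E_RUNTIME=claude-code
    export E2E_MODEL_SLUG=MiniMax-M2.7-highspeed
    export E2E_RUN_ID="canary-local-$(date +%s)"
    bash tests/e2e/test_staging_full_saas.sh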
|
||||
# Alerting: open a sticky issue on the FIRST failure; comment on
|
||||
# subsequent failures; auto-close on next green. Comment-on-existing
|
||||
# de-duplicates so a single open issue accumulates the streak —
|
||||
# ops sees one issue with N comments rather than N issues.
|
||||
#
|
||||
# Why no consecutive-failures threshold (e.g., wait 3 runs before
|
||||
# filing): the prior threshold check used
|
||||
# `github.rest.actions.listWorkflowRuns()` which Gitea 1.22.6 does
|
||||
# not expose (returns 404). On Gitea Actions the threshold call
|
||||
# ALWAYS failed, breaking the entire alerting step and going days
|
||||
# silent on real regressions (38h+ chronic red on 2026-05-07/08
|
||||
# before this fix; tracked in molecule-core#129). Filing on first
|
||||
# failure is also better UX — we want to know about the first red,
|
||||
# not wait 90 min for it to "count." Real flakes get one issue +
|
||||
# a quick close-on-green; persistent reds accumulate comments.
|
||||
- name: Open issue on failure (Gitea API)
|
||||
if: failure()
|
||||
env:
|
||||
GITEA_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
REPO: ${{ github.repository }}
|
||||
SERVER_URL: ${{ env.GITHUB_SERVER_URL }}
|
||||
RUN_ID: ${{ github.run_id }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
API="${SERVER_URL%/}/api/v1"
|
||||
TITLE="Canary failing: staging SaaS smoke"
|
||||
RUN_URL="${SERVER_URL}/${REPO}/actions/runs/${RUN_ID}"
|
||||
|
||||
EXISTING=$(curl -fsS -H "Authorization: token $GITEA_TOKEN" \
|
||||
"${API}/repos/${REPO}/issues?state=open&type=issues&limit=50" \
|
||||
| jq -r --arg t "$TITLE" '.[] | select(.title==$t) | .number' | head -1)
|
||||
|
||||
if [ -n "$EXISTING" ]; then
|
||||
curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \
|
||||
"${API}/repos/${REPO}/issues/${EXISTING}/comments" \
|
||||
-d "$(jq -nc --arg run "$RUN_URL" '{body: ("Canary still failing. " + $run)}')" >/dev/null
|
||||
echo "Commented on existing issue #${EXISTING}"
|
||||
else
|
||||
NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ)
|
||||
BODY=$(jq -nc --arg t "$TITLE" --arg now "$NOW" --arg run "$RUN_URL" \
|
||||
'{title: $t, body: ("Canary run failed at " + $now + ".\n\nRun: " + $run + "\n\nThis issue auto-closes on the next green canary run. Consecutive failures add a comment here rather than a new issue.")}')
|
||||
curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \
|
||||
"${API}/repos/${REPO}/issues" -d "$BODY" >/dev/null
|
||||
echo "Opened canary failure issue (first red)"
|
||||
fi
|
||||
|
||||
- name: Auto-close canary issue on success (Gitea API)
|
||||
if: success()
|
||||
env:
|
||||
GITEA_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
REPO: ${{ github.repository }}
|
||||
SERVER_URL: ${{ env.GITHUB_SERVER_URL }}
|
||||
RUN_ID: ${{ github.run_id }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
API="${SERVER_URL%/}/api/v1"
|
||||
TITLE="Canary failing: staging SaaS smoke"
|
||||
|
||||
NUMS=$(curl -fsS -H "Authorization: token $GITEA_TOKEN" \
|
||||
"${API}/repos/${REPO}/issues?state=open&type=issues&limit=50" \
|
||||
| jq -r --arg t "$TITLE" '.[] | select(.title==$t) | .number')
|
||||
|
||||
NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ)
|
||||
for N in $NUMS; do
|
||||
curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \
|
||||
"${API}/repos/${REPO}/issues/${N}/comments" \
|
||||
-d "$(jq -nc --arg now "$NOW" '{body: ("Canary recovered at " + $now + ". Closing.")}')" >/dev/null
|
||||
curl -fsS -X PATCH -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \
|
||||
"${API}/repos/${REPO}/issues/${N}" -d '{"state":"closed"}' >/dev/null
|
||||
echo "Closed recovered canary issue #${N}"
|
||||
done
|
||||
|
||||
- name: Teardown safety net
|
||||
if: always()
|
||||
env:
|
||||
ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
run: |
|
||||
set +e
|
||||
# Slug prefix matches what test_staging_full_saas.sh emits
|
||||
# in canary mode:
|
||||
# SLUG="e2e-canary-$(date +%Y%m%d)-${RUN_ID_SUFFIX}"
|
||||
# Earlier this was `e2e-{today}-canary-` — that was the
|
||||
# full-mode pattern (date FIRST, mode SECOND); canary slugs
|
||||
# have mode FIRST, date SECOND. The mismatch silently
|
||||
# never matched, leaving every cancelled-canary EC2 alive
|
||||
# until the once-an-hour sweep eventually caught it
|
||||
# (incident 2026-04-26 21:03Z: 1h25m EC2 leak before manual
|
||||
# cleanup; same gap on three earlier cancellations today).
|
||||
orgs=$(curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" 2>/dev/null \
|
||||
| python3 -c "
|
||||
import json, sys, os, datetime
|
||||
run_id = os.environ.get('GITHUB_RUN_ID', '')
|
||||
d = json.load(sys.stdin)
|
||||
# Scope to slugs from THIS canary run when GITHUB_RUN_ID is
|
||||
# available; the canary workflow sets E2E_RUN_ID='canary-\${run_id}'
|
||||
# so the slug suffix is '-canary-\${run_id}-...'. Mirrors the
|
||||
# full-mode safety net's per-run scoping (e2e-staging-saas.yml)
|
||||
# added after the 2026-04-21 cross-run cleanup incident.
|
||||
# Sweep both today AND yesterday's UTC dates so a run that
|
||||
# crosses midnight still cleans up its own slug — see the
|
||||
# 2026-04-26→27 canvas-safety-net incident.
|
||||
today = datetime.date.today()
|
||||
yesterday = today - datetime.timedelta(days=1)
|
||||
dates = (today.strftime('%Y%m%d'), yesterday.strftime('%Y%m%d'))
|
||||
if run_id:
|
||||
prefixes = tuple(f'e2e-canary-{d}-canary-{run_id}' for d in dates)
|
||||
else:
|
||||
prefixes = tuple(f'e2e-canary-{d}-' for d in dates)
|
||||
candidates = [o['slug'] for o in d.get('orgs', [])
|
||||
if any(o.get('slug','').startswith(p) for p in prefixes)
|
||||
and o.get('status') not in ('purged',)]
|
||||
print('\n'.join(candidates))
|
||||
" 2>/dev/null)
|
||||
# Per-slug DELETE with HTTP-code verification. The previous
|
||||
# `... >/dev/null || true` swallowed every failure, so a 5xx
|
||||
# or timeout from CP looked identical to "successfully cleaned
|
||||
# up" and the tenant kept eating ~2 vCPU until the hourly
|
||||
# stale sweep caught it (up to 2h later). Now we capture the
|
||||
# response code and surface non-2xx as a workflow warning, so
|
||||
# the run page shows which slug leaked. We still don't `exit 1`
|
||||
# on cleanup failure — a single-canary cleanup miss shouldn't
|
||||
# fail-flag the canary itself when the actual smoke check
|
||||
# passed. The sweep-stale-e2e-orgs cron (now every 15 min,
|
||||
# 30-min threshold) is the safety net for whatever slips past.
|
||||
# See molecule-controlplane#420.
|
||||
leaks=()
|
||||
for slug in $orgs; do
|
||||
# Tempfile-routed -w + set +e/-e prevents curl-exit-code
|
||||
# pollution of the captured status (lint-curl-status-capture.yml).
|
||||
set +e
|
||||
curl -sS -o /tmp/canary-cleanup.out -w "%{http_code}" \
|
||||
-X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "{\"confirm\":\"$slug\"}" >/tmp/canary-cleanup.code
|
||||
set -e
|
||||
code=$(cat /tmp/canary-cleanup.code 2>/dev/null || echo "000")
|
||||
if [ "$code" = "200" ] || [ "$code" = "204" ]; then
|
||||
echo "[teardown] deleted $slug (HTTP $code)"
|
||||
else
|
||||
echo "::warning::canary teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/canary-cleanup.out 2>/dev/null)"
|
||||
leaks+=("$slug")
|
||||
fi
|
||||
done
|
||||
if [ ${#leaks[@]} -gt 0 ]; then
|
||||
echo "::warning::canary teardown left ${#leaks[@]} leak(s): ${leaks[*]}"
|
||||
fi
|
||||
exit 0
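When a teardown warning does fire, a quick way to see what is still alive is to query the same CP admin endpoint by hand; a sketch, reusing the response shape the python filter above assumes (an orgs array with slug and status fields):

    curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \
      -H "Authorization: Bearer $MOLECULE_ADMIN_TOKEN" \
      | jq -r '.orgs[] | select(.slug|startswith("e2e-canary-")) | select(.status!="purged") | .slug'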

.gitea/workflows/canary-verify.yml (new file, 276 lines added)
@@ -0,0 +1,276 @@
name: canary-verify
|
||||
|
||||
# Ported from .github/workflows/canary-verify.yml on 2026-05-11 per RFC
|
||||
# internal#219 §1 sweep. Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
|
||||
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
|
||||
# - Dropped `merge_group:` (no Gitea merge queue).
|
||||
# - Dropped `environment:` blocks (Gitea has no environments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL pinned per
|
||||
# feedback_act_runner_github_server_url.
|
||||
# - `continue-on-error: true` on each job (RFC §1 contract).
|
||||
# - **Gitea workflow_run trigger limitation**: Gitea 1.22.6's support
|
||||
# for the `workflow_run` event is partial. If this never fires on a
|
||||
# real publish-workspace-server-image completion, the follow-up
|
||||
# triage PR should replace the trigger with a push-with-paths-filter
|
||||
# on the same publish workflow's path (i.e. `.gitea/workflows/publish-workspace-server-image.yml`).
|
||||
#
|
||||
|
||||
# Runs the canary smoke suite against the staging canary tenant fleet
|
||||
# after a new :staging-<sha> image lands in ECR. On green, calls the
|
||||
# CP redeploy-fleet endpoint to promote :staging-<sha> → :latest so
|
||||
# the prod tenant fleet's 5-minute auto-updater picks up the verified
|
||||
# digest. On red, :latest stays on the prior known-good digest and
|
||||
# prod is untouched.
|
||||
#
|
||||
# Registry note (2026-05-10): This workflow previously used GHCR
|
||||
# (ghcr.io/molecule-ai/platform-tenant) — that registry was retired
|
||||
# during the 2026-05-06 Gitea suspension migration when publish-
|
||||
# workspace-server-image.yml switched to the operator's ECR org
|
||||
# (153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/
|
||||
# platform-tenant). The GHCR → ECR migration was never applied to
|
||||
# this file, so canary-verify was silently smoke-testing the stale
|
||||
# GHCR image while the actual staging/prod tenants ran the ECR image.
|
||||
# Result: smoke tests could not catch a broken ECR build. Fix:
|
||||
# - Wait step: reads SHA from running canary /health (tenant-
|
||||
# agnostic, works regardless of registry).
|
||||
# - Promote step: calls CP redeploy-fleet endpoint with target_tag=
|
||||
# staging-<sha>, same mechanism as redeploy-tenants-on-main.yml.
|
||||
# No longer attempts GHCR crane ops.
|
||||
#
|
||||
# Dependencies:
|
||||
# - publish-workspace-server-image.yml publishes :staging-<sha>
|
||||
# to ECR on staging and main merges.
|
||||
# - Canary tenants are configured to pull :staging-<sha> from ECR
|
||||
# (TENANT_IMAGE env set to the ECR :staging-<sha> tag).
|
||||
# - Repo secrets CANARY_TENANT_URLS / CANARY_ADMIN_TOKENS /
|
||||
# CANARY_CP_SHARED_SECRET are populated.
|
||||
|
||||
on:
|
||||
workflow_run:
|
||||
workflows: ["publish-workspace-server-image"]
|
||||
types: [completed]
|
||||
permissions:
|
||||
contents: read
|
||||
packages: write
|
||||
actions: read
|
||||
|
||||
env:
|
||||
# ECR registry (post-2026-05-06 SSOT for tenant images).
|
||||
# publish-workspace-server-image.yml pushes here.
|
||||
IMAGE_NAME: 153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/platform
|
||||
TENANT_IMAGE_NAME: 153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/platform-tenant
|
||||
# CP endpoint for redeploy-fleet (used in promote step below).
|
||||
CP_URL: ${{ vars.CP_URL || 'https://staging-api.moleculesai.app' }}
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
jobs:
|
||||
canary-smoke:
|
||||
# Skip when the upstream workflow failed — no image to test against.
|
||||
# workflow_dispatch trigger dropped in this Gitea port; only the
|
||||
# workflow_run path remains.
|
||||
if: ${{ github.event.workflow_run.conclusion == 'success' }}
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
outputs:
|
||||
sha: ${{ steps.compute.outputs.sha }}
|
||||
smoke_ran: ${{ steps.smoke.outputs.ran }}
|
||||
steps:
|
||||
- name: Checkout
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
|
||||
- name: Compute sha
|
||||
id: compute
|
||||
run: echo "sha=${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"
|
||||
|
||||
- name: Wait for canary tenants to pick up :staging-<sha>
|
||||
# Poll canary health endpoints every 30s for up to 7 min instead
|
||||
# of a fixed 6-min sleep. Exits as soon as ALL canaries report
|
||||
# the new SHA (~2-3 min typical vs 6 min fixed). Falls back to
|
||||
# proceeding after 7 min even if not all canaries responded —
|
||||
# the smoke suite will catch any that didn't update.
|
||||
#
|
||||
# NOTE: The SHA is read from the running tenant's /health response,
|
||||
# NOT from a registry lookup. This is registry-agnostic and works
|
||||
# regardless of whether the tenant pulls from ECR, GHCR, or any
|
||||
# other registry — the canary is telling us what it's actually
|
||||
# running, which is the ground truth for smoke testing.
|
||||
env:
|
||||
CANARY_TENANT_URLS: ${{ secrets.CANARY_TENANT_URLS }}
|
||||
EXPECTED_SHA: ${{ steps.compute.outputs.sha }}
|
||||
run: |
|
||||
if [ -z "$CANARY_TENANT_URLS" ]; then
|
||||
echo "No canary URLs configured — falling back to 60s wait"
|
||||
sleep 60
|
||||
exit 0
|
||||
fi
|
||||
IFS=',' read -ra URLS <<< "$CANARY_TENANT_URLS"
|
||||
MAX_WAIT=420 # 7 minutes
|
||||
INTERVAL=30
|
||||
ELAPSED=0
|
||||
while [ $ELAPSED -lt $MAX_WAIT ]; do
|
||||
ALL_READY=true
|
||||
for url in "${URLS[@]}"; do
|
||||
HEALTH=$(curl -s --max-time 5 "${url}/health" 2>/dev/null || echo "{}")
|
||||
SHA=$(echo "$HEALTH" | grep -o "\"sha\":\"[^\"]*\"" | head -1 | cut -d'"' -f4)
|
||||
if [ "$SHA" != "$EXPECTED_SHA" ]; then
|
||||
ALL_READY=false
|
||||
break
|
||||
fi
|
||||
done
|
||||
if $ALL_READY; then
|
||||
echo "All canaries running staging-${EXPECTED_SHA} after ${ELAPSED}s"
|
||||
exit 0
|
||||
fi
|
||||
echo "Waiting for canaries... (${ELAPSED}s / ${MAX_WAIT}s)"
|
||||
sleep $INTERVAL
|
||||
ELAPSED=$((ELAPSED + INTERVAL))
|
||||
done
|
||||
echo "Timeout after ${MAX_WAIT}s — proceeding anyway (smoke suite will validate)"
|
||||
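The same ground truth can be checked by hand against a single canary while debugging a stuck rollout; a sketch, assuming the tenant's /health response carries the "sha" field this loop greps for (the hostname is a placeholder):

    curl -s --max-time 5 "https://<canary-tenant-host>/health" | jq -r '.sha'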
|
||||
- name: Run canary smoke suite
|
||||
id: smoke
|
||||
# Graceful-skip when no canary fleet is configured (Phase 2 not yet
|
||||
# stood up — see molecule-controlplane/docs/canary-tenants.md).
|
||||
# Sets `ran=false` on skip so promote-to-latest stays off (we don't
|
||||
# want every main merge auto-promoting without gating). Manual
|
||||
# promote-latest.yml is the release gate while canary is absent.
|
||||
# Once the fleet is real: delete the early-exit branch.
|
||||
env:
|
||||
CANARY_TENANT_URLS: ${{ secrets.CANARY_TENANT_URLS }}
|
||||
CANARY_ADMIN_TOKENS: ${{ secrets.CANARY_ADMIN_TOKENS }}
|
||||
CANARY_CP_BASE_URL: https://staging-api.moleculesai.app
|
||||
CANARY_CP_SHARED_SECRET: ${{ secrets.CANARY_CP_SHARED_SECRET }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
if [ -z "${CANARY_TENANT_URLS:-}" ] \
|
||||
|| [ -z "${CANARY_ADMIN_TOKENS:-}" ] \
|
||||
|| [ -z "${CANARY_CP_SHARED_SECRET:-}" ]; then
|
||||
{
|
||||
echo "## ⚠️ canary-verify skipped"
|
||||
echo
|
||||
echo "One or more canary secrets are unset (\`CANARY_TENANT_URLS\`, \`CANARY_ADMIN_TOKENS\`, \`CANARY_CP_SHARED_SECRET\`)."
|
||||
echo "Phase 2 canary fleet has not been stood up yet —"
|
||||
echo "see [canary-tenants.md](https://git.moleculesai.app/molecule-ai/molecule-controlplane/blob/main/docs/canary-tenants.md)."
|
||||
echo
|
||||
echo "**Skipped — promote-to-latest will NOT auto-fire.** Dispatch \`promote-latest.yml\` manually when ready."
|
||||
} >> "$GITHUB_STEP_SUMMARY"
|
||||
echo "ran=false" >> "$GITHUB_OUTPUT"
|
||||
echo "::notice::canary-verify: skipped — no canary fleet configured"
|
||||
exit 0
|
||||
fi
|
||||
bash scripts/canary-smoke.sh
|
||||
echo "ran=true" >> "$GITHUB_OUTPUT"
|
||||
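For debugging outside CI, the smoke suite can be pointed at the fleet directly; a sketch using the same env names this step wires up (all values below are placeholders):

    export CANARY_TENANT_URLS="https://canary-1.example,https://canary-2.example"
    export CANARY_ADMIN_TOKENS=...
    export CANARY_CP_BASE_URL=https://staging-api.moleculesai.app
    export CANARY_CP_SHARED_SECRET=...
    bash scripts/canary-smoke.sh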
|
||||
- name: Summary on failure
|
||||
if: ${{ failure() }}
|
||||
run: |
|
||||
{
|
||||
echo "## Canary smoke FAILED"
|
||||
echo
|
||||
echo "Canary tenants rejected image \`staging-${{ steps.compute.outputs.sha }}\`."
|
||||
echo ":latest stays pinned to the prior good digest — prod is untouched."
|
||||
echo
|
||||
echo "Fix forward and merge again, or investigate the specific failed"
|
||||
echo "assertions in the canary-smoke step log above."
|
||||
} >> "$GITHUB_STEP_SUMMARY"
|
||||
|
||||
promote-to-latest:
|
||||
# On green, calls the CP redeploy-fleet endpoint with target_tag=
|
||||
# staging-<sha> to promote the verified ECR image. This is the same
|
||||
# mechanism as redeploy-tenants-on-main.yml — no GHCR crane ops.
|
||||
#
|
||||
# Pre-fix history: the old GHCR promote step used `crane tag` against
|
||||
# ghcr.io/molecule-ai/platform-tenant, but publish-workspace-server-
|
||||
# image.yml had already migrated to ECR on 2026-05-07 (commit
|
||||
# 10e510f5). The GHCR tags were never updated, so this step was
|
||||
# silently promoting a stale GHCR image while actual prod tenants
|
||||
# pulled from ECR. Canary smoke tests were GHCR-targeted and could
|
||||
# not catch a broken ECR build.
|
||||
needs: canary-smoke
|
||||
if: ${{ needs.canary-smoke.result == 'success' && needs.canary-smoke.outputs.smoke_ran == 'true' }}
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
env:
|
||||
SHA: ${{ needs.canary-smoke.outputs.sha }}
|
||||
CP_URL: ${{ vars.CP_URL || 'https://staging-api.moleculesai.app' }}
|
||||
# CP_ADMIN_API_TOKEN gates write access to the redeploy endpoint.
|
||||
# Stored at the repo level so all workflows pick it up automatically.
|
||||
CP_ADMIN_API_TOKEN: ${{ secrets.CP_ADMIN_API_TOKEN }}
|
||||
# canary_slug pin: deploy the verified :staging-<sha> to the canary
|
||||
# first (soak 120s), then fan out to the rest of the fleet.
|
||||
CANARY_SLUG: ${{ vars.CANARY_PROMOTE_SLUG || '' }}
|
||||
SOAK_SECONDS: ${{ vars.CANARY_PROMOTE_SOAK || '120' }}
|
||||
BATCH_SIZE: ${{ vars.CANARY_PROMOTE_BATCH || '3' }}
|
||||
steps:
|
||||
- name: Check CP credentials
|
||||
run: |
|
||||
if [ -z "${CP_ADMIN_API_TOKEN:-}" ]; then
|
||||
echo "::error::CP_ADMIN_API_TOKEN secret is not set — promote step cannot call redeploy-fleet."
|
||||
echo "::error::Set it at: repo Settings → Actions → Variables and Secrets → New Secret."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Promote verified ECR image to :latest
|
||||
run: |
|
||||
set -euo pipefail
|
||||
|
||||
TARGET_TAG="staging-${SHA}"
|
||||
BODY=$(jq -nc \
|
||||
--arg tag "$TARGET_TAG" \
|
||||
--argjson soak "${SOAK_SECONDS:-120}" \
|
||||
--argjson batch "${BATCH_SIZE:-3}" \
|
||||
--argjson dry false \
|
||||
'{
|
||||
target_tag: $tag,
|
||||
soak_seconds: $soak,
|
||||
batch_size: $batch,
|
||||
dry_run: $dry
|
||||
}')
|
||||
|
||||
if [ -n "${CANARY_SLUG:-}" ]; then
|
||||
BODY=$(jq '. * {canary_slug: $slug}' --arg slug "$CANARY_SLUG" <<<"$BODY")
|
||||
fi
|
||||
|
||||
echo "Calling: POST $CP_URL/cp/admin/tenants/redeploy-fleet"
|
||||
echo " target_tag: $TARGET_TAG"
|
||||
echo " body: $BODY"
|
||||
|
||||
HTTP_RESPONSE=$(mktemp)
|
||||
HTTP_CODE_FILE=$(mktemp)
|
||||
set +e
|
||||
curl -sS -o "$HTTP_RESPONSE" -w '%{http_code}' \
|
||||
-m 1200 \
|
||||
-H "Authorization: Bearer $CP_ADMIN_API_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-X POST "$CP_URL/cp/admin/tenants/redeploy-fleet" \
|
||||
-d "$BODY" >"$HTTP_CODE_FILE"
|
||||
CURL_EXIT=$?
|
||||
set -e
|
||||
|
||||
HTTP_CODE=$(cat "$HTTP_CODE_FILE" 2>/dev/null || echo "000")
|
||||
[ -z "$HTTP_CODE" ] && HTTP_CODE="000"
|
||||
|
||||
echo "HTTP $HTTP_CODE (curl exit $CURL_EXIT)"
|
||||
cat "$HTTP_RESPONSE" | jq . || cat "$HTTP_RESPONSE"
|
||||
|
||||
if [ "$HTTP_CODE" -ge 400 ]; then
|
||||
echo "::error::CP redeploy-fleet returned HTTP $HTTP_CODE — refusing to proceed."
|
||||
exit 1
|
||||
fi
|
||||
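Operators can exercise the same endpoint with dry_run enabled before trusting the automated promote; a hand-run sketch against the staging CP, assuming CP_ADMIN_API_TOKEN is exported locally (the tag is a placeholder):

    curl -sS -X POST "https://staging-api.moleculesai.app/cp/admin/tenants/redeploy-fleet" \
      -H "Authorization: Bearer $CP_ADMIN_API_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"target_tag":"staging-<sha>","soak_seconds":120,"batch_size":3,"dry_run":true}' | jq .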
|
||||
- name: Summary
|
||||
run: |
|
||||
{
|
||||
echo "## Canary verified — :latest promoted via CP redeploy-fleet"
|
||||
echo ""
|
||||
echo "- **Target tag:** \`staging-${{ needs.canary-smoke.outputs.sha }}\`"
|
||||
echo "- **Registry:** ECR (\`${TENANT_IMAGE_NAME}\`)"
|
||||
echo "- **Canary slug:** \`${CANARY_SLUG:-<none>}\` (soak ${SOAK_SECONDS}s)"
|
||||
echo "- **Batch size:** ${BATCH_SIZE:-3}"
|
||||
echo ""
|
||||
echo "CP redeploy-fleet is rolling out the verified image across the prod fleet."
|
||||
echo "The fleet's 5-minute health-check loop will pick up the update automatically."
|
||||
} >> "$GITHUB_STEP_SUMMARY"

.gitea/workflows/continuous-synth-e2e.yml (new file, 255 lines added)
@@ -0,0 +1,255 @@
name: Continuous synthetic E2E (staging)
|
||||
|
||||
# Ported from .github/workflows/continuous-synth-e2e.yml on 2026-05-11 per RFC
|
||||
# internal#219 §1 sweep. Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
|
||||
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
|
||||
# - Dropped `merge_group:` (no Gitea merge queue).
|
||||
# - Dropped `environment:` blocks (Gitea has no environments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL pinned per
|
||||
# feedback_act_runner_github_server_url.
|
||||
# - `continue-on-error: true` on each job (RFC §1 contract).
|
||||
#
|
||||
|
||||
# Hard gate (#2342): cron-driven full-lifecycle E2E that catches
|
||||
# regressions visible only at runtime — schema drift, deployment-pipeline
|
||||
# gaps, vendor outages, env-var rotations, DNS / CF / Railway side-effects.
|
||||
#
|
||||
# Why this gate exists:
|
||||
# PR-time CI catches code-level regressions but not deployment-time or
|
||||
# integration-time ones. Today's empirical data:
|
||||
# • #2345 (A2A v0.2 silent drop) — passed all unit tests, broke at
|
||||
# JSON-RPC parse layer between sender and receiver. Visible only
|
||||
# to a sender exercising the full path.
|
||||
# • RFC #2312 chat upload — landed on staging-branch but never
|
||||
# reached staging tenants because publish-workspace-server-image
|
||||
# was main-only. Caught by manual dogfooding hours after deploy.
|
||||
# Both would have surfaced within 15-20 min of regression if a
|
||||
# continuous synth-E2E had been running.
|
||||
#
|
||||
# Cadence: every 10 min (6x/hour; see schedule note). The script is conservatively
|
||||
# bounded at 10 min wall-clock; even on degraded staging it should
|
||||
# finish before the next firing. cron-overlap is guarded by the
|
||||
# concurrency group below.
|
||||
#
|
||||
# Cost: ~3 runs/hour × 5-10 min × $0.008/min GHA = ~$0.50-$1/day.
|
||||
# Plus a fresh tenant provisioned + torn down each run (Railway +
|
||||
# AWS pennies). Negligible.
|
||||
#
|
||||
# Failure handling: when the run fails, the workflow exits non-zero
|
||||
# and GitHub's standard email/notification path fires. Operators
|
||||
# can subscribe to this workflow's failure channel for paging-grade
|
||||
# alerting.
|
||||
|
||||
on:
|
||||
schedule:
|
||||
# Every 10 minutes, on :02 :12 :22 :32 :42 :52. Three constraints:
|
||||
# 1. Stay off the top-of-hour. GitHub Actions scheduler drops
|
||||
# :00 firings under high load (own docs:
|
||||
# https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#schedule).
|
||||
# Prior history: cron was '0,20,40' (2026-05-02) — only :00
|
||||
# ever survived. Bumped to '10,30,50' (2026-05-03) on the
|
||||
# theory that further-from-:00 wins. Empirically 2026-05-04
|
||||
# that ALSO dropped to ~60 min effective cadence (only ~1
|
||||
# schedule fire per hour — see molecule-core#2726). Detection
|
||||
# latency was claimed 20 min, actual 60 min.
|
||||
# 2. Avoid colliding with the existing :15 sweep-cf-orphans
|
||||
# and :45 sweep-cf-tunnels — both hit the CF API and we
|
||||
# don't want to fight for rate-limit tokens.
|
||||
# 3. Avoid the :30 heavy slot (canary-staging /30, sweep-aws-
|
||||
# secrets, sweep-stale-e2e-orgs every :15) — multiple
|
||||
# overlapping cron registrations on the same minute is part
|
||||
# of what GH drops under load.
|
||||
# Solution: bump fires-per-hour 3 → 6 AND keep all slots in clean
|
||||
# lanes (1-3 min away from any other cron). Even with empirically-
|
||||
# observed ~67% GH drop ratio, 6 attempts/hour yields ~2 effective
|
||||
# fires = ~30 min cadence; closer to the 20-min target than the
|
||||
# current shape and provides a real degradation alarm if drops
|
||||
# get worse.
|
||||
- cron: '2,12,22,32,42,52 * * * *'
|
||||
permissions:
|
||||
contents: read
|
||||
# No issue-write here — failures surface as red runs in the workflow
|
||||
# history. If you want auto-issue-on-fail, add a follow-up step that
|
||||
# uses gh issue create gated on `if: failure()`. Keeping the surface
|
||||
# minimal until that's actually wanted.
|
||||
|
||||
# Serialize so two firings can never overlap. Cron fires every 10 min
|
||||
# but scripts conservatively bounded at 10 min — overlap shouldn't
|
||||
# happen in steady state, but if a run hangs we don't want N more
|
||||
# stacking up.
|
||||
concurrency:
|
||||
group: continuous-synth-e2e
|
||||
cancel-in-progress: false
|
||||
|
||||
env:
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
jobs:
|
||||
synth:
|
||||
name: Synthetic E2E against staging
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
# Bumped from 12 → 20 (2026-05-04). Tenant user-data install phase
|
||||
# (apt-get update + install docker.io/jq/awscli/caddy + snap install
|
||||
# ssm-agent) runs from raw Ubuntu on every boot — none of it is
|
||||
# pre-baked into the tenant AMI. Empirical fetch_secrets/ok timing
|
||||
# across today's canaries: 51s → 82s → 143s → 625s. apt-mirror tail
|
||||
# latency drives the boot-to-fetch_secrets phase from ~1min to >10min.
|
||||
# A 12min budget leaves only ~2min for the workspace (which needs
|
||||
# ~3.5min for claude-code cold boot) on slow-apt days, blowing the
|
||||
# budget. 20min absorbs the worst tenant tail so the workspace probe
|
||||
# gets the full ~7min it needs even on a slow apt day. Real fix:
|
||||
# pre-bake caddy + ssm-agent into the tenant AMI (controlplane#TBD).
|
||||
timeout-minutes: 20
|
||||
env:
|
||||
# claude-code default: cold-start ~5 min (comparable to langgraph),
|
||||
# but uses MiniMax-M2.7-highspeed via the template's third-party-
|
||||
# Anthropic-compat path (workspace-configs-templates/claude-code-
|
||||
# default/config.yaml:64-69). MiniMax is ~5-10x cheaper than
|
||||
# gpt-4.1-mini per token AND avoids the recurring OpenAI quota-
|
||||
# exhaustion class that took the canary down 2026-05-03 (#265).
|
||||
# Operators can pick langgraph / hermes via workflow_dispatch
|
||||
# when they specifically need to exercise the OpenAI or SDK-
|
||||
# native paths.
|
||||
E2E_RUNTIME: ${{ github.event.inputs.runtime || 'claude-code' }}
|
||||
# Pin the canary to a specific MiniMax model rather than relying
|
||||
# on the per-runtime default ("sonnet" → routes to direct
|
||||
# Anthropic, defeats the cost saving). Operators can override
|
||||
# via workflow_dispatch by setting a different E2E_MODEL_SLUG
|
||||
# input if they need to exercise a specific model. M2.7-highspeed
|
||||
# is "Token Plan only" but cheap-per-token and fast.
|
||||
E2E_MODEL_SLUG: ${{ github.event.inputs.model_slug || 'MiniMax-M2.7-highspeed' }}
|
||||
# Bound to 10 min so a stuck provision fails the run instead of
|
||||
# holding up the next cron firing. 15-min default in the script
|
||||
# is for the on-PR full lifecycle where we have more headroom.
|
||||
E2E_PROVISION_TIMEOUT_SECS: '600'
|
||||
# Slug suffix — namespaced "synth-" so these runs are
|
||||
# distinguishable from PR-driven runs in CP admin.
|
||||
E2E_RUN_ID: synth-${{ github.run_id }}
|
||||
# Forced false for cron; respected for manual dispatch
|
||||
E2E_KEEP_ORG: ${{ github.event.inputs.keep_org == 'true' && '1' || '' }}
|
||||
MOLECULE_CP_URL: ${{ vars.STAGING_CP_URL || 'https://staging-api.moleculesai.app' }}
|
||||
MOLECULE_ADMIN_TOKEN: ${{ secrets.CP_STAGING_ADMIN_API_TOKEN }}
|
||||
# MiniMax key is the canary's PRIMARY auth path. claude-code
|
||||
# template's `minimax` provider routes ANTHROPIC_BASE_URL to
|
||||
# api.minimax.io/anthropic and reads MINIMAX_API_KEY at boot.
|
||||
# tests/e2e/test_staging_full_saas.sh branches SECRETS_JSON on
|
||||
# which key is present — MiniMax wins when set.
|
||||
E2E_MINIMAX_API_KEY: ${{ secrets.MOLECULE_STAGING_MINIMAX_API_KEY }}
|
||||
# Direct-Anthropic alternative for operators who don't want to
|
||||
# set up a MiniMax account (priority below MiniMax — first
|
||||
# non-empty wins in test_staging_full_saas.sh's secrets-injection
|
||||
# block). See #2578 PR comment for the rationale.
|
||||
E2E_ANTHROPIC_API_KEY: ${{ secrets.MOLECULE_STAGING_ANTHROPIC_API_KEY }}
|
||||
# OpenAI fallback — kept wired so operators can dispatch with
|
||||
# E2E_RUNTIME=langgraph or =hermes and still have a working
|
||||
# canary path. The script picks the right blob shape based on
|
||||
# which key is non-empty.
|
||||
E2E_OPENAI_API_KEY: ${{ secrets.MOLECULE_STAGING_OPENAI_KEY }}
|
||||
steps:
|
||||
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
|
||||
- name: Verify required secrets present
|
||||
run: |
|
||||
# Hard-fail on missing secret REGARDLESS of trigger. Previously
|
||||
# this step soft-skipped on workflow_dispatch via `exit 0`, but
|
||||
# `exit 0` only ends the STEP — subsequent steps still ran with
|
||||
# the empty secret, the synth script fell through to the wrong
|
||||
# SECRETS_JSON branch, and the canary failed 5 min later with a
|
||||
# confusing "Agent error (Exception)" instead of the clean
|
||||
# "secret missing" message at the top. Caught 2026-05-04 by
|
||||
# dispatched run 25296530706: claude-code + missing MINIMAX
|
||||
# silently used OpenAI keys but kept model=MiniMax-M2.7, then
|
||||
# the workspace 401'd against MiniMax once it tried to call.
|
||||
# Fix: exit 1 in both cron and dispatch paths. Operators who
|
||||
# want to verify a YAML change without setting up the secret
|
||||
# can read the verify-secrets step's stderr — the failure is
|
||||
# itself the verification signal.
|
||||
if [ -z "${MOLECULE_ADMIN_TOKEN:-}" ]; then
|
||||
echo "::error::CP_STAGING_ADMIN_API_TOKEN secret missing — synth E2E cannot run"
|
||||
echo "::error::Set it at Settings → Secrets and Variables → Actions; pull from staging-CP's CP_ADMIN_API_TOKEN env in Railway."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# LLM-key requirement is per-runtime: claude-code accepts
|
||||
# EITHER MiniMax OR direct-Anthropic (whichever is set first),
|
||||
# langgraph + hermes use OpenAI (MOLECULE_STAGING_OPENAI_KEY).
|
||||
case "${E2E_RUNTIME}" in
|
||||
claude-code)
|
||||
if [ -n "${E2E_MINIMAX_API_KEY:-}" ]; then
|
||||
required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY"
|
||||
required_secret_value="${E2E_MINIMAX_API_KEY}"
|
||||
elif [ -n "${E2E_ANTHROPIC_API_KEY:-}" ]; then
|
||||
required_secret_name="MOLECULE_STAGING_ANTHROPIC_API_KEY"
|
||||
required_secret_value="${E2E_ANTHROPIC_API_KEY}"
|
||||
else
|
||||
required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY or MOLECULE_STAGING_ANTHROPIC_API_KEY"
|
||||
required_secret_value=""
|
||||
fi
|
||||
;;
|
||||
langgraph|hermes)
|
||||
required_secret_name="MOLECULE_STAGING_OPENAI_KEY"
|
||||
required_secret_value="${E2E_OPENAI_API_KEY:-}"
|
||||
;;
|
||||
*)
|
||||
echo "::warning::Unknown E2E_RUNTIME='${E2E_RUNTIME}' — skipping LLM-key check"
|
||||
required_secret_name=""
|
||||
required_secret_value="present"
|
||||
;;
|
||||
esac
|
||||
if [ -n "$required_secret_name" ] && [ -z "$required_secret_value" ]; then
|
||||
echo "::error::${required_secret_name} secret missing — runtime=${E2E_RUNTIME} cannot authenticate against its LLM provider"
|
||||
echo "::error::Set it at Settings → Secrets and Variables → Actions, OR dispatch with a different runtime"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Install required tools
|
||||
run: |
|
||||
# The script depends on jq + curl (already on ubuntu-latest)
|
||||
# and python3 (likewise). Verify they're all present so we
|
||||
# fail fast on a runner image regression rather than mid-script.
|
||||
for cmd in jq curl python3; do
|
||||
command -v "$cmd" >/dev/null 2>&1 || {
|
||||
echo "::error::required tool '$cmd' not on PATH — runner image regression?"
|
||||
exit 1
|
||||
}
|
||||
done
|
||||
|
||||
- name: Run synthetic E2E
|
||||
# The script handles its own teardown via EXIT trap; even on
|
||||
# failure (timeout, assertion), the org is deprovisioned and
|
||||
# leaks are reported. Exit code propagates from the script.
|
||||
run: |
|
||||
bash tests/e2e/test_staging_full_saas.sh
|
||||
|
||||
- name: Failure summary
|
||||
# Runs only on failure. Adds a job summary so the workflow run
|
||||
# page shows a quick "what happened" instead of forcing readers
|
||||
# to scroll through script output.
|
||||
if: failure()
|
||||
run: |
|
||||
{
|
||||
echo "## Continuous synth E2E failed"
|
||||
echo ""
|
||||
echo "**Run ID:** ${{ github.run_id }}"
|
||||
echo "**Trigger:** ${{ github.event_name }}"
|
||||
echo "**Runtime:** ${E2E_RUNTIME}"
|
||||
echo "**Slug:** synth-${{ github.run_id }}"
|
||||
echo ""
|
||||
echo "### What this means"
|
||||
echo ""
|
||||
echo "Staging just regressed on a path that previously worked. Likely classes:"
|
||||
echo "- Schema mismatch between sender and receiver (#2345 class)"
|
||||
echo "- Deployment-pipeline gap (RFC #2312 / staging-tenant-image-stale class)"
|
||||
echo "- Vendor outage (Cloudflare, Railway, AWS, GHCR)"
|
||||
echo "- Staging-CP env var rotation"
|
||||
echo ""
|
||||
echo "### Next steps"
|
||||
echo ""
|
||||
echo "1. Check the script output above for the assertion that failed"
|
||||
echo "2. If it's a vendor outage, no action needed — next firing in ~20 min"
|
||||
echo "3. If it's a code regression, find the causing PR via \`git log\` against last green run and revert/fix"
|
||||
echo "4. Keep an eye on the next 1-2 firings — flake vs persistent fail differs in priority"
|
||||
} >> "$GITHUB_STEP_SUMMARY"

.gitea/workflows/e2e-api.yml (new file, 333 lines added)
@@ -0,0 +1,333 @@
name: E2E API Smoke Test
|
||||
|
||||
# Ported from .github/workflows/e2e-api.yml on 2026-05-11 per RFC
|
||||
# internal#219 §1 sweep. Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
|
||||
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
|
||||
# - Dropped `merge_group:` (no Gitea merge queue).
|
||||
# - Dropped `environment:` blocks (Gitea has no environments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL pinned per
|
||||
# feedback_act_runner_github_server_url.
|
||||
# - `continue-on-error: true` on each job (RFC §1 contract).
|
||||
#
|
||||
# Extracted from ci.yml so workflow-level concurrency can protect this job
|
||||
# from run-level cancellation (issue #458).
|
||||
#
|
||||
# Trigger model (revised 2026-04-29):
|
||||
#
|
||||
# Always FIRES on push/pull_request to staging+main. Real work is gated
|
||||
# per-step on `needs.detect-changes.outputs.api` — when paths under
|
||||
# `workspace-server/`, `tests/e2e/`, or this workflow file haven't
|
||||
# changed, the no-op step alone runs and emits SUCCESS for the
|
||||
# `E2E API Smoke Test` check, satisfying branch protection without
|
||||
# spending CI cycles. See the in-job comment on the `e2e-api` job for
|
||||
# why this is one job (not two-jobs-sharing-name) and the 2026-04-29
|
||||
# PR #2264 incident that drove the consolidation.
|
||||
#
|
||||
# Parallel-safety (Class B Hongming-owned CICD red sweep, 2026-05-08)
|
||||
# -------------------------------------------------------------------
|
||||
# Same substrate hazard as PR #98 (handlers-postgres-integration). Our
|
||||
# Gitea act_runner runs with `container.network: host` (operator host
|
||||
# `/opt/molecule/runners/config.yaml`), which means:
|
||||
#
|
||||
# * Two concurrent runs both try to bind their `-p 15432:5432` /
|
||||
# `-p 16379:6379` host ports — the second postgres/redis FATALs
|
||||
# with `Address in use` and `docker run` returns exit 125 with
|
||||
# `Conflict. The container name "/molecule-ci-postgres" is already
|
||||
# in use by container ...`. Verified in run a7/2727 on 2026-05-07.
|
||||
# * The fixed container names `molecule-ci-postgres` / `-redis` (the
|
||||
# pre-fix shape) collide on name AS WELL AS port. The cleanup-with-
|
||||
# `docker rm -f` at the start of the second job KILLS the first
|
||||
# job's still-running postgres/redis.
|
||||
#
|
||||
# Fix shape (mirrors PR #98's bridge-net pattern, adapted because
|
||||
# platform-server is a Go binary on the host, not a containerised
|
||||
# step):
|
||||
#
|
||||
# 1. Unique container names per run:
|
||||
# pg-e2e-api-${RUN_ID}-${RUN_ATTEMPT}
|
||||
# redis-e2e-api-${RUN_ID}-${RUN_ATTEMPT}
|
||||
# `${RUN_ID}-${RUN_ATTEMPT}` is unique even across reruns of the
|
||||
# same run_id.
|
||||
# 2. Ephemeral host port per run (`-p 0:5432`), then read the actual
|
||||
# bound port via `docker port` and export DATABASE_URL/REDIS_URL
|
||||
# pointing at it. No fixed host-port → no port collision.
|
||||
# 3. `127.0.0.1` (NOT `localhost`) in URLs — IPv6 first-resolve was
|
||||
# the original flake fixed in #92 and the script's still IPv6-
|
||||
# enabled.
|
||||
# 4. `if: always()` cleanup so containers don't leak when test steps
|
||||
# fail.
|
||||
#
|
||||
# Issue #94 items #2 + #3 (also fixed here):
|
||||
# * Pre-pull `alpine:latest` so the platform-server's provisioner
|
||||
# (`internal/handlers/container_files.go`) can stand up its
|
||||
# ephemeral token-write helper without a docker.io round-trip.
|
||||
# * Create `molecule-core-net` bridge network if missing so the
|
||||
# provisioner's container.HostConfig {NetworkMode: ...} attach
|
||||
# succeeds.
|
||||
# Item #1 (timeouts) — evidence on recent runs (77/3191, ae/4270, 0e/
|
||||
# 2318) shows Postgres ready in 3s, Redis in 1s, Platform in 1s when
|
||||
# they DO come up. Timeouts are not the bottleneck; not bumped.
|
||||
#
|
||||
# Item explicitly NOT fixed here: failing test `Status back online`
|
||||
# fails because the platform's langgraph workspace template image
|
||||
# (ghcr.io/molecule-ai/workspace-template-langgraph:latest) returns
|
||||
# 403 Forbidden post-2026-05-06 GitHub org suspension. That is a
|
||||
# template-registry resolution issue (ADR-002 / local-build mode) and
|
||||
# belongs in a separate change that touches workspace-server, not
|
||||
# this workflow file.
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [main, staging]
|
||||
pull_request:
|
||||
branches: [main, staging]
|
||||
concurrency:
|
||||
# Per-SHA grouping (changed 2026-04-28 from per-ref). Per-ref had the
|
||||
# same auto-promote-staging brittleness as e2e-staging-canvas — back-
|
||||
# to-back staging pushes share refs/heads/staging, so the older push's
|
||||
# queued run gets cancelled when a newer push lands. Auto-promote-
|
||||
# staging then sees `completed/cancelled` for the older SHA and stays
|
||||
# put; the newer SHA's gates may eventually save the day, but if the
|
||||
# newer push gets cancelled too, we deadlock.
|
||||
#
|
||||
# See e2e-staging-canvas.yml's identical concurrency block for the full
|
||||
# rationale and the 2026-04-28 incident reference.
|
||||
group: e2e-api-${{ github.event.pull_request.head.sha || github.sha }}
|
||||
cancel-in-progress: false
|
||||
|
||||
env:
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
jobs:
|
||||
detect-changes:
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
outputs:
|
||||
api: ${{ steps.decide.outputs.api }}
|
||||
steps:
|
||||
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- id: decide
|
||||
# Inline replacement for dorny/paths-filter — same pattern PR#372's
|
||||
# ci.yml port used. Diffs against the PR base or push BEFORE SHA,
|
||||
# then matches against the api-relevant path set.
|
||||
run: |
|
||||
BASE="${GITHUB_BASE_REF:-${{ github.event.before }}}"
|
||||
if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then
|
||||
BASE="${{ github.event.pull_request.base.sha }}"
|
||||
fi
|
||||
if [ -z "$BASE" ] || echo "$BASE" | grep -qE '^0+$'; then
|
||||
echo "api=true" >> "$GITHUB_OUTPUT"
|
||||
exit 0
|
||||
fi
|
||||
if ! git cat-file -e "$BASE" 2>/dev/null; then
|
||||
git fetch --depth=1 origin "$BASE" 2>/dev/null || true
|
||||
fi
|
||||
if ! git cat-file -e "$BASE" 2>/dev/null; then
|
||||
echo "api=true" >> "$GITHUB_OUTPUT"
|
||||
exit 0
|
||||
fi
|
||||
CHANGED=$(git diff --name-only "$BASE" HEAD)
|
||||
if echo "$CHANGED" | grep -qE '^(workspace-server/|tests/e2e/|\.gitea/workflows/e2e-api\.yml$)'; then
|
||||
echo "api=true" >> "$GITHUB_OUTPUT"
|
||||
else
|
||||
echo "api=false" >> "$GITHUB_OUTPUT"
|
||||
fi
|
||||
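To predict locally whether a branch will flip the gate to api=true, the same pattern can be run against a plain diff; a sketch, assuming origin/main as the comparison base:

    git diff --name-only origin/main...HEAD \
      | grep -qE '^(workspace-server/|tests/e2e/|\.gitea/workflows/e2e-api\.yml$)' \
      && echo "api=true" || echo "api=false"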
|
||||
# ONE job (no job-level `if:`) that always runs and reports under the
|
||||
# required-check name `E2E API Smoke Test`. Real work is gated per-step
|
||||
# on `needs.detect-changes.outputs.api`. Reason: GitHub registers a
|
||||
# check run for every job that matches `name:`, and a job-level
|
||||
# `if: false` produces a SKIPPED check run. Branch protection treats
|
||||
# all check runs with a matching context name on the latest commit as a
|
||||
# SET — any SKIPPED in the set fails the required-check eval, even with
|
||||
# SUCCESS siblings. Verified 2026-04-29 on PR #2264 (staging→main):
|
||||
# 4 check runs (2 SKIPPED + 2 SUCCESS) at the head SHA blocked
|
||||
# promotion despite all real work succeeding. Collapsing to a single
|
||||
# always-running job with conditional steps emits exactly one SUCCESS
|
||||
# check run regardless of paths filter — branch-protection-clean.
|
||||
e2e-api:
|
||||
needs: detect-changes
|
||||
name: E2E API Smoke Test
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
timeout-minutes: 15
|
||||
env:
|
||||
# Unique per-run container names so concurrent runs on the host-
|
||||
# network act_runner don't collide on name OR port.
|
||||
# `${RUN_ID}-${RUN_ATTEMPT}` stays unique across reruns of the
|
||||
# same run_id. PORT is set later (after docker port lookup) since
|
||||
# we let Docker assign an ephemeral host port.
|
||||
PG_CONTAINER: pg-e2e-api-${{ github.run_id }}-${{ github.run_attempt }}
|
||||
REDIS_CONTAINER: redis-e2e-api-${{ github.run_id }}-${{ github.run_attempt }}
|
||||
PORT: "8080"
|
||||
steps:
|
||||
- name: No-op pass (paths filter excluded this commit)
|
||||
if: needs.detect-changes.outputs.api != 'true'
|
||||
run: |
|
||||
echo "No workspace-server / tests/e2e / workflow changes — E2E API gate satisfied without running tests."
|
||||
echo "::notice::E2E API Smoke Test no-op pass (paths filter excluded this commit)."
|
||||
- if: needs.detect-changes.outputs.api == 'true'
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
- if: needs.detect-changes.outputs.api == 'true'
|
||||
uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
|
||||
with:
|
||||
go-version: 'stable'
|
||||
cache: true
|
||||
cache-dependency-path: workspace-server/go.sum
|
||||
- name: Pre-pull alpine + ensure provisioner network (Issue #94 items #2 + #3)
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: |
|
||||
# Provisioner uses alpine:latest for ephemeral token-write
|
||||
# containers (workspace-server/internal/handlers/container_files.go).
|
||||
# Pre-pull so the first provision in test_api.sh doesn't race
|
||||
# the daemon's pull cache. Idempotent — `docker pull` is a no-op
|
||||
# when the image is already present.
|
||||
docker pull alpine:latest >/dev/null
|
||||
# Provisioner attaches workspace containers to
|
||||
# molecule-core-net (workspace-server/internal/provisioner/
|
||||
# provisioner.go::DefaultNetwork). The bridge already exists on
|
||||
# the operator host's docker daemon — `network create` is
|
||||
# idempotent via `|| true`.
|
||||
docker network create molecule-core-net >/dev/null 2>&1 || true
|
||||
echo "alpine:latest pre-pulled; molecule-core-net ensured."
|
||||
- name: Start Postgres (docker)
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: |
|
||||
# Defensive cleanup — only matches THIS run's container name,
|
||||
# so it cannot kill a sibling run's postgres. (Pre-fix the
|
||||
# name was static and this rm hit other runs' containers.)
|
||||
docker rm -f "$PG_CONTAINER" 2>/dev/null || true
|
||||
# `-p 0:5432` requests an ephemeral host port; we read it back
|
||||
# below and export DATABASE_URL.
|
||||
docker run -d --name "$PG_CONTAINER" \
|
||||
-e POSTGRES_USER=dev -e POSTGRES_PASSWORD=dev -e POSTGRES_DB=molecule \
|
||||
-p 0:5432 postgres:16 >/dev/null
|
||||
# Resolve the host-side port assignment. `docker port` prints
|
||||
# `0.0.0.0:NNNN` (and on host-net runners may also print an
|
||||
# IPv6 line — take the first IPv4 line).
|
||||
PG_PORT=$(docker port "$PG_CONTAINER" 5432/tcp | awk -F: '/^0\.0\.0\.0:/ {print $2; exit}')
|
||||
if [ -z "$PG_PORT" ]; then
|
||||
# Fallback: any first line. Some Docker versions print only
|
||||
# one line.
|
||||
PG_PORT=$(docker port "$PG_CONTAINER" 5432/tcp | head -1 | awk -F: '{print $NF}')
|
||||
fi
|
||||
if [ -z "$PG_PORT" ]; then
|
||||
echo "::error::Could not resolve host port for $PG_CONTAINER"
|
||||
docker port "$PG_CONTAINER" 5432/tcp || true
|
||||
docker logs "$PG_CONTAINER" || true
|
||||
exit 1
|
||||
fi
|
||||
# 127.0.0.1 (NOT localhost) — IPv6 first-resolve flake (#92).
|
||||
echo "PG_PORT=${PG_PORT}" >> "$GITHUB_ENV"
|
||||
echo "DATABASE_URL=postgres://dev:dev@127.0.0.1:${PG_PORT}/molecule?sslmode=disable" >> "$GITHUB_ENV"
|
||||
echo "Postgres host port: ${PG_PORT}"
|
||||
for i in $(seq 1 30); do
|
||||
if docker exec "$PG_CONTAINER" pg_isready -U dev >/dev/null 2>&1; then
|
||||
echo "Postgres ready after ${i}s"
|
||||
exit 0
|
||||
fi
|
||||
sleep 1
|
||||
done
|
||||
echo "::error::Postgres did not become ready in 30s"
|
||||
docker logs "$PG_CONTAINER" || true
|
||||
exit 1
|
||||
- name: Start Redis (docker)
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: |
|
||||
docker rm -f "$REDIS_CONTAINER" 2>/dev/null || true
|
||||
docker run -d --name "$REDIS_CONTAINER" -p 0:6379 redis:7 >/dev/null
|
||||
REDIS_PORT=$(docker port "$REDIS_CONTAINER" 6379/tcp | awk -F: '/^0\.0\.0\.0:/ {print $2; exit}')
|
||||
if [ -z "$REDIS_PORT" ]; then
|
||||
REDIS_PORT=$(docker port "$REDIS_CONTAINER" 6379/tcp | head -1 | awk -F: '{print $NF}')
|
||||
fi
|
||||
if [ -z "$REDIS_PORT" ]; then
|
||||
echo "::error::Could not resolve host port for $REDIS_CONTAINER"
|
||||
docker port "$REDIS_CONTAINER" 6379/tcp || true
|
||||
docker logs "$REDIS_CONTAINER" || true
|
||||
exit 1
|
||||
fi
|
||||
echo "REDIS_PORT=${REDIS_PORT}" >> "$GITHUB_ENV"
|
||||
echo "REDIS_URL=redis://127.0.0.1:${REDIS_PORT}" >> "$GITHUB_ENV"
|
||||
echo "Redis host port: ${REDIS_PORT}"
|
||||
for i in $(seq 1 15); do
|
||||
if docker exec "$REDIS_CONTAINER" redis-cli ping 2>/dev/null | grep -q PONG; then
|
||||
echo "Redis ready after ${i}s"
|
||||
exit 0
|
||||
fi
|
||||
sleep 1
|
||||
done
|
||||
echo "::error::Redis did not become ready in 15s"
|
||||
docker logs "$REDIS_CONTAINER" || true
|
||||
exit 1
|
||||
- name: Build platform
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
working-directory: workspace-server
|
||||
run: go build -o platform-server ./cmd/server
|
||||
- name: Start platform (background)
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
working-directory: workspace-server
|
||||
run: |
|
||||
# DATABASE_URL + REDIS_URL exported by the start-postgres /
|
||||
# start-redis steps point at this run's per-run host ports.
|
||||
./platform-server > platform.log 2>&1 &
|
||||
echo $! > platform.pid
|
||||
- name: Wait for /health
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: |
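# PORT is pinned to 8080 in the job env above; only Postgres and Redis
# get ephemeral host ports, so a hard-coded health URL is safe here.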
|
||||
for i in $(seq 1 30); do
|
||||
if curl -sf http://127.0.0.1:8080/health > /dev/null; then
|
||||
echo "Platform up after ${i}s"
|
||||
exit 0
|
||||
fi
|
||||
sleep 1
|
||||
done
|
||||
echo "::error::Platform did not become healthy in 30s"
|
||||
cat workspace-server/platform.log || true
|
||||
exit 1
|
||||
- name: Assert migrations applied
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: |
|
||||
tables=$(docker exec "$PG_CONTAINER" psql -U dev -d molecule -tAc "SELECT count(*) FROM information_schema.tables WHERE table_schema='public' AND table_name='workspaces'")
|
||||
if [ "$tables" != "1" ]; then
|
||||
echo "::error::Migrations did not apply"
|
||||
cat workspace-server/platform.log || true
|
||||
exit 1
|
||||
fi
|
||||
echo "Migrations OK"
|
||||
- name: Run E2E API tests
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: bash tests/e2e/test_api.sh
|
||||
- name: Run notify-with-attachments E2E
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: bash tests/e2e/test_notify_attachments_e2e.sh
|
||||
- name: Run priority-runtimes E2E (claude-code + hermes — skips when keys absent)
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: bash tests/e2e/test_priority_runtimes_e2e.sh
|
||||
- name: Run poll-mode + since_id cursor E2E (#2339)
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: bash tests/e2e/test_poll_mode_e2e.sh
|
||||
- name: Run poll-mode chat upload E2E (RFC #2891)
|
||||
if: needs.detect-changes.outputs.api == 'true'
|
||||
run: bash tests/e2e/test_poll_mode_chat_upload_e2e.sh
|
||||
- name: Dump platform log on failure
|
||||
if: failure() && needs.detect-changes.outputs.api == 'true'
|
||||
run: cat workspace-server/platform.log || true
|
||||
- name: Stop platform
|
||||
if: always() && needs.detect-changes.outputs.api == 'true'
|
||||
run: |
|
||||
if [ -f workspace-server/platform.pid ]; then
|
||||
kill "$(cat workspace-server/platform.pid)" 2>/dev/null || true
|
||||
fi
|
||||
- name: Stop service containers
|
||||
# always() so containers don't leak when test steps fail. The
|
||||
# cleanup is best-effort: if the container is already gone
|
||||
# (e.g. concurrent rerun race), don't fail the job.
|
||||
if: always() && needs.detect-changes.outputs.api == 'true'
|
||||
run: |
|
||||
docker rm -f "$PG_CONTAINER" 2>/dev/null || true
|
||||
docker rm -f "$REDIS_CONTAINER" 2>/dev/null || true
|
||||
247
.gitea/workflows/e2e-staging-canvas.yml
Normal file
@ -0,0 +1,247 @@
|
||||
name: E2E Staging Canvas (Playwright)
|
||||
|
||||
# Ported from .github/workflows/e2e-staging-canvas.yml on 2026-05-11 per RFC
|
||||
# internal#219 §1 sweep. Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
|
||||
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
|
||||
# - Dropped `merge_group:` (no Gitea merge queue).
|
||||
# - Dropped `environment:` blocks (Gitea has no environments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL pinned per
|
||||
# feedback_act_runner_github_server_url.
|
||||
# - `continue-on-error: true` on each job (RFC §1 contract).
|
||||
#
|
||||
|
||||
# Playwright test suite that provisions a fresh staging org per run and
|
||||
# verifies every workspace-panel tab renders without crashing. Complements
|
||||
# e2e-staging-saas.yml (which tests the API shape) by exercising the
|
||||
# actual browser + canvas bundle against live staging.
|
||||
#
|
||||
# Triggers: push or pull_request to main touching canvas sources + this
# workflow (gated by detect-changes below), plus a weekly cron to catch
# browser/runtime drift even when canvas is quiet.
# The GitHub version also listened on the staging branch, so that the
# auto-promote gate check (--event push --branch staging) could see a
# completed run for this workflow (mirroring what PR #1891 does for
# e2e-api.yml), and on manual dispatch; neither trigger is wired in
# this port, where main is the only branch.
|
||||
|
||||
on:
|
||||
# Trigger model (revised 2026-04-29):
|
||||
#
|
||||
# Always fires on push/pull_request; real work is gated per-step on
|
||||
# `needs.detect-changes.outputs.canvas`. When canvas/ paths haven't
|
||||
# changed, the no-op step alone runs and emits SUCCESS for the
|
||||
# `Canvas tabs E2E` check, satisfying branch protection without
|
||||
# spending CI cycles. See e2e-api.yml for the rationale on why this
|
||||
# is a single job rather than two-jobs-sharing-name.
|
||||
push:
|
||||
branches: [main]
|
||||
pull_request:
|
||||
branches: [main]
|
||||
schedule:
|
||||
# Weekly on Sunday 08:00 UTC — catches Chrome / Playwright / Next.js
|
||||
# release-note-shaped regressions that don't ride in with a PR.
|
||||
- cron: '0 8 * * 0'
|
||||
|
||||
concurrency:
|
||||
# Per-SHA grouping (changed 2026-04-28 from a single global group). The
|
||||
# global group made auto-promote-staging brittle: when a staging push
|
||||
# queued behind an in-flight run and a third entrant (a PR run, a
|
||||
# follow-on push) entered the group, the staging push got cancelled —
|
||||
# leaving auto-promote-staging looking at `completed/cancelled` for a
|
||||
# required gate and refusing to advance main. Observed 2026-04-28
|
||||
# 23:51-23:53 on staging tip 3f99fede.
|
||||
#
|
||||
# The original intent of the global group was to throttle parallel
|
||||
# E2E provisions (each spins a fresh EC2). At our scale that throttle
|
||||
# isn't worth the correctness cost — fresh-org-per-run isolates the
|
||||
# state, and the cost of two parallel runs (~$0.001/min × 10min × 2)
|
||||
# is rounding error vs. the cost of a stuck pipeline.
|
||||
#
|
||||
# Per-SHA still dedupes accidental double-triggers for the SAME SHA.
|
||||
# It does NOT cancel obsolete-PR-version runs on force-push; that
|
||||
# wasted CI is acceptable given the alternative is losing staging-tip
|
||||
# data that auto-promote-staging needs.
|
||||
group: e2e-staging-canvas-${{ github.event.pull_request.head.sha || github.sha }}
|
||||
cancel-in-progress: false
|
||||
|
||||
env:
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
jobs:
|
||||
detect-changes:
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
outputs:
|
||||
canvas: ${{ steps.decide.outputs.canvas }}
|
||||
steps:
|
||||
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- id: decide
|
||||
# Inline replacement for dorny/paths-filter — see e2e-api.yml.
|
||||
# Cron triggers always run real work (no diff context).
|
||||
run: |
|
||||
if [ "${{ github.event_name }}" = "schedule" ]; then
|
||||
echo "canvas=true" >> "$GITHUB_OUTPUT"
|
||||
exit 0
|
||||
fi
|
||||
BASE="${GITHUB_BASE_REF:-${{ github.event.before }}}"
|
||||
if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then
|
||||
BASE="${{ github.event.pull_request.base.sha }}"
|
||||
fi
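# An all-zeros "before" SHA means this push created the branch (there is
# no prior tip to diff against), so fall back to running the full suite.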
|
||||
if [ -z "$BASE" ] || echo "$BASE" | grep -qE '^0+$'; then
|
||||
echo "canvas=true" >> "$GITHUB_OUTPUT"
|
||||
exit 0
|
||||
fi
|
||||
if ! git cat-file -e "$BASE" 2>/dev/null; then
|
||||
git fetch --depth=1 origin "$BASE" 2>/dev/null || true
|
||||
fi
|
||||
if ! git cat-file -e "$BASE" 2>/dev/null; then
|
||||
echo "canvas=true" >> "$GITHUB_OUTPUT"
|
||||
exit 0
|
||||
fi
|
||||
CHANGED=$(git diff --name-only "$BASE" HEAD)
|
||||
if echo "$CHANGED" | grep -qE '^(canvas/|\.gitea/workflows/e2e-staging-canvas\.yml$)'; then
|
||||
echo "canvas=true" >> "$GITHUB_OUTPUT"
|
||||
else
|
||||
echo "canvas=false" >> "$GITHUB_OUTPUT"
|
||||
fi
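# e.g. a diff touching only canvas/app/layout.tsx (hypothetical path)
# yields canvas=true; one touching only workspace-server/ yields
# canvas=false and the job below no-ops.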
|
||||
|
||||
# ONE job (no job-level `if:`) that always runs and reports under the
|
||||
# required-check name `Canvas tabs E2E`. Real work is gated per-step on
|
||||
# `needs.detect-changes.outputs.canvas`. See e2e-api.yml for the full
|
||||
# rationale — same path-filter check-name parity issue blocked PR #2264
|
||||
# (staging→main) on 2026-04-29 because branch protection treats matching-
|
||||
# name check runs as a SET, and any SKIPPED member fails the eval.
|
||||
playwright:
|
||||
needs: detect-changes
|
||||
name: Canvas tabs E2E
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
timeout-minutes: 40
|
||||
|
||||
env:
|
||||
CANVAS_E2E_STAGING: '1'
|
||||
MOLECULE_CP_URL: https://staging-api.moleculesai.app
|
||||
MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
|
||||
defaults:
|
||||
run:
|
||||
working-directory: canvas
|
||||
|
||||
steps:
|
||||
- name: No-op pass (paths filter excluded this commit)
|
||||
if: needs.detect-changes.outputs.canvas != 'true'
|
||||
working-directory: .
|
||||
run: |
|
||||
echo "No canvas / workflow changes — E2E Staging Canvas gate satisfied without running tests."
|
||||
echo "::notice::E2E Staging Canvas no-op pass (paths filter excluded this commit)."
|
||||
|
||||
- if: needs.detect-changes.outputs.canvas == 'true'
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
|
||||
- name: Verify admin token present
|
||||
if: needs.detect-changes.outputs.canvas == 'true'
|
||||
run: |
|
||||
if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then
|
||||
echo "::error::Missing MOLECULE_STAGING_ADMIN_TOKEN"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
- name: Set up Node
|
||||
if: needs.detect-changes.outputs.canvas == 'true'
|
||||
uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0
|
||||
with:
|
||||
node-version: '20'
|
||||
cache: 'npm'
|
||||
cache-dependency-path: canvas/package-lock.json
|
||||
|
||||
- name: Install canvas deps
|
||||
if: needs.detect-changes.outputs.canvas == 'true'
|
||||
run: npm ci
|
||||
|
||||
- name: Install Playwright browsers
|
||||
if: needs.detect-changes.outputs.canvas == 'true'
|
||||
run: npx playwright install --with-deps chromium
|
||||
|
||||
- name: Run staging canvas E2E
|
||||
if: needs.detect-changes.outputs.canvas == 'true'
|
||||
run: npx playwright test --config=playwright.staging.config.ts
|
||||
|
||||
- name: Upload Playwright report on failure
|
||||
if: failure() && needs.detect-changes.outputs.canvas == 'true'
|
||||
# Pinned to v3 for Gitea act_runner v0.6 compatibility — v4+ uses
|
||||
# the GHES 3.10+ artifact protocol that Gitea 1.22.x does NOT
|
||||
# implement (see ci.yml upload step for the canonical error
|
||||
# cite). Drop this pin when Gitea ships the v4 protocol.
|
||||
uses: actions/upload-artifact@c6a366c94c3e0affe28c06c8df20a878f24da3cf # v3.2.2
|
||||
with:
|
||||
name: playwright-report-staging
|
||||
path: canvas/playwright-report-staging/
|
||||
retention-days: 14
|
||||
|
||||
- name: Upload screenshots on failure
|
||||
if: failure() && needs.detect-changes.outputs.canvas == 'true'
|
||||
# Pinned to v3 for Gitea act_runner v0.6 compatibility (see above).
|
||||
uses: actions/upload-artifact@c6a366c94c3e0affe28c06c8df20a878f24da3cf # v3.2.2
|
||||
with:
|
||||
name: playwright-screenshots
|
||||
path: canvas/test-results/
|
||||
retention-days: 14
|
||||
|
||||
# Safety-net teardown — fires only when Playwright's globalTeardown
|
||||
# didn't (worker crash, runner cancel). Reads the slug from
|
||||
# canvas/.playwright-staging-state.json (written by staging-setup
|
||||
# as its first action, before any CP call) and deletes only that
|
||||
# slug.
|
||||
#
|
||||
# Earlier versions of this step pattern-swept `e2e-canvas-<today>-*`
|
||||
# orgs to compensate for setup-crash-before-state-file-write. That
|
||||
# over-aggressive cleanup raced concurrent canvas-E2E runs and
|
||||
# poisoned each other's tenants — observed 2026-04-30 when three
|
||||
# real-test runs killed each other mid-test, surfacing as
|
||||
# `getaddrinfo ENOTFOUND` once CP had cleaned up the just-deleted
|
||||
# DNS record. Pattern-sweep removed; setup now writes the state
|
||||
# file before any CP work, so the slug is always recoverable.
|
||||
- name: Teardown safety net
|
||||
if: always() && needs.detect-changes.outputs.canvas == 'true'
|
||||
env:
|
||||
ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
run: |
|
||||
set +e
|
||||
STATE_FILE=".playwright-staging-state.json"
|
||||
if [ ! -f "$STATE_FILE" ]; then
|
||||
echo "::notice::No state file at canvas/$STATE_FILE — Playwright globalTeardown handled it (or setup never ran)."
|
||||
exit 0
|
||||
fi
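# Expected state-file shape (illustrative; the slug value is made up):
#   {"slug": "e2e-canvas-20260430-abc123"}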
|
||||
slug=$(python3 -c "import json; print(json.load(open('$STATE_FILE')).get('slug',''))")
|
||||
if [ -z "$slug" ]; then
|
||||
echo "::warning::State file present but slug missing; nothing to clean up."
|
||||
exit 0
|
||||
fi
|
||||
echo "Deleting orphan tenant: $slug"
|
||||
# Verify HTTP 2xx instead of `>/dev/null || true` swallowing
|
||||
# failures. A 5xx or timeout previously looked identical to
|
||||
# success, leaving the tenant alive for up to ~45 min until
|
||||
# sweep-stale-e2e-orgs caught it. Surface failures as
|
||||
# workflow warnings naming the slug. Don't `exit 1` — a single
|
||||
# cleanup miss shouldn't fail-flag the canvas test when the
|
||||
# actual smoke check passed; the sweeper is the safety net.
|
||||
# See molecule-controlplane#420.
|
||||
# Tempfile-routed -w + set +e/-e prevents curl-exit-code
|
||||
# pollution of the captured status (lint-curl-status-capture.yml).
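# (A naive `code=$(curl -sS -w "%{http_code}" ...)` capture can, under
# `set -e`, abort the script when curl itself exits non-zero, e.g. on a
# DNS failure or timeout, before the status is ever examined; routing
# the body and code through tempfiles sidesteps that.)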
|
||||
set +e
|
||||
curl -sS -o /tmp/canvas-cleanup.out -w "%{http_code}" \
|
||||
-X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "{\"confirm\":\"$slug\"}" >/tmp/canvas-cleanup.code
|
||||
set -e
|
||||
code=$(cat /tmp/canvas-cleanup.code 2>/dev/null || echo "000")
|
||||
if [ "$code" = "200" ] || [ "$code" = "204" ]; then
|
||||
echo "[teardown] deleted $slug (HTTP $code)"
|
||||
else
|
||||
echo "::warning::canvas teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/canvas-cleanup.out 2>/dev/null)"
|
||||
fi
|
||||
exit 0
|
||||
189
.gitea/workflows/e2e-staging-external.yml
Normal file
@ -0,0 +1,189 @@
|
||||
name: E2E Staging External Runtime
|
||||
|
||||
# Ported from .github/workflows/e2e-staging-external.yml on 2026-05-11 per RFC
|
||||
# internal#219 §1 sweep. Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
|
||||
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
|
||||
# - Dropped `merge_group:` (no Gitea merge queue).
|
||||
# - Dropped `environment:` blocks (Gitea has no environments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL pinned per
|
||||
# feedback_act_runner_github_server_url.
|
||||
# - `continue-on-error: true` on each job (RFC §1 contract).
|
||||
#
|
||||
|
||||
# Regression test for the four/five workspaces.status=awaiting_agent transitions
|
||||
# that silently failed in production for five days before migration 046
|
||||
# extended the workspace_status enum (see
|
||||
# workspace-server/migrations/046_workspace_status_awaiting_agent.up.sql).
|
||||
#
|
||||
# Why this is its own workflow (not folded into e2e-staging-saas.yml):
|
||||
# - The full-saas harness defaults to runtime=hermes, never exercises
|
||||
# external-runtime. Adding an `external` parameter to that script
|
||||
# would force every push to staging through both lifecycles in
|
||||
# series, doubling the EC2 cold-start budget.
|
||||
# - The external lifecycle has unique timing (REMOTE_LIVENESS_STALE_AFTER
|
||||
# window, 90s default + sweep interval), which we wait through
|
||||
# deliberately. Folding it into hermes would make the long path
|
||||
# even longer.
|
||||
# - It can run in parallel with the hermes E2E since both create
|
||||
# fresh tenant orgs with distinct slug prefixes (`e2e-ext-...` vs
|
||||
# `e2e-...`).
|
||||
#
|
||||
# Triggers:
# - Push to main when any source affecting external runtime,
#   hibernation, or the migration set changes.
# - PR to main for the same path set.
# - Daily cron at 07:30 UTC (catches drift on quiet days; staggered
#   30 min after e2e-staging-saas.yml's 07:00 UTC cron).
# The GitHub version also ran on the staging branch and on manual
# workflow_dispatch; neither trigger is wired in this port.
|
||||
#
|
||||
# Concurrency: serialized so two staging pushes don't fight for the
|
||||
# same EC2 quota window. cancel-in-progress=false so a half-rolled
|
||||
# tenant always finishes its teardown.
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [main]
|
||||
paths:
|
||||
- 'workspace-server/internal/handlers/workspace.go'
|
||||
- 'workspace-server/internal/handlers/registry.go'
|
||||
- 'workspace-server/internal/handlers/workspace_restart.go'
|
||||
- 'workspace-server/internal/registry/healthsweep.go'
|
||||
- 'workspace-server/internal/registry/liveness.go'
|
||||
- 'workspace-server/migrations/**'
|
||||
- 'workspace-server/internal/db/workspace_status_enum_drift_test.go'
|
||||
- 'tests/e2e/test_staging_external_runtime.sh'
|
||||
- '.gitea/workflows/e2e-staging-external.yml'
|
||||
pull_request:
|
||||
branches: [main]
|
||||
paths:
|
||||
- 'workspace-server/internal/handlers/workspace.go'
|
||||
- 'workspace-server/internal/handlers/registry.go'
|
||||
- 'workspace-server/internal/handlers/workspace_restart.go'
|
||||
- 'workspace-server/internal/registry/healthsweep.go'
|
||||
- 'workspace-server/internal/registry/liveness.go'
|
||||
- 'workspace-server/migrations/**'
|
||||
- 'workspace-server/internal/db/workspace_status_enum_drift_test.go'
|
||||
- 'tests/e2e/test_staging_external_runtime.sh'
|
||||
- '.gitea/workflows/e2e-staging-external.yml'
|
||||
schedule:
|
||||
- cron: '30 7 * * *'
|
||||
|
||||
concurrency:
|
||||
group: e2e-staging-external
|
||||
cancel-in-progress: false
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
env:
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
jobs:
|
||||
e2e-staging-external:
|
||||
name: E2E Staging External Runtime
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
timeout-minutes: 25
|
||||
|
||||
env:
|
||||
MOLECULE_CP_URL: https://staging-api.moleculesai.app
|
||||
MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
E2E_RUN_ID: "${{ github.run_id }}-${{ github.run_attempt }}"
|
||||
E2E_KEEP_ORG: ${{ github.event.inputs.keep_org && '1' || '0' }}
|
||||
E2E_STALE_WAIT_SECS: ${{ github.event.inputs.stale_wait_secs || '180' }}
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
|
||||
- name: Verify admin token present
|
||||
run: |
|
||||
if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then
|
||||
# Schedule + push triggers must hard-fail when the token is
|
||||
# missing — silent skip would mask infra rot. Manual dispatch
|
||||
# gets the same hard-fail; an operator running this on a fork
|
||||
# without secrets configured needs to know up-front.
|
||||
echo "::error::MOLECULE_STAGING_ADMIN_TOKEN secret not set (Railway staging CP_ADMIN_API_TOKEN)"
|
||||
exit 2
|
||||
fi
|
||||
echo "Admin token present ✓"
|
||||
|
||||
- name: CP staging health preflight
|
||||
run: |
|
||||
code=$(curl -sS -o /dev/null -w "%{http_code}" --max-time 10 "$MOLECULE_CP_URL/health")
|
||||
if [ "$code" != "200" ]; then
|
||||
echo "::error::Staging CP unhealthy (got HTTP $code). Skipping — not a workspace bug."
|
||||
exit 1
|
||||
fi
|
||||
echo "Staging CP healthy ✓"
|
||||
|
||||
- name: Run external-runtime E2E
|
||||
id: e2e
|
||||
run: bash tests/e2e/test_staging_external_runtime.sh
|
||||
|
||||
# Mirror the e2e-staging-saas.yml safety net: if the runner is
|
||||
# cancelled (e.g. concurrent staging push), the test script's
|
||||
# EXIT trap may not fire, so we sweep e2e-ext-* slugs scoped to
|
||||
# *this* run id.
|
||||
- name: Teardown safety net (runs on cancel/failure)
|
||||
if: always()
|
||||
env:
|
||||
ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
run: |
|
||||
set +e
|
||||
orgs=$(curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" 2>/dev/null \
|
||||
| python3 -c "
|
||||
import json, sys, os, datetime
|
||||
run_id = os.environ.get('GITHUB_RUN_ID', '')
|
||||
d = json.load(sys.stdin)
|
||||
# Scope STRICTLY to this run id (e2e-ext-YYYYMMDD-<runid>-...)
|
||||
# so concurrent runs and unrelated dev probes are not touched.
|
||||
# Sweep today AND yesterday so a midnight-crossing run still
|
||||
# cleans up its own slug.
|
||||
today = datetime.date.today()
|
||||
yesterday = today - datetime.timedelta(days=1)
|
||||
dates = (today.strftime('%Y%m%d'), yesterday.strftime('%Y%m%d'))
|
||||
if not run_id:
|
||||
# Without a run id we cannot scope safely; bail rather
|
||||
# than risk deleting unrelated tenants.
|
||||
sys.exit(0)
|
||||
prefixes = tuple(f'e2e-ext-{d}-{run_id}-' for d in dates)
|
||||
for o in d.get('orgs', []):
|
||||
s = o.get('slug', '')
|
||||
if s.startswith(prefixes) and o.get('status') != 'purged':
|
||||
print(s)
|
||||
" 2>/dev/null)
|
||||
if [ -n "$orgs" ]; then
|
||||
echo "Safety-net sweep: deleting leftover orgs:"
|
||||
echo "$orgs"
|
||||
# Per-slug verified DELETE — see molecule-controlplane#420.
|
||||
# `>/dev/null 2>&1` previously hid every failure; surface
|
||||
# non-2xx as workflow warnings so the run page names what
|
||||
# leaked. Sweeper catches the rest within ~45 min.
|
||||
leaks=()
|
||||
for slug in $orgs; do
|
||||
# Tempfile-routed -w + set +e/-e prevents curl-exit-code
|
||||
# pollution of the captured status (lint-curl-status-capture.yml).
|
||||
set +e
|
||||
curl -sS -o /tmp/external-cleanup.out -w "%{http_code}" \
|
||||
-X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "{\"confirm\":\"$slug\"}" >/tmp/external-cleanup.code
|
||||
set -e
|
||||
code=$(cat /tmp/external-cleanup.code 2>/dev/null || echo "000")
|
||||
if [ "$code" = "200" ] || [ "$code" = "204" ]; then
|
||||
echo "[teardown] deleted $slug (HTTP $code)"
|
||||
else
|
||||
echo "::warning::external teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/external-cleanup.out 2>/dev/null)"
|
||||
leaks+=("$slug")
|
||||
fi
|
||||
done
|
||||
if [ ${#leaks[@]} -gt 0 ]; then
|
||||
echo "::warning::external teardown left ${#leaks[@]} leak(s): ${leaks[*]}"
|
||||
fi
|
||||
else
|
||||
echo "Safety-net sweep: no leftover orgs to clean."
|
||||
fi
|
||||
251
.gitea/workflows/e2e-staging-saas.yml
Normal file
@ -0,0 +1,251 @@
|
||||
name: E2E Staging SaaS (full lifecycle)
|
||||
|
||||
# Ported from .github/workflows/e2e-staging-saas.yml on 2026-05-11 per RFC
|
||||
# internal#219 §1 sweep. Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
|
||||
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
|
||||
# - Dropped `merge_group:` (no Gitea merge queue).
|
||||
# - Dropped `environment:` blocks (Gitea has no environments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL pinned per
|
||||
# feedback_act_runner_github_server_url.
|
||||
# - `continue-on-error: true` on each job (RFC §1 contract).
|
||||
#
|
||||
|
||||
# Dedicated workflow that provisions a fresh staging org per run, exercises
|
||||
# the full workspace lifecycle (register → heartbeat → A2A → delegation →
|
||||
# HMA memory → activity → peers), then tears down and asserts leak-free.
|
||||
#
|
||||
# Why a separate workflow (not folded into ci.yml):
|
||||
# - The run takes ~25-35 min (EC2 boot + cloudflared DNS + provision sweeps +
|
||||
# agent bootstrap), way too slow for every PR.
|
||||
# - Needs its own concurrency group so two pushes don't fight over the
|
||||
# same staging org slug prefix.
|
||||
# - Has its own required secrets (session cookie, admin token) that most
|
||||
# PRs don't need to read.
|
||||
#
|
||||
# Triggers:
|
||||
# - Push to main (regression guard)
|
||||
# - workflow_dispatch (manual re-run from UI)
|
||||
# - Nightly cron (catches drift even when no pushes land)
|
||||
# - Changes to any provisioning-critical file under PR review (opt-in
|
||||
# via the same paths watcher that e2e-api.yml uses)
|
||||
|
||||
on:
|
||||
# Trunk-based (Phase 3 of internal#81): main is the only branch.
|
||||
# Previously this fired on staging push too because staging was a
|
||||
# superset of main and ran the gate ahead of auto-promote; with no
|
||||
# staging branch, main is where E2E gates the deploy.
|
||||
push:
|
||||
branches: [main]
|
||||
paths:
|
||||
- 'workspace-server/internal/handlers/registry.go'
|
||||
- 'workspace-server/internal/handlers/workspace_provision.go'
|
||||
- 'workspace-server/internal/handlers/a2a_proxy.go'
|
||||
- 'workspace-server/internal/middleware/**'
|
||||
- 'workspace-server/internal/provisioner/**'
|
||||
- 'tests/e2e/test_staging_full_saas.sh'
|
||||
- '.gitea/workflows/e2e-staging-saas.yml'
|
||||
pull_request:
|
||||
branches: [main]
|
||||
paths:
|
||||
- 'workspace-server/internal/handlers/registry.go'
|
||||
- 'workspace-server/internal/handlers/workspace_provision.go'
|
||||
- 'workspace-server/internal/handlers/a2a_proxy.go'
|
||||
- 'workspace-server/internal/middleware/**'
|
||||
- 'workspace-server/internal/provisioner/**'
|
||||
- 'tests/e2e/test_staging_full_saas.sh'
|
||||
- '.gitea/workflows/e2e-staging-saas.yml'
|
||||
schedule:
|
||||
# 07:00 UTC every day — catches AMI drift, WorkOS cert rotation,
|
||||
# Cloudflare API regressions, etc. even on quiet days.
|
||||
- cron: '0 7 * * *'
|
||||
|
||||
# Serialize: staging has a finite per-hour org creation quota. Two pushes
|
||||
# landing in quick succession should queue, not race. `cancel-in-progress:
|
||||
# false` mirrors e2e-api.yml — GitHub would otherwise cancel the running
|
||||
# teardown step and leave orphan EC2s.
|
||||
concurrency:
|
||||
group: e2e-staging-saas
|
||||
cancel-in-progress: false
|
||||
|
||||
env:
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
jobs:
|
||||
e2e-staging-saas:
|
||||
name: E2E Staging SaaS
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
timeout-minutes: 45
|
||||
permissions:
|
||||
contents: read
|
||||
|
||||
env:
|
||||
MOLECULE_CP_URL: https://staging-api.moleculesai.app
|
||||
# Single admin-bearer secret drives provision + tenant-token
|
||||
# retrieval + teardown. Configure in
|
||||
# Settings → Secrets and variables → Actions → Repository secrets.
|
||||
MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
# MiniMax is the PRIMARY LLM auth path post-2026-05-04. Switched
|
||||
# from hermes+OpenAI default after #2578 (the staging OpenAI key
|
||||
# account went over quota and stayed dead for 36+ hours, taking
|
||||
# the full-lifecycle E2E red on every provisioning-critical push).
|
||||
# claude-code template's `minimax` provider routes
|
||||
# ANTHROPIC_BASE_URL to api.minimax.io/anthropic and reads
|
||||
# MINIMAX_API_KEY at boot — separate billing account so an
|
||||
# OpenAI quota collapse no longer wedges the gate. Mirrors the
|
||||
# canary-staging.yml + continuous-synth-e2e.yml migrations.
|
||||
E2E_MINIMAX_API_KEY: ${{ secrets.MOLECULE_STAGING_MINIMAX_API_KEY }}
|
||||
# Direct-Anthropic alternative for operators who don't want to
|
||||
# set up a MiniMax account (priority below MiniMax — first
|
||||
# non-empty wins in test_staging_full_saas.sh's secrets-injection
|
||||
# block). See #2578 PR comment for the rationale.
|
||||
E2E_ANTHROPIC_API_KEY: ${{ secrets.MOLECULE_STAGING_ANTHROPIC_API_KEY }}
|
||||
# OpenAI fallback — kept wired so an operator-dispatched run with
|
||||
# E2E_RUNTIME=hermes or =langgraph via workflow_dispatch can still
|
||||
# exercise the OpenAI path.
|
||||
E2E_OPENAI_API_KEY: ${{ secrets.MOLECULE_STAGING_OPENAI_KEY }}
|
||||
E2E_RUNTIME: ${{ github.event.inputs.runtime || 'claude-code' }}
|
||||
# Pin the model when running on the default claude-code path —
|
||||
# the per-runtime default ("sonnet") routes to direct Anthropic
|
||||
# and defeats the cost saving. Operators can override via the
|
||||
# workflow_dispatch flow (no input wired here yet — runtime
|
||||
# override is enough for ad-hoc).
|
||||
E2E_MODEL_SLUG: ${{ github.event.inputs.runtime == 'hermes' && 'openai/gpt-4o' || github.event.inputs.runtime == 'langgraph' && 'openai:gpt-4o' || 'MiniMax-M2.7-highspeed' }}
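# With workflow_dispatch inputs dropped in this port,
# github.event.inputs.runtime is always empty, so the expression above
# resolves to 'MiniMax-M2.7-highspeed' on every trigger; the hermes /
# langgraph branches only apply if dispatch inputs are re-added.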
|
||||
E2E_RUN_ID: "${{ github.run_id }}-${{ github.run_attempt }}"
|
||||
E2E_KEEP_ORG: ${{ github.event.inputs.keep_org && '1' || '0' }}
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
|
||||
- name: Verify admin token present
|
||||
run: |
|
||||
if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then
|
||||
echo "::error::MOLECULE_STAGING_ADMIN_TOKEN secret not set (Railway staging CP_ADMIN_API_TOKEN)"
|
||||
exit 2
|
||||
fi
|
||||
echo "Admin token present ✓"
|
||||
|
||||
- name: Verify LLM key present
|
||||
run: |
|
||||
# Per-runtime key check — claude-code uses MiniMax; hermes /
|
||||
# langgraph (operator-dispatched only) use OpenAI. Hard-fail
|
||||
# rather than soft-skip per #2578's lesson — empty key
|
||||
# silently falls through to the wrong SECRETS_JSON branch and
|
||||
# produces a confusing auth error 5 min later instead of the
|
||||
# clean "secret missing" message at the top.
|
||||
case "${E2E_RUNTIME}" in
|
||||
claude-code)
|
||||
# Either MiniMax OR direct-Anthropic works — first
|
||||
# non-empty wins in the test script's secrets-injection
|
||||
# priority chain.
|
||||
if [ -n "${E2E_MINIMAX_API_KEY:-}" ]; then
|
||||
required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY"
|
||||
required_secret_value="${E2E_MINIMAX_API_KEY}"
|
||||
elif [ -n "${E2E_ANTHROPIC_API_KEY:-}" ]; then
|
||||
required_secret_name="MOLECULE_STAGING_ANTHROPIC_API_KEY"
|
||||
required_secret_value="${E2E_ANTHROPIC_API_KEY}"
|
||||
else
|
||||
required_secret_name="MOLECULE_STAGING_MINIMAX_API_KEY or MOLECULE_STAGING_ANTHROPIC_API_KEY"
|
||||
required_secret_value=""
|
||||
fi
|
||||
;;
|
||||
langgraph|hermes)
|
||||
required_secret_name="MOLECULE_STAGING_OPENAI_KEY"
|
||||
required_secret_value="${E2E_OPENAI_API_KEY:-}"
|
||||
;;
|
||||
*)
|
||||
echo "::warning::Unknown E2E_RUNTIME='${E2E_RUNTIME}' — skipping LLM-key check"
|
||||
required_secret_name=""
|
||||
required_secret_value="present"
|
||||
;;
|
||||
esac
|
||||
if [ -n "$required_secret_name" ] && [ -z "$required_secret_value" ]; then
|
||||
echo "::error::${required_secret_name} secret not set for runtime=${E2E_RUNTIME} — workspaces will fail at boot with 'No provider API key found'"
|
||||
exit 2
|
||||
fi
|
||||
echo "LLM key present ✓ (runtime=${E2E_RUNTIME}, key=${required_secret_name}, len=${#required_secret_value})"
|
||||
|
||||
- name: CP staging health preflight
|
||||
run: |
|
||||
code=$(curl -sS -o /dev/null -w "%{http_code}" --max-time 10 "$MOLECULE_CP_URL/health")
|
||||
if [ "$code" != "200" ]; then
|
||||
echo "::error::Staging CP unhealthy (got HTTP $code). Skipping — not a workspace bug."
|
||||
exit 1
|
||||
fi
|
||||
echo "Staging CP healthy ✓"
|
||||
|
||||
- name: Run full-lifecycle E2E
|
||||
id: e2e
|
||||
run: bash tests/e2e/test_staging_full_saas.sh
|
||||
|
||||
# Belt-and-braces teardown: the test script itself installs a trap
|
||||
# for EXIT/INT/TERM, but if the GH runner itself is cancelled (e.g.
|
||||
# someone pushes a new commit and workflow concurrency is set to
|
||||
# cancel), the trap may not fire. This `always()` step runs even on
|
||||
# cancellation and attempts the delete a second time. The admin
|
||||
# DELETE endpoint is idempotent so double-invoking is safe.
|
||||
- name: Teardown safety net (runs on cancel/failure)
|
||||
if: always()
|
||||
env:
|
||||
ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
run: |
|
||||
# Best-effort: find any e2e-YYYYMMDD-* orgs matching this run and
|
||||
# nuke them. Catches the case where the script died before
|
||||
# exporting its slug.
|
||||
set +e
|
||||
orgs=$(curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" 2>/dev/null \
|
||||
| python3 -c "
|
||||
import json, sys, os, datetime
|
||||
run_id = os.environ.get('GITHUB_RUN_ID', '')
|
||||
d = json.load(sys.stdin)
|
||||
# ONLY sweep slugs from *this* CI run. Previously the filter was
|
||||
# f'e2e-{today}-' which stomped on parallel CI runs AND any manual
|
||||
# E2E probes a dev was running against staging (incident 2026-04-21
|
||||
# 15:02Z: this workflow's safety net deleted an unrelated manual
|
||||
# run's tenant 1s after it hit 'running').
|
||||
# Sweep both today AND yesterday's UTC dates so a run that crosses
|
||||
# midnight still matches its own slug — see the 2026-04-26→27
|
||||
# canvas-safety-net incident for the same bug class.
|
||||
today = datetime.date.today()
|
||||
yesterday = today - datetime.timedelta(days=1)
|
||||
dates = (today.strftime('%Y%m%d'), yesterday.strftime('%Y%m%d'))
|
||||
if run_id:
|
||||
prefixes = tuple(f'e2e-{d}-{run_id}-' for d in dates)
|
||||
else:
|
||||
prefixes = tuple(f'e2e-{d}-' for d in dates)
|
||||
candidates = [o['slug'] for o in d.get('orgs', [])
|
||||
if any(o.get('slug','').startswith(p) for p in prefixes)
|
||||
and o.get('instance_status') not in ('purged',)]
|
||||
print('\n'.join(candidates))
|
||||
" 2>/dev/null)
|
||||
# Per-slug verified DELETE (was `>/dev/null || true` — see
|
||||
# molecule-controlplane#420). Surface non-2xx as a workflow
|
||||
# warning naming the leaked slug; don't exit 1 (sweeper is
|
||||
# the safety net within ~45 min).
|
||||
leaks=()
|
||||
for slug in $orgs; do
|
||||
echo "Safety-net teardown: $slug"
|
||||
# Tempfile-routed -w + set +e/-e prevents curl-exit-code
|
||||
# pollution of the captured status (lint-curl-status-capture.yml).
|
||||
set +e
|
||||
curl -sS -o /tmp/saas-cleanup.out -w "%{http_code}" \
|
||||
-X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "{\"confirm\":\"$slug\"}" >/tmp/saas-cleanup.code
|
||||
set -e
|
||||
code=$(cat /tmp/saas-cleanup.code 2>/dev/null || echo "000")
|
||||
if [ "$code" = "200" ] || [ "$code" = "204" ]; then
|
||||
echo "[teardown] deleted $slug (HTTP $code)"
|
||||
else
|
||||
echo "::warning::saas teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/saas-cleanup.out 2>/dev/null)"
|
||||
leaks+=("$slug")
|
||||
fi
|
||||
done
|
||||
if [ ${#leaks[@]} -gt 0 ]; then
|
||||
echo "::warning::saas teardown left ${#leaks[@]} leak(s): ${leaks[*]}"
|
||||
fi
|
||||
exit 0
|
||||
157
.gitea/workflows/e2e-staging-sanity.yml
Normal file
@ -0,0 +1,157 @@
|
||||
name: E2E Staging Sanity (leak-detection self-check)
|
||||
|
||||
# Ported from .github/workflows/e2e-staging-sanity.yml on 2026-05-11 per
|
||||
# RFC internal#219 §1 sweep.
|
||||
#
|
||||
# Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch:` (Gitea 1.22.6 finicky on bare dispatch).
|
||||
# - `actions/github-script@v9` issue-open block replaced with curl
|
||||
# calls to the Gitea REST API (/api/v1/repos/.../issues|comments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL set.
|
||||
# - `continue-on-error: true` on the job (RFC §1 contract).
|
||||
#
|
||||
# Periodic assertion that the teardown safety nets in e2e-staging-saas
|
||||
# and canary-staging actually work. Runs the E2E harness with
|
||||
# E2E_INTENTIONAL_FAILURE=1, which poisons the tenant admin token after
|
||||
# the org is provisioned. The workspace-provision step then fails, the
|
||||
# script exits non-zero, and the EXIT trap + workflow always()-step
|
||||
# must still tear down cleanly.
|
||||
|
||||
on:
|
||||
schedule:
|
||||
- cron: '0 6 * * 1'
|
||||
|
||||
env:
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
concurrency:
|
||||
group: e2e-staging-sanity
|
||||
cancel-in-progress: false
|
||||
|
||||
permissions:
|
||||
issues: write
|
||||
contents: read
|
||||
|
||||
jobs:
|
||||
sanity:
|
||||
name: Intentional-failure teardown sanity
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
timeout-minutes: 20
|
||||
|
||||
env:
|
||||
MOLECULE_CP_URL: https://staging-api.moleculesai.app
|
||||
MOLECULE_ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
E2E_MODE: canary
|
||||
E2E_RUNTIME: hermes
|
||||
E2E_RUN_ID: "sanity-${{ github.run_id }}"
|
||||
E2E_INTENTIONAL_FAILURE: "1"
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
|
||||
- name: Verify admin token present
|
||||
run: |
|
||||
if [ -z "$MOLECULE_ADMIN_TOKEN" ]; then
|
||||
echo "::error::MOLECULE_STAGING_ADMIN_TOKEN not set"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
# Inverted assertion: the run MUST fail. If it passes, the
|
||||
# E2E_INTENTIONAL_FAILURE path is broken.
|
||||
- name: Run harness — expecting exit !=0
|
||||
id: harness
|
||||
run: |
|
||||
set +e
|
||||
bash tests/e2e/test_staging_full_saas.sh
|
||||
rc=$?
|
||||
echo "harness_rc=$rc" >> "$GITHUB_OUTPUT"
|
||||
if [ "$rc" = "1" ]; then
|
||||
echo "OK Harness failed as expected (rc=1); teardown trap ran, leak-check passed"
|
||||
exit 0
|
||||
elif [ "$rc" = "0" ]; then
|
||||
echo "::error::Harness succeeded under E2E_INTENTIONAL_FAILURE=1 — the poisoning path is broken"
|
||||
exit 1
|
||||
elif [ "$rc" = "4" ]; then
|
||||
echo "::error::LEAK DETECTED (rc=4) — teardown failed to clean up the org. Safety net broken."
|
||||
exit 4
|
||||
else
|
||||
echo "::error::Unexpected rc=$rc — neither clean-failure nor leak. Investigate harness."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
- name: Open issue if safety net is broken (Gitea API)
|
||||
if: failure()
|
||||
env:
|
||||
GITEA_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
REPO: ${{ github.repository }}
|
||||
SERVER_URL: ${{ env.GITHUB_SERVER_URL }}
|
||||
RUN_ID: ${{ github.run_id }}
|
||||
run: |
|
||||
set -euo pipefail
|
||||
API="${SERVER_URL%/}/api/v1"
|
||||
TITLE="E2E teardown safety net broken"
|
||||
RUN_URL="${SERVER_URL}/${REPO}/actions/runs/${RUN_ID}"
|
||||
|
||||
BODY_JSON=$(jq -nc --arg t "$TITLE" --arg run "$RUN_URL" '
|
||||
{title: $t,
|
||||
body: ("The weekly sanity run (E2E_INTENTIONAL_FAILURE=1) did not exit as expected. This means one of:\n - poisoning did not actually cause failure (test harness regression), OR\n - teardown left an orphan org (leak detection caught a real bug)\n\nRun: " + $run + "\n\nThis is higher priority than a canary failure — the whole E2E safety net cannot be trusted until this is resolved.")}')
|
||||
|
||||
EXISTING=$(curl -fsS -H "Authorization: token $GITEA_TOKEN" \
|
||||
"${API}/repos/${REPO}/issues?state=open&type=issues&limit=50" \
|
||||
| jq -r --arg t "$TITLE" '.[] | select(.title==$t) | .number' | head -1)
|
||||
|
||||
if [ -n "$EXISTING" ]; then
|
||||
curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \
|
||||
"${API}/repos/${REPO}/issues/${EXISTING}/comments" \
|
||||
-d "$(jq -nc --arg run "$RUN_URL" '{body: ("Still broken. " + $run)}')" >/dev/null
|
||||
echo "Commented on existing issue #${EXISTING}"
|
||||
else
|
||||
curl -fsS -X POST -H "Authorization: token $GITEA_TOKEN" -H "Content-Type: application/json" \
|
||||
"${API}/repos/${REPO}/issues" -d "$BODY_JSON" >/dev/null
|
||||
echo "Filed new issue"
|
||||
fi
|
||||
|
||||
# Belt-and-braces: if teardown left anything behind, nuke it here
|
||||
# so we don't bleed staging quota.
|
||||
- name: Teardown safety net
|
||||
if: always()
|
||||
env:
|
||||
ADMIN_TOKEN: ${{ secrets.MOLECULE_STAGING_ADMIN_TOKEN }}
|
||||
run: |
|
||||
set +e
|
||||
orgs=$(curl -sS "$MOLECULE_CP_URL/cp/admin/orgs" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" 2>/dev/null \
|
||||
| python3 -c "
|
||||
import json, sys
|
||||
d = json.load(sys.stdin)
|
||||
today = __import__('datetime').date.today().strftime('%Y%m%d')
|
||||
candidates = [o['slug'] for o in d.get('orgs', [])
|
||||
if o.get('slug','').startswith(f'e2e-canary-{today}-sanity-')
|
||||
and o.get('status') not in ('purged',)]
|
||||
print('\n'.join(candidates))
|
||||
" 2>/dev/null)
|
||||
leaks=()
|
||||
for slug in $orgs; do
|
||||
# Tempfile-routed -w + set +e/-e prevents curl-exit-code
|
||||
# pollution of the captured status (lint-curl-status-capture.yml).
|
||||
set +e
|
||||
curl -sS -o /tmp/sanity-cleanup.out -w "%{http_code}" \
|
||||
-X DELETE "$MOLECULE_CP_URL/cp/admin/tenants/$slug" \
|
||||
-H "Authorization: Bearer $ADMIN_TOKEN" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d "{\"confirm\":\"$slug\"}" >/tmp/sanity-cleanup.code
|
||||
set -e
|
||||
code=$(cat /tmp/sanity-cleanup.code 2>/dev/null || echo "000")
|
||||
if [ "$code" = "200" ] || [ "$code" = "204" ]; then
|
||||
echo "[teardown] deleted $slug (HTTP $code)"
|
||||
else
|
||||
echo "::warning::sanity teardown for $slug returned HTTP $code — sweep-stale-e2e-orgs will catch it within ~45 min. Body: $(head -c 300 /tmp/sanity-cleanup.out 2>/dev/null)"
|
||||
leaks+=("$slug")
|
||||
fi
|
||||
done
|
||||
if [ ${#leaks[@]} -gt 0 ]; then
|
||||
echo "::warning::sanity teardown left ${#leaks[@]} leak(s): ${leaks[*]}"
|
||||
fi
|
||||
exit 0
|
||||
282
.gitea/workflows/handlers-postgres-integration.yml
Normal file
@ -0,0 +1,282 @@
|
||||
name: Handlers Postgres Integration
|
||||
|
||||
# Ported from .github/workflows/handlers-postgres-integration.yml on 2026-05-11 per RFC
|
||||
# internal#219 §1 sweep. Differences from the GitHub version:
|
||||
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
|
||||
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
|
||||
# - Dropped `merge_group:` (no Gitea merge queue).
|
||||
# - Dropped `environment:` blocks (Gitea has no environments).
|
||||
# - Workflow-level env.GITHUB_SERVER_URL pinned per
|
||||
# feedback_act_runner_github_server_url.
|
||||
# - `continue-on-error: true` on each job (RFC §1 contract).
|
||||
#
|
||||
|
||||
# Real-Postgres integration tests for workspace-server/internal/handlers/.
|
||||
# Triggered on every PR/push that touches the handlers package.
|
||||
#
|
||||
# Why this workflow exists
|
||||
# ------------------------
|
||||
# Strict-sqlmock unit tests pin which SQL statements fire — they're fast
|
||||
# and let us iterate without a DB. But sqlmock CANNOT detect bugs that
|
||||
# depend on the row state AFTER the SQL runs. The result_preview-lost
|
||||
# bug shipped to staging in PR #2854 because every unit test was
|
||||
# satisfied with "an UPDATE statement fired" — none verified the row's
|
||||
# preview field actually landed. The local-postgres E2E that caught it
# during retrofit self-review took ~2 minutes to set up and would have
# caught the bug at PR time.
|
||||
#
|
||||
# Why this workflow does NOT use `services: postgres:` (Class B fix)
|
||||
# ------------------------------------------------------------------
|
||||
# Our act_runner config has `container.network: host` (operator host
|
||||
# /opt/molecule/runners/config.yaml), which act_runner applies to BOTH
|
||||
# the job container AND every service container. With host-net, two
|
||||
# concurrent runs of this workflow both try to bind 0.0.0.0:5432 — the
|
||||
# second postgres FATALs with `could not create any TCP/IP sockets:
|
||||
# Address in use`, and Docker auto-removes it (act_runner sets
|
||||
# AutoRemove:true on service containers). By the time the migrations
|
||||
# step runs `psql`, the postgres container is gone, hence
|
||||
# `Connection refused` then `failed to remove container: No such
|
||||
# container` at cleanup time.
|
||||
#
|
||||
# Per-job `container.network` override is silently ignored by
|
||||
# act_runner — `--network and --net in the options will be ignored.`
|
||||
# appears in the runner log. Documented constraint.
|
||||
#
|
||||
# So we sidestep `services:` entirely. The job container still uses
|
||||
# host-net (inherited from runner config; required for cache server
|
||||
# discovery on the bridge IP 172.18.0.17:42631). We launch a sibling
|
||||
# postgres on the existing `molecule-core-net` bridge with a
|
||||
# UNIQUE name per run — `pg-handlers-${RUN_ID}-${RUN_ATTEMPT}` — and
|
||||
# read its bridge IP via `docker inspect`. A host-net job container
|
||||
# can reach a bridge-net container directly via the bridge IP (verified
|
||||
# manually on operator host 2026-05-08).
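#   Illustrative check (container name hypothetical):
#     ip=$(docker inspect pg-handlers-test \
#       --format '{{(index .NetworkSettings.Networks "molecule-core-net").IPAddress}}')
#     pg_isready -h "$ip" -p 5432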
|
||||
#
|
||||
# Trade-offs vs. the original `services:` shape:
|
||||
# + No host-port collision; N parallel runs share the bridge cleanly
|
||||
# + `if: always()` cleanup runs even on test-step failure
|
||||
# - One more step in the workflow (+~3 lines)
|
||||
# - Requires `molecule-core-net` to exist on the operator host
|
||||
# (it does; declared in docker-compose.yml + docker-compose.infra.yml)
|
||||
#
|
||||
# Class B Hongming-owned CICD red sweep, 2026-05-08.
|
||||
#
|
||||
# Cost: ~30s job (postgres pull from cache + go build + 4 tests).
|
||||
|
||||
on:
|
||||
push:
|
||||
branches: [main, staging]
|
||||
pull_request:
|
||||
branches: [main, staging]
|
||||
concurrency:
|
||||
group: handlers-pg-integ-${{ github.event.pull_request.head.sha || github.sha }}
|
||||
cancel-in-progress: false
|
||||
|
||||
env:
|
||||
GITHUB_SERVER_URL: https://git.moleculesai.app
|
||||
|
||||
jobs:
|
||||
detect-changes:
|
||||
name: detect-changes
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
outputs:
|
||||
handlers: ${{ steps.filter.outputs.handlers }}
|
||||
steps:
|
||||
- uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
with:
|
||||
fetch-depth: 0
|
||||
- id: filter
|
||||
# Inline replacement for dorny/paths-filter — see e2e-api.yml.
|
||||
run: |
|
||||
BASE="${GITHUB_BASE_REF:-${{ github.event.before }}}"
|
||||
if [ "${{ github.event_name }}" = "pull_request" ] && [ -n "${{ github.event.pull_request.base.sha }}" ]; then
|
||||
BASE="${{ github.event.pull_request.base.sha }}"
|
||||
fi
|
||||
if [ -z "$BASE" ] || echo "$BASE" | grep -qE '^0+$'; then
|
||||
echo "handlers=true" >> "$GITHUB_OUTPUT"
|
||||
exit 0
|
||||
fi
|
||||
if ! git cat-file -e "$BASE" 2>/dev/null; then
|
||||
git fetch --depth=1 origin "$BASE" 2>/dev/null || true
|
||||
fi
|
||||
if ! git cat-file -e "$BASE" 2>/dev/null; then
|
||||
echo "handlers=true" >> "$GITHUB_OUTPUT"
|
||||
exit 0
|
||||
fi
|
||||
CHANGED=$(git diff --name-only "$BASE" HEAD)
|
||||
if echo "$CHANGED" | grep -qE '^(workspace-server/internal/handlers/|workspace-server/internal/wsauth/|workspace-server/migrations/|\.gitea/workflows/handlers-postgres-integration\.yml$)'; then
|
||||
echo "handlers=true" >> "$GITHUB_OUTPUT"
|
||||
else
|
||||
echo "handlers=false" >> "$GITHUB_OUTPUT"
|
||||
fi
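# e.g. a diff touching only workspace-server/internal/handlers/delegations.go
# (hypothetical path) yields handlers=true; a canvas-only diff yields
# handlers=false and the integration job below no-ops.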
|
||||
|
||||
# Single-job-with-per-step-if pattern: always runs to satisfy the
|
||||
# required-check name on branch protection; real work gates on the
|
||||
# paths filter. See ci.yml's Platform (Go) for the same shape.
|
||||
integration:
|
||||
name: Handlers Postgres Integration
|
||||
needs: detect-changes
|
||||
runs-on: ubuntu-latest
|
||||
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
|
||||
continue-on-error: true
|
||||
env:
|
||||
# Unique name per run so concurrent jobs don't collide on the
|
||||
# bridge network. ${RUN_ID}-${RUN_ATTEMPT} is unique even across
|
||||
# workflow_dispatch reruns of the same run_id.
|
||||
PG_NAME: pg-handlers-${{ github.run_id }}-${{ github.run_attempt }}
|
||||
# Bridge network already exists on the operator host (declared
|
||||
# in docker-compose.yml + docker-compose.infra.yml).
|
||||
PG_NETWORK: molecule-core-net
|
||||
defaults:
|
||||
run:
|
||||
working-directory: workspace-server
|
||||
steps:
|
||||
- if: needs.detect-changes.outputs.handlers != 'true'
|
||||
working-directory: .
|
||||
run: echo "No handlers/migrations changes — skipping; this job always runs to satisfy the required-check name."
|
||||
|
||||
- if: needs.detect-changes.outputs.handlers == 'true'
|
||||
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
|
||||
|
||||
- if: needs.detect-changes.outputs.handlers == 'true'
|
||||
uses: actions/setup-go@40f1582b2485089dde7abd97c1529aa768e1baff # v5
|
||||
with:
|
||||
go-version: 'stable'
|
||||
|
||||
- if: needs.detect-changes.outputs.handlers == 'true'
|
||||
name: Start sibling Postgres on bridge network
|
||||
working-directory: .
|
||||
run: |
|
||||
# Sanity: the bridge network must exist on the operator host.
|
||||
# Hard-fail loud if it doesn't — easier to spot than a silent
|
||||
# auto-create that diverges from the rest of the stack.
|
||||
if ! docker network inspect "${PG_NETWORK}" >/dev/null 2>&1; then
|
||||
echo "::error::Bridge network '${PG_NETWORK}' missing on operator host. Re-run docker-compose.infra.yml or check ops handbook."
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# If a stale container with the same name exists (rerun on
|
||||
# the same run_id), wipe it first.
|
||||
docker rm -f "${PG_NAME}" >/dev/null 2>&1 || true
|
||||
|
||||
docker run -d \
|
||||
--name "${PG_NAME}" \
|
||||
--network "${PG_NETWORK}" \
|
||||
--health-cmd "pg_isready -U postgres" \
|
||||
--health-interval 5s \
|
||||
--health-timeout 5s \
|
||||
--health-retries 10 \
|
||||
-e POSTGRES_PASSWORD=test \
|
||||
-e POSTGRES_DB=molecule \
|
||||
postgres:15-alpine >/dev/null
|
||||
|
||||
# Read back the bridge IP. Always present immediately after
|
||||
# `docker run -d` for bridge networks.
|
||||
PG_HOST=$(docker inspect "${PG_NAME}" \
|
||||
--format "{{(index .NetworkSettings.Networks \"${PG_NETWORK}\").IPAddress}}")
|
||||
if [ -z "${PG_HOST}" ]; then
|
||||
echo "::error::Could not resolve PG_HOST for ${PG_NAME} on ${PG_NETWORK}"
|
||||
docker logs "${PG_NAME}" || true
|
||||
exit 1
|
||||
fi
|
||||
echo "PG_HOST=${PG_HOST}" >> "$GITHUB_ENV"
|
||||
echo "INTEGRATION_DB_URL=postgres://postgres:test@${PG_HOST}:5432/molecule?sslmode=disable" >> "$GITHUB_ENV"
|
||||
echo "Started ${PG_NAME} at ${PG_HOST}:5432"
|
||||
|
||||
- if: needs.detect-changes.outputs.handlers == 'true'
|
||||
name: Apply migrations to Postgres service
|
||||
env:
|
||||
PGPASSWORD: test
|
||||
run: |
|
||||
# Wait for postgres to actually accept connections. Docker's
|
||||
# health-cmd handles container-side readiness, but the wire
|
||||
# to the bridge IP is best-tested with pg_isready directly.
|
||||
for i in {1..15}; do
|
||||
if pg_isready -h "${PG_HOST}" -p 5432 -U postgres -q; then break; fi
|
||||
echo "waiting for postgres at ${PG_HOST}:5432..."; sleep 2
|
||||
done
|
||||
|
||||
# Apply every migration in lexicographic order. Each file runs with
# ON_ERROR_STOP=1, but a failing file is SKIPPED by the loop below
# rather than blocking the suite. This handles the current schema
# state where a few historical migrations (e.g. 017_memories_fts_*)
# depend on tables that were later renamed/dropped and so cannot
# replay from scratch. The migrations that DO succeed land their
# tables, which is sufficient for the integration tests in handlers/.
|
||||
#
|
||||
# Why not maintain a curated allowlist: every new migration
|
||||
# touching a handlers/-tested table would have to update this
|
||||
# workflow. With apply-all-or-skip, a future migration that
|
||||
# adds a column to delegations runs automatically (its base
|
||||
# table 049_delegations.up.sql already succeeded above it in
|
||||
# the order). Operators only need to revisit this if the
|
||||
# migration chain becomes legitimately replayable end-to-end.
|
||||
#
|
||||
# Per-migration result is logged so a failed migration that
|
||||
# SHOULD have been replayable surfaces in the CI log instead
|
||||
# of silently failing.
|
||||
# Apply both *.sql (legacy, lives next to its module) and
|
||||
# *.up.sql (newer up/down convention) in a single
|
||||
# lexicographically-sorted pass. Excluding *.down.sql so the
|
||||
# newest-naming-convention pairs don't undo themselves mid-run.
|
||||
# Before the #149 follow-up this loop only globbed *.up.sql, which
# silently skipped 001_workspaces.sql + 009_activity_logs.sql —
# fine while no integration test depended on those tables, not
# fine once a cross-table atomicity test came in.
|
||||
set +e
|
||||
for migration in $(ls migrations/*.sql 2>/dev/null | grep -v '\.down\.sql$' | sort); do
|
||||
if psql -h "${PG_HOST}" -U postgres -d molecule -v ON_ERROR_STOP=1 \
|
||||
-f "$migration" >/dev/null 2>&1; then
|
||||
echo "✓ $(basename "$migration")"
|
||||
else
|
||||
echo "⊘ $(basename "$migration") (skipped — see comment in workflow)"
|
||||
fi
|
||||
done
|
||||
set -e
|
||||
|
||||
# Sanity: the delegations + workspaces + activity_logs tables
|
||||
# MUST exist for the integration tests to be meaningful. Hard-
|
||||
# fail if any didn't land — that would be a real regression we
|
||||
# want loud.
|
||||
for tbl in delegations workspaces activity_logs pending_uploads; do
|
||||
if ! psql -h "${PG_HOST}" -U postgres -d molecule -tA \
|
||||
-c "SELECT 1 FROM information_schema.tables WHERE table_name = '$tbl'" \
|
||||
| grep -q 1; then
|
||||
echo "::error::$tbl table missing after migration replay — handler integration tests would be meaningless"
|
||||
exit 1
|
||||
fi
|
||||
echo "✓ $tbl table present"
|
||||
done
|
||||
|
||||
- if: needs.detect-changes.outputs.handlers == 'true'
|
||||
name: Run integration tests
|
||||
run: |
|
||||
# INTEGRATION_DB_URL is exported by the start-postgres step;
|
||||
# points at the per-run bridge IP, not 127.0.0.1, so concurrent
|
||||
# workflow runs don't fight over a host-net 5432 port.
|
||||
go test -tags=integration -timeout 5m -v ./internal/handlers/ -run "^TestIntegration_"
|
||||
|
||||
      - if: failure() && needs.detect-changes.outputs.handlers == 'true'
        name: Diagnostic dump on failure
        env:
          PGPASSWORD: test
        run: |
          echo "::group::postgres container status"
          docker ps -a --filter "name=${PG_NAME}" --format '{{.Status}} {{.Names}}' || true
          docker logs "${PG_NAME}" 2>&1 | tail -50 || true
          echo "::endgroup::"
          echo "::group::delegations table state"
          psql -h "${PG_HOST}" -U postgres -d molecule -c "SELECT * FROM delegations LIMIT 50;" || true
          echo "::endgroup::"

      - if: always() && needs.detect-changes.outputs.handlers == 'true'
        name: Stop sibling Postgres
        working-directory: .
        run: |
          # always() so containers don't leak when migrations or tests
          # fail. The cleanup is best-effort: if the container is
          # already gone (e.g. concurrent rerun race), don't fail the job.
          docker rm -f "${PG_NAME}" >/dev/null 2>&1 || true
          echo "Cleaned up ${PG_NAME}"
262
.gitea/workflows/harness-replays.yml
Normal file
@ -0,0 +1,262 @@
name: Harness Replays

# Ported from .github/workflows/harness-replays.yml on 2026-05-11 per RFC
# internal#219 §1 sweep. Differences from the GitHub version:
# - Dropped `workflow_dispatch.inputs` (Gitea 1.22.6 parser rejects them
# per feedback_gitea_workflow_dispatch_inputs_unsupported).
# - Dropped `merge_group:` (no Gitea merge queue).
# - Dropped `environment:` blocks (Gitea has no environments).
# - Workflow-level env.GITHUB_SERVER_URL pinned per
# feedback_act_runner_github_server_url.
# - `continue-on-error: true` on each job (RFC §1 contract).
#

# Boots tests/harness (production-shape compose topology with TenantGuard,
# /cp/* proxy, canvas proxy, real production Dockerfile.tenant) and runs
# every replay under tests/harness/replays/. Fails the PR if any replay
# fails.
#
# Why this exists: 2026-04-30 we shipped #2398 which added /buildinfo as
# a public route in router.go but forgot to add it to TenantGuard's
# allowlist. The handler-level test in buildinfo_test.go constructed a
# minimal gin engine without TenantGuard — green. The harness's
# buildinfo-stale-image.sh replay would have caught it (cf-proxy doesn't
# inject X-Molecule-Org-Id, so the curl path is identical to production's
# redeploy verifier), but no one ran the harness pre-merge. The bug
# shipped; the redeploy verifier silently soft-warned every tenant as
# "unreachable" for ~1 day before being noticed.
#
# This gate makes "did you actually run the harness?" a CI invariant
# instead of a memory-discipline thing.
#
# Trigger model — match e2e-api.yml: always FIRES on push/pull_request
# to staging+main, real work is gated per-step on detect-changes output.
# One job → one check run → branch-protection-clean (the SKIPPED-in-set
# trap from PR #2264 is documented in e2e-api.yml's e2e-api job comment).

on:
  push:
    branches: [main, staging]
    paths:
      - 'workspace-server/**'
      - 'canvas/**'
      - 'tests/harness/**'
      - '.gitea/workflows/harness-replays.yml'
  pull_request:
    branches: [main, staging]
    paths:
      - 'workspace-server/**'
      - 'canvas/**'
      - 'tests/harness/**'
      - '.gitea/workflows/harness-replays.yml'
  # Manual trigger kept (only its `inputs` were dropped; see the header
  # note), so the workflow_dispatch branch in the decide step below stays
  # reachable.
  workflow_dispatch:
concurrency:
  # Per-SHA grouping. Per-ref kept hitting the auto-promote-staging
  # cancellation deadlock — see e2e-api.yml's concurrency block for
  # the 2026-04-28 incident that codified this pattern.
  group: harness-replays-${{ github.event.pull_request.head.sha || github.sha }}
  cancel-in-progress: false

env:
  GITHUB_SERVER_URL: https://git.moleculesai.app

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    # Phase 3 (RFC #219 §1): surface broken workflows without blocking.
    continue-on-error: true
    outputs:
      run: ${{ steps.decide.outputs.run }}
      debug: ${{ steps.decide.outputs.debug }}
    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
      - id: decide
        run: |
          # workflow_dispatch: always run (manual trigger)
          if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
            echo "run=true" >> "$GITHUB_OUTPUT"
            echo "debug=manual-trigger" >> "$GITHUB_OUTPUT"
            exit 0
          fi

          # Determine the base commit to diff against.
          # For pull_request: use base.sha (the merge-base with main/staging).
          # For push: use github.event.before (the previous tip of the branch).
          # Fallback for new branches (all-zeros SHA): run everything.
          if [ "${{ github.event_name }}" = "pull_request" ] && \
             [ -n "${{ github.event.pull_request.base.sha }}" ]; then
            BASE="${{ github.event.pull_request.base.sha }}"
          elif [ -n "${{ github.event.before }}" ] && \
               ! echo "${{ github.event.before }}" | grep -qE '^0+$'; then
            BASE="${{ github.event.before }}"
          else
            # New branch or github.event.before unavailable — run everything.
            echo "run=true" >> "$GITHUB_OUTPUT"
            echo "debug=new-branch-fallback" >> "$GITHUB_OUTPUT"
            exit 0
          fi

          # GitHub Actions and Gitea Actions both expose github.sha for HEAD.
          DIFF=$(git diff --name-only "$BASE" "${{ github.sha }}" 2>/dev/null)
          # Flatten newlines so the multi-file list stays a single key=value
          # line in $GITHUB_OUTPUT.
          echo "debug=diff-base=$BASE diff-files=$(echo "$DIFF" | tr '\n' ' ')" >> "$GITHUB_OUTPUT"

          if echo "$DIFF" | grep -qE '^workspace-server/|^canvas/|^tests/harness/|^\.gitea/workflows/harness-replays\.yml$'; then
            echo "run=true" >> "$GITHUB_OUTPUT"
          else
            echo "run=false" >> "$GITHUB_OUTPUT"
          fi

  # ONE job that always runs. Real work is gated per-step on
  # detect-changes.outputs.run so an unrelated PR (e.g. doc-only
  # change to molecule-controlplane wired here later) emits the
  # required check without spending CI cycles. Single-job pattern
  # matches e2e-api.yml — see that workflow's comment for why a
  # job-level `if: false` would block branch protection via the
  # SKIPPED-in-set bug.
  harness-replays:
    needs: detect-changes
    name: Harness Replays
    runs-on: ubuntu-latest
    # Phase 3 (RFC #219 §1): surface broken workflows without blocking.
    continue-on-error: true
    timeout-minutes: 30
    steps:
      - name: No-op pass (paths filter excluded this commit)
        if: needs.detect-changes.outputs.run != 'true'
        run: |
          echo "No workspace-server / canvas / tests/harness / workflow changes — Harness Replays gate satisfied without running."
          echo "::notice::Harness Replays no-op pass (paths filter excluded this commit)."
          echo "::notice::Debug: ${{ needs.detect-changes.outputs.debug }}"

      - if: needs.detect-changes.outputs.run == 'true'
        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2

      # Log what files were detected so future failures include the diff.
      - name: Log detected changes
        if: needs.detect-changes.outputs.run == 'true'
        run: |
          echo "::notice::detect-changes debug: ${{ needs.detect-changes.outputs.debug }}"

      # github-app-auth sibling-checkout removed 2026-05-07 (#157):
      # the plugin was dropped + Dockerfile.tenant no longer COPYs it.

      # Pre-clone manifest deps before docker compose builds the tenant
      # image (Task #173 followup — same pattern as
      # publish-workspace-server-image.yml's "Pre-clone manifest deps"
      # step).
      #
      # Why pre-clone here too: tests/harness/compose.yml builds tenant-alpha
      # and tenant-beta from workspace-server/Dockerfile.tenant with
      # context=../.. (repo root). That Dockerfile expects
      # .tenant-bundle-deps/{workspace-configs-templates,org-templates,plugins}
      # to be present at build context root (post-#173 it COPYs from there
      # instead of running an in-image clone — the in-image clone failed
      # with "could not read Username for https://git.moleculesai.app"
      # because there's no auth path inside the build sandbox).
      #
      # Without this step harness-replays fails before any replay runs,
      # with `failed to calculate checksum of ref ...
      # "/.tenant-bundle-deps/plugins": not found`. Caught by run #892
      # (main, 2026-05-07T20:28:53Z) and run #964 (staging — same
      # symptom, different root cause: staging still has the in-image
      # clone path, hits the auth error directly).
      #
      # 2026-05-08 sub-finding (#192): the clone step ALSO fails when
      # any referenced workspace-template repo is private and the
      # AUTO_SYNC_TOKEN bearer (devops-engineer persona) lacks read
      # access. Root cause: 5 of 9 workspace-template repos
      # (openclaw, codex, crewai, deepagents, gemini-cli) had been
      # marked private with no team grant. Resolution: flipped them
      # to public per `feedback_oss_first_repo_visibility_default`
      # (the OSS surface should be public). Layer-3 (customer-private +
      # marketplace third-party repos) tracked separately in
      # internal#102.
      #
      # Token shape matches publish-workspace-server-image.yml: AUTO_SYNC_TOKEN
      # is the devops-engineer persona PAT, NOT the founder PAT (per
      # `feedback_per_agent_gitea_identity_default`). clone-manifest.sh
      # embeds it as basic-auth for the duration of the clones and strips
      # .git directories — the token never enters the resulting image.
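      #
      # For orientation, a rough sketch of the clone-manifest.sh contract this
      # step assumes (illustrative only; the clone-URL shape and manifest
      # handling are assumptions, scripts/clone-manifest.sh is the source of
      # truth):
      #
      #   # for each repo listed in manifest.json, shallow-clone it into the
      #   # matching .tenant-bundle-deps/ subdir with the token as basic-auth,
      #   # then strip .git so the credential never reaches the image:
      #   git clone --depth 1 \
      #     "https://oauth2:${MOLECULE_GITEA_TOKEN}@git.moleculesai.app/<org>/<repo>.git" "$dest"
      #   rm -rf "$dest/.git"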
      - name: Pre-clone manifest deps
        if: needs.detect-changes.outputs.run == 'true'
        env:
          MOLECULE_GITEA_TOKEN: ${{ secrets.AUTO_SYNC_TOKEN }}
        run: |
          set -euo pipefail
          if [ -z "${MOLECULE_GITEA_TOKEN}" ]; then
            echo "::error::AUTO_SYNC_TOKEN secret is empty — register the devops-engineer persona PAT in repo Actions secrets"
            exit 1
          fi
          mkdir -p .tenant-bundle-deps
          bash scripts/clone-manifest.sh \
            manifest.json \
            .tenant-bundle-deps/workspace-configs-templates \
            .tenant-bundle-deps/org-templates \
            .tenant-bundle-deps/plugins
          # Sanity-check counts so a silent partial clone fails fast
          # instead of producing a half-empty image.
          ws_count=$(find .tenant-bundle-deps/workspace-configs-templates -mindepth 1 -maxdepth 1 -type d | wc -l)
          org_count=$(find .tenant-bundle-deps/org-templates -mindepth 1 -maxdepth 1 -type d | wc -l)
          plugins_count=$(find .tenant-bundle-deps/plugins -mindepth 1 -maxdepth 1 -type d | wc -l)
          echo "Cloned: ws=$ws_count org=$org_count plugins=$plugins_count"
          if [ "$ws_count" -eq 0 ] || [ "$org_count" -eq 0 ] || [ "$plugins_count" -eq 0 ]; then
            echo "::error::Pre-clone produced an empty deps directory (ws=$ws_count org=$org_count plugins=$plugins_count)"
            exit 1
          fi

      - name: Install Python deps for replays
        # peer-discovery-404 (and future replays) eval Python against the
        # running tenant — importing workspace/a2a_client.py pulls in
        # httpx. tests/harness/requirements.txt holds just the HTTP-client
        # surface to keep CI install fast (~3s) vs the full
        # workspace/requirements.txt (~30s).
        if: needs.detect-changes.outputs.run == 'true'
        run: pip install -r tests/harness/requirements.txt

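      # (For reference, tests/harness/requirements.txt is assumed to be little
      # more than a pinned `httpx` line, i.e. just the HTTP-client surface the
      # comment above describes; anything heavier belongs in
      # workspace/requirements.txt.)
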
      - name: Run all replays against the harness
        # run-all-replays.sh: boot via up.sh → seed via seed.sh → run
        # every replays/*.sh → tear down via down.sh on EXIT (trap).
        # Non-zero exit on any replay failure.
        #
        # KEEP_UP=1: without this, the script's trap-on-EXIT tears
        # down containers immediately on failure, leaving the dump
        # step below with nothing to dump (verified on PR #2410's
        # first run — tenant became unhealthy, trap fired, dump
        # step saw empty containers). Keeping them up lets the
        # failure path collect tenant/cp-stub/cf-proxy logs. The
        # always-run "Force teardown" step does the actual cleanup.
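        #
        # A rough sketch of that contract (illustrative only; the script in
        # tests/harness is the source of truth, and anything beyond
        # up.sh / seed.sh / down.sh / replays/*.sh named here is an assumption):
        #
        #   ./up.sh && ./seed.sh
        #   [ "${KEEP_UP:-0}" = "1" ] || trap './down.sh' EXIT
        #   rc=0
        #   for replay in replays/*.sh; do bash "$replay" || rc=1; done
        #   exit "$rc"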
        if: needs.detect-changes.outputs.run == 'true'
        working-directory: tests/harness
        env:
          KEEP_UP: "1"
        run: ./run-all-replays.sh

      - name: Dump compose logs on failure
        # SECRETS_ENCRYPTION_KEY: docker compose validates the entire compose
        # file even for read-only `logs` calls. up.sh generates a per-run key
        # and exports it to its OWN shell — this step runs in a fresh shell
        # that wouldn't see it, so without a placeholder the validate step
        # errors before logs print (verified against PR #2492's first run:
        # "required variable SECRETS_ENCRYPTION_KEY is missing a value").
        # A placeholder is fine — we're only reading log streams, not booting.
        if: failure() && needs.detect-changes.outputs.run == 'true'
        working-directory: tests/harness
        env:
          SECRETS_ENCRYPTION_KEY: dump-logs-placeholder
        run: |
          echo "=== docker compose ps ==="
          docker compose -f compose.yml ps || true
          echo "=== tenant-alpha logs ==="
          docker compose -f compose.yml logs tenant-alpha || true
          echo "=== tenant-beta logs ==="
          docker compose -f compose.yml logs tenant-beta || true
          echo "=== cp-stub logs ==="
          docker compose -f compose.yml logs cp-stub || true
          echo "=== cf-proxy logs ==="
          docker compose -f compose.yml logs cf-proxy || true
          echo "=== postgres-alpha logs (last 100) ==="
          docker compose -f compose.yml logs --tail 100 postgres-alpha || true
          echo "=== postgres-beta logs (last 100) ==="
          docker compose -f compose.yml logs --tail 100 postgres-beta || true

      - name: Force teardown
        # We pass KEEP_UP=1 to run-all-replays.sh so the dump step
        # above sees real containers — that means we own teardown
        # explicitly here. Always run.
        if: always() && needs.detect-changes.outputs.run == 'true'
        working-directory: tests/harness
        run: ./down.sh || true