ci: auto deploy production tenants after green main
Some checks failed
sop-checklist / all-items-acked (pull_request) [info tier:low] acked: 0/7 — missing: comprehensive-testing, local-postgres-e2e, staging-smoke, +4
Block internal-flavored paths / Block forbidden paths (pull_request) Successful in 24s
CI / Detect changes (pull_request) Successful in 56s
Harness Replays / detect-changes (pull_request) Successful in 18s
E2E API Smoke Test / detect-changes (pull_request) Successful in 1m1s
Lint curl status-code capture / Scan workflows for curl status-capture pollution (pull_request) Successful in 19s
E2E Staging Canvas (Playwright) / detect-changes (pull_request) Successful in 1m1s
Handlers Postgres Integration / detect-changes (pull_request) Successful in 54s
Secret scan / Scan diff for credential-shaped strings (pull_request) Successful in 16s
Runtime PR-Built Compatibility / detect-changes (pull_request) Successful in 44s
qa-review / approved (pull_request) Failing after 14s
lint-required-no-paths / lint-required-no-paths (pull_request) Successful in 1m18s
security-review / approved (pull_request) Failing after 13s
Lint workflow YAML (Gitea-1.22.6-hostile shapes) / Lint workflow YAML for Gitea-1.22.6-hostile shapes (pull_request) Successful in 1m34s
lint-continue-on-error-tracking / lint-continue-on-error-tracking (pull_request) Successful in 2m4s
sop-tier-check / tier-check (pull_request) Successful in 19s
sop-checklist-gate / gate (pull_request) Successful in 20s
lint-required-context-exists-in-bp / lint-required-context-exists-in-bp (pull_request) Failing after 1m55s
gate-check-v3 / gate-check (pull_request) Failing after 25s
Lint pre-flip continue-on-error / Verify continue-on-error flips have run-log proof (pull_request) Successful in 2m9s
Ops Scripts Tests / Ops scripts (unittest) (pull_request) Successful in 1m21s
CI / Platform (Go) (pull_request) Successful in 5s
CI / Shellcheck (E2E scripts) (pull_request) Successful in 3s
CI / Python Lint & Test (pull_request) Successful in 4s
Harness Replays / Harness Replays (pull_request) Successful in 5s
E2E API Smoke Test / E2E API Smoke Test (pull_request) Successful in 4s
Handlers Postgres Integration / Handlers Postgres Integration (pull_request) Successful in 3s
Runtime PR-Built Compatibility / PR-built wheel + import smoke (pull_request) Successful in 5s
E2E Staging Canvas (Playwright) / Canvas tabs E2E (pull_request) Successful in 9m10s
CI / Canvas (Next.js) (pull_request) Successful in 13m55s
CI / Canvas Deploy Reminder (pull_request) Has been skipped
CI / all-required (pull_request) Successful in 4s

This commit is contained in:
hongming-codex-laptop 2026-05-13 02:36:16 -07:00
parent 98fe199de4
commit cb7bfe06a9
8 changed files with 961 additions and 88 deletions

View File

@@ -29,6 +29,13 @@ Rules (4 fatal + 1 fatal cross-file + 1 heuristic-warn):
or `https://github.com/.../releases/download` without a
workflow-level `env.GITHUB_SERVER_URL` set to the Gitea instance.
Memory: feedback_act_runner_github_server_url.
7. Production deploy/redeploy workflows may not rely on Gitea
`concurrency.cancel-in-progress: false` for serialization. Gitea
1.22.6 can cancel queued runs despite that setting.
8. Production deploy/redeploy workflows may not dump raw CP responses or
raw `.error` fields into CI logs/summaries.
9. Production deploy/redeploy workflows must expose an operational control:
kill switch for auto deploys or rollback tag for manual deploys.
Per `feedback_smoke_test_vendor_truth_not_shape_match`: fixtures used to
validate this lint must mirror real Gitea 1.22.6 YAML semantics, not
@@ -255,6 +262,19 @@ GITHUB_API_REF_RE = re.compile(
)
PROD_CP_URL_RE = re.compile(r"https://api\.moleculesai\.app\b")
REDEPLOY_FLEET_RE = re.compile(r"\b/cp/admin/tenants/redeploy-fleet\b")
RAW_CP_RESPONSE_RE = re.compile(
r"""(?x)
(?:\bjq\s+\.\s+["']?\$HTTP_RESPONSE["']?)
|
(?:\bcat\s+["']?\$HTTP_RESPONSE["']?)
|
(?:\|\s*\.error\b)
"""
)
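As a quick sanity check (a sketch; the pattern is copied verbatim from the diff above), the alternation flags raw dumps of the CP response file and piped `.error` access, while a structured `jq` projection passes:

```python
import re

# Copied from RAW_CP_RESPONSE_RE above: matches raw pretty-prints/dumps of
# the saved CP response and bare `| .error` pipes, not redacted summaries.
RAW_CP_RESPONSE_RE = re.compile(
    r"""(?x)
    (?:\bjq\s+\.\s+["']?\$HTTP_RESPONSE["']?)
    |
    (?:\bcat\s+["']?\$HTTP_RESPONSE["']?)
    |
    (?:\|\s*\.error\b)
    """
)

flagged = [
    'jq . "$HTTP_RESPONSE"',                  # raw pretty-print of the whole body
    'cat "$HTTP_RESPONSE"',                   # raw dump
    "jq -r '.results[] | .error' resp.json",  # piped .error access
]
allowed = [
    "jq '{ok, result_count: (.results // [] | length)}' \"$HTTP_RESPONSE\"",
]
assert all(RAW_CP_RESPONSE_RE.search(s) for s in flagged)
assert not any(RAW_CP_RESPONSE_RE.search(s) for s in allowed)
```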
def _has_workflow_level_server_url(doc: Any) -> bool:
if not isinstance(doc, dict):
return False
@@ -286,6 +306,83 @@ def check_github_server_url_missing(filename: str, doc: Any, raw: str) -> list[str]:
return warns
# ---------------------------------------------------------------------------
# Rule 7-9 — production CI/CD hardening rules
# ---------------------------------------------------------------------------
def _is_production_redeploy_workflow(raw: str) -> bool:
"""Heuristic production-side-effect detector.
We intentionally key on the production CP host plus the redeploy-fleet
endpoint. Staging workflows call the same endpoint on staging-api and are
governed by looser staging verification policy.
"""
return bool(PROD_CP_URL_RE.search(raw) and REDEPLOY_FLEET_RE.search(raw))
def _iter_concurrency_blocks(doc: Any) -> Iterable[dict[str, Any]]:
if not isinstance(doc, dict):
return
top = doc.get("concurrency")
if isinstance(top, dict):
yield top
jobs = doc.get("jobs")
if not isinstance(jobs, dict):
return
for job in jobs.values():
if isinstance(job, dict) and isinstance(job.get("concurrency"), dict):
yield job["concurrency"]
def check_production_concurrency(filename: str, doc: Any, raw: str) -> list[str]:
errors: list[str] = []
if not _is_production_redeploy_workflow(raw):
return errors
for block in _iter_concurrency_blocks(doc):
if block.get("cancel-in-progress") is False:
errors.append(
f"::error file={filename}::Rule 7 (FATAL): production deploy "
f"workflow uses `concurrency.cancel-in-progress: false`. "
f"Gitea 1.22.6 can cancel queued runs despite that setting, "
f"so this is not a safe production serialization primitive. "
f"Use an external queue/lock or make the deploy idempotent."
)
return errors
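A minimal illustration of what the walker yields, with plain dicts standing in for parsed YAML (the helper is re-sketched here so the snippet is self-contained):

```python
from typing import Any, Iterable


def iter_concurrency_blocks(doc: Any) -> Iterable[dict]:
    # Sketch of _iter_concurrency_blocks above: the workflow-level block
    # first, then each job-level block.
    if not isinstance(doc, dict):
        return
    if isinstance(doc.get("concurrency"), dict):
        yield doc["concurrency"]
    jobs = doc.get("jobs")
    if not isinstance(jobs, dict):
        return
    for job in jobs.values():
        if isinstance(job, dict) and isinstance(job.get("concurrency"), dict):
            yield job["concurrency"]


doc = {
    "concurrency": {"group": "deploy", "cancel-in-progress": False},
    "jobs": {"deploy": {"concurrency": {"group": "deploy-job"}}},
}
blocks = list(iter_concurrency_blocks(doc))
# Rule 7 fires on the first block: cancel-in-progress is explicitly False.
assert len(blocks) == 2
assert blocks[0].get("cancel-in-progress") is False
```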
def check_production_raw_response_logging(filename: str, raw: str) -> list[str]:
errors: list[str] = []
if not _is_production_redeploy_workflow(raw):
return errors
if RAW_CP_RESPONSE_RE.search(raw):
errors.append(
f"::error file={filename}::Rule 8 (FATAL): production deploy "
f"workflow appears to print a raw production CP response or raw "
f"`.error` field. CI logs are persistent and broad-read. Redact "
f"runtime/SSM error details; print counts, booleans, status "
f"codes, and links to restricted observability instead."
)
return errors
def check_production_operational_control(filename: str, raw: str) -> list[str]:
errors: list[str] = []
if not _is_production_redeploy_workflow(raw):
return errors
has_kill_switch = "PROD_AUTO_DEPLOY_DISABLED" in raw
has_rollback = "PROD_MANUAL_REDEPLOY_TARGET_TAG" in raw
if not (has_kill_switch or has_rollback):
errors.append(
f"::error file={filename}::Rule 9 (FATAL): production deploy "
f"workflow calls redeploy-fleet without an operational control. "
f"Auto deploys need a `PROD_AUTO_DEPLOY_DISABLED` kill switch; "
f"manual deploys need a `PROD_MANUAL_REDEPLOY_TARGET_TAG` "
f"rollback/pin path."
)
return errors
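The rule-9 check reduces to a plain substring test over the raw workflow text; assuming the marker names from the diff, either control satisfies it:

```python
def has_operational_control(raw: str) -> bool:
    # Mirrors check_production_operational_control above: either the
    # auto-deploy kill switch or the manual rollback/pin variable must
    # appear somewhere in the workflow source.
    return (
        "PROD_AUTO_DEPLOY_DISABLED" in raw
        or "PROD_MANUAL_REDEPLOY_TARGET_TAG" in raw
    )


assert has_operational_control(
    "env:\n  PROD_AUTO_DEPLOY_DISABLED: ${{ vars.PROD_AUTO_DEPLOY_DISABLED }}"
)
assert not has_operational_control(
    "run: curl https://api.moleculesai.app/cp/admin/tenants/redeploy-fleet"
)
```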
# ---------------------------------------------------------------------------
# Driver
# ---------------------------------------------------------------------------
@@ -336,6 +433,9 @@ def main(argv: list[str] | None = None) -> int:
fatal_errors.extend(check_workflow_run_event(rel, doc))
fatal_errors.extend(check_name_with_slash(rel, doc))
fatal_errors.extend(check_cross_repo_uses(rel, doc))
fatal_errors.extend(check_production_concurrency(rel, doc, raw))
fatal_errors.extend(check_production_raw_response_logging(rel, raw))
fatal_errors.extend(check_production_operational_control(rel, raw))
warnings.extend(check_github_server_url_missing(rel, doc, raw))
# Cross-file checks

View File

@@ -0,0 +1,251 @@
#!/usr/bin/env python3
"""Production auto-deploy helpers for Gitea Actions.
The workflow keeps network side effects in shell/curl, but centralizes the
release decision shape here so it has unit coverage: disable flag parsing,
target tag selection, CP payload construction, and status-context selection.
"""
from __future__ import annotations
import argparse
import json
import os
import sys
import time
import urllib.error
import urllib.request
from urllib.parse import quote
TRUE_VALUES = {"1", "true", "yes", "on", "disabled", "disable"}
PROD_CP_URL = "https://api.moleculesai.app"
DEFAULT_REQUIRED_CONTEXTS = [
"CI / Platform (Go) (push)",
"CI / Canvas (Next.js) (push)",
"CI / Shellcheck (E2E scripts) (push)",
"CI / Python Lint & Test (push)",
"CI / all-required (push)",
"Secret scan / Scan diff for credential-shaped strings (push)",
]
TERMINAL_FAILURE_STATES = {"failure", "error", "cancelled", "canceled", "skipped"}
def truthy_flag(value: str | None) -> bool:
if value is None:
return False
return value.strip().lower() in TRUE_VALUES
def _int_env(env: dict[str, str], name: str, default: int, minimum: int = 1) -> int:
raw = env.get(name, "")
if not raw:
return default
try:
value = int(raw)
except ValueError as exc:
raise ValueError(f"{name} must be an integer, got {raw!r}") from exc
if value < minimum:
raise ValueError(f"{name} must be >= {minimum}, got {value}")
return value
def build_plan(env: dict[str, str]) -> dict:
sha = env.get("GITHUB_SHA", "").strip()
if not sha:
raise ValueError("GITHUB_SHA is required")
disabled_value = env.get("PROD_AUTO_DEPLOY_DISABLED", "")
if truthy_flag(disabled_value):
return {
"enabled": False,
"sha": sha,
"disabled_reason": f"PROD_AUTO_DEPLOY_DISABLED={disabled_value}",
}
short_sha = sha[:7]
target_tag = env.get("PROD_AUTO_DEPLOY_TARGET_TAG", "").strip() or f"staging-{short_sha}"
canary_slug = env.get("PROD_AUTO_DEPLOY_CANARY_SLUG", "hongming").strip()
body = {
"target_tag": target_tag,
"soak_seconds": _int_env(env, "PROD_AUTO_DEPLOY_SOAK_SECONDS", 60, minimum=0),
"batch_size": _int_env(env, "PROD_AUTO_DEPLOY_BATCH_SIZE", 3),
"dry_run": truthy_flag(env.get("PROD_AUTO_DEPLOY_DRY_RUN", "")),
}
if canary_slug:
body["canary_slug"] = canary_slug
cp_url = env.get("CP_URL", "").strip() or PROD_CP_URL
if cp_url != PROD_CP_URL and not truthy_flag(env.get("PROD_ALLOW_NON_PROD_CP_URL", "")):
raise ValueError(
f"Refusing production deploy to CP_URL={cp_url!r}; "
f"set PROD_ALLOW_NON_PROD_CP_URL=true for an explicit non-prod drill"
)
return {
"enabled": True,
"sha": sha,
"short_sha": short_sha,
"target_tag": target_tag,
"cp_url": cp_url,
"body": body,
}
def latest_status_for_context(statuses: list[dict], context: str) -> dict | None:
"""Return the first matching status.
Gitea's combined-status response is newest-first in practice. The merge
queue relies on the same contract; keeping the selector explicit makes
stale duplicate contexts easy to test.
"""
for status in statuses:
if status.get("context") == context:
return status
return None
def ci_context_state(statuses: list[dict], context: str) -> str:
status = latest_status_for_context(statuses, context)
if not status:
return "missing"
return str(status.get("status") or status.get("state") or "missing").lower()
def context_is_satisfied(state: str) -> bool:
return state == "success"
def context_is_terminal_failure(state: str) -> bool:
return state in TERMINAL_FAILURE_STATES
def required_contexts(env: dict[str, str]) -> list[str]:
raw = env.get("PROD_AUTO_DEPLOY_REQUIRED_CONTEXTS", "")
if not raw.strip():
return DEFAULT_REQUIRED_CONTEXTS
return [line.strip() for line in raw.replace(",", "\n").splitlines() if line.strip()]
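The override accepts commas or newlines interchangeably as separators; re-running the parsing expression on both shapes (a self-contained sketch of the normalization above):

```python
def parse_contexts(raw: str) -> list[str]:
    # Same normalization as required_contexts above: commas become
    # newlines, then entries are trimmed and blanks dropped.
    return [line.strip() for line in raw.replace(",", "\n").splitlines() if line.strip()]


comma = "CI / all-required (push), Secret scan / Scan diff for credential-shaped strings (push)"
newline = "CI / all-required (push)\nSecret scan / Scan diff for credential-shaped strings (push)\n"
assert parse_contexts(comma) == parse_contexts(newline)
assert parse_contexts(comma)[0] == "CI / all-required (push)"
```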
def _api_json(url: str, token: str) -> dict:
req = urllib.request.Request(url, headers={"Authorization": f"token {token}"})
try:
with urllib.request.urlopen(req, timeout=20) as resp:
return json.loads(resp.read())
except urllib.error.HTTPError as exc:
body = exc.read().decode("utf-8", errors="replace")[:500]
raise RuntimeError(f"GET {url} -> HTTP {exc.code}: {body}") from exc
def _api_json_optional(url: str, token: str) -> tuple[int, dict | None]:
req = urllib.request.Request(url, headers={"Authorization": f"token {token}"})
try:
with urllib.request.urlopen(req, timeout=20) as resp:
return resp.status, json.loads(resp.read())
except urllib.error.HTTPError as exc:
if exc.code == 404:
return exc.code, None
body = exc.read().decode("utf-8", errors="replace")[:300]
print(f"::warning::GET {url} -> HTTP {exc.code}: {body}", file=sys.stderr)
return exc.code, None
def live_disable_flag(env: dict[str, str]) -> str:
"""Return a live disable value from Gitea variables when readable.
Gitea evaluates `${{ vars.* }}` once when the job starts. This API read is
the emergency re-check immediately before production side effects.
"""
token = env.get("GITEA_TOKEN", "").strip()
if not token:
return ""
host = env.get("GITEA_HOST", "git.moleculesai.app")
repo = env.get("GITHUB_REPOSITORY", "molecule-ai/molecule-core")
variable = quote("PROD_AUTO_DEPLOY_DISABLED", safe="")
url = f"https://{host}/api/v1/repos/{repo}/actions/variables/{variable}"
status, body = _api_json_optional(url, token)
if status != 200 or not isinstance(body, dict):
return ""
return str(body.get("data") or body.get("value") or "")
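The `data`/`value` fallback above hedges across Gitea API response shapes; extracted as a standalone helper for illustration (an assumption mirrored from the diff, not a documented contract):

```python
def extract_variable_value(body: object) -> str:
    # The actions-variable endpoint has been observed returning the value
    # under either "data" or "value"; accept both, default to "".
    if not isinstance(body, dict):
        return ""
    return str(body.get("data") or body.get("value") or "")


assert extract_variable_value({"data": "true"}) == "true"
assert extract_variable_value({"value": "1"}) == "1"
assert extract_variable_value(None) == ""
```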
def assert_not_disabled(env: dict[str, str]) -> None:
plan = build_plan(env)
if not plan.get("enabled"):
raise RuntimeError(plan.get("disabled_reason", "production auto-deploy disabled"))
live_value = live_disable_flag(env)
if truthy_flag(live_value):
raise RuntimeError(f"PROD_AUTO_DEPLOY_DISABLED={live_value} (live Gitea variable)")
def wait_for_ci_context(env: dict[str, str]) -> str:
host = env.get("GITEA_HOST", "git.moleculesai.app")
repo = env.get("GITHUB_REPOSITORY", "molecule-ai/molecule-core")
sha = env.get("GITHUB_SHA", "").strip()
token = env.get("GITEA_TOKEN", "").strip()
contexts = required_contexts(env)
interval = _int_env(env, "CI_STATUS_POLL_INTERVAL_SECONDS", 15)
timeout = _int_env(env, "CI_STATUS_TIMEOUT_SECONDS", 1800)
if not sha:
raise ValueError("GITHUB_SHA is required")
if not token:
raise ValueError("GITEA_TOKEN is required to wait for CI status")
url = f"https://{host}/api/v1/repos/{repo}/commits/{sha}/status"
deadline = time.time() + timeout
last_states: dict[str, str] = {}
while time.time() <= deadline:
body = _api_json(url, token)
statuses = body.get("statuses") or []
states = {context: ci_context_state(statuses, context) for context in contexts}
for context, state in states.items():
if state != last_states.get(context):
print(f"CI context {context!r}: {state}", file=sys.stderr)
last_states = states
failures = [
f"{context}={state}"
for context, state in states.items()
if context_is_terminal_failure(state)
]
if failures:
raise RuntimeError(
"Required CI context failed; refusing production deploy: "
+ ", ".join(failures)
)
if all(context_is_satisfied(state) for state in states.values()):
return "success"
time.sleep(interval)
last = ", ".join(f"{context}={state}" for context, state in last_states.items()) or "none"
raise TimeoutError(f"Timed out waiting {timeout}s for required CI contexts; last_states={last}")
def main() -> int:
parser = argparse.ArgumentParser(description=__doc__)
sub = parser.add_subparsers(dest="command", required=True)
sub.add_parser("plan", help="print production deploy plan as JSON")
sub.add_parser("assert-enabled", help="fail if production deploy is currently disabled")
sub.add_parser("wait-ci", help="block until required CI context is green")
args = parser.parse_args()
try:
if args.command == "plan":
print(json.dumps(build_plan(dict(os.environ)), sort_keys=True))
return 0
if args.command == "assert-enabled":
assert_not_disabled(dict(os.environ))
return 0
if args.command == "wait-ci":
wait_for_ci_context(dict(os.environ))
return 0
except Exception as exc: # noqa: BLE001 - CLI should render operator-friendly errors.
print(f"::error::{exc}", file=sys.stderr)
return 1
return 2
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -0,0 +1,120 @@
import importlib.util
import sys
from pathlib import Path
SCRIPT = Path(__file__).resolve().parents[1] / "prod-auto-deploy.py"
spec = importlib.util.spec_from_file_location("prod_auto_deploy", SCRIPT)
prod = importlib.util.module_from_spec(spec)
sys.modules[spec.name] = prod
spec.loader.exec_module(prod)
def test_truthy_flag_accepts_operator_disable_values():
for value in ("1", "true", "TRUE", "yes", "on", "disabled", "disable"):
assert prod.truthy_flag(value) is True
for value in ("", "0", "false", "no", "off", None):
assert prod.truthy_flag(value) is False
def test_build_plan_defaults_to_staging_sha_target_and_prod_cp():
plan = prod.build_plan(
{
"GITHUB_SHA": "abcdef1234567890",
"PROD_AUTO_DEPLOY_DISABLED": "",
}
)
assert plan["enabled"] is True
assert plan["sha"] == "abcdef1234567890"
assert plan["target_tag"] == "staging-abcdef1"
assert plan["cp_url"] == "https://api.moleculesai.app"
assert plan["body"] == {
"target_tag": "staging-abcdef1",
"canary_slug": "hongming",
"soak_seconds": 60,
"batch_size": 3,
"dry_run": False,
}
def test_build_plan_rejects_non_prod_cp_without_explicit_override():
try:
prod.build_plan(
{
"GITHUB_SHA": "abcdef1234567890",
"CP_URL": "https://staging-api.moleculesai.app",
}
)
except ValueError as exc:
assert "PROD_ALLOW_NON_PROD_CP_URL=true" in str(exc)
else:
raise AssertionError("expected non-prod CP URL rejection")
def test_build_plan_allows_non_prod_cp_only_with_override():
plan = prod.build_plan(
{
"GITHUB_SHA": "abcdef1234567890",
"CP_URL": "https://staging-api.moleculesai.app",
"PROD_ALLOW_NON_PROD_CP_URL": "true",
}
)
assert plan["cp_url"] == "https://staging-api.moleculesai.app"
def test_build_plan_disable_flag_short_circuits_before_credentials():
plan = prod.build_plan(
{
"GITHUB_SHA": "abcdef1234567890",
"PROD_AUTO_DEPLOY_DISABLED": "true",
}
)
assert plan["enabled"] is False
assert plan["disabled_reason"] == "PROD_AUTO_DEPLOY_DISABLED=true"
def test_latest_status_for_context_uses_first_matching_status():
statuses = [
{"context": "CI / all-required (push)", "status": "pending"},
{"context": "CI / all-required (pull_request)", "status": "success"},
{"context": "CI / all-required (push)", "status": "success"},
]
latest = prod.latest_status_for_context(statuses, "CI / all-required (push)")
assert latest == {"context": "CI / all-required (push)", "status": "pending"}
def test_ci_context_state_handles_missing_and_gitea_status_key():
assert prod.ci_context_state([], "CI / all-required (push)") == "missing"
assert (
prod.ci_context_state(
[{"context": "CI / all-required (push)", "status": "success"}],
"CI / all-required (push)",
)
== "success"
)
assert (
prod.ci_context_state(
[{"context": "CI / all-required (push)", "state": "failure"}],
"CI / all-required (push)",
)
== "failure"
)
def test_context_is_satisfied_accepts_only_success():
assert prod.context_is_satisfied("success") is True
for state in ("failure", "error", "cancelled", "canceled", "skipped", "pending", "missing"):
assert prod.context_is_satisfied(state) is False
def test_context_is_terminal_failure_treats_cancelled_and_skipped_as_failures():
for state in ("failure", "error", "cancelled", "canceled", "skipped"):
assert prod.context_is_terminal_failure(state) is True
for state in ("pending", "missing", "success"):
assert prod.context_is_terminal_failure(state) is False

View File

@@ -18,6 +18,13 @@ name: publish-workspace-server-image
# :staging-<sha> — per-commit digest, stable for canary verify
# :staging-latest — tracks most recent build on this branch
#
# Production auto-deploy:
# After both platform and tenant images are pushed, deploy-production waits
# for strict required push contexts on the same SHA to go green, then
# calls the production CP redeploy-fleet endpoint with target_tag=
# staging-<sha>. Set repo variable or secret PROD_AUTO_DEPLOY_DISABLED=true
# to stop production rollout while keeping image publishing enabled.
#
# ECR target: 153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/*
# Required secrets: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AUTO_SYNC_TOKEN
#
@@ -38,15 +45,10 @@ on:
- '.gitea/workflows/publish-workspace-server-image.yml'
workflow_dispatch:
# Serialize per-branch so two rapid main pushes don't race the same
# :staging-latest tag retag. Allow parallel runs as they produce
# different :staging-<sha> tags and last-write-wins on :staging-latest.
#
# cancel-in-progress: false → in-flight builds finish; the next push's
# build queues. This avoids a partially-pushed image.
concurrency:
group: publish-workspace-server-image-${{ github.ref }}
cancel-in-progress: false
# No `concurrency:` block here. Gitea 1.22.6 can cancel queued runs despite
# `cancel-in-progress: false`; that is not acceptable for a workflow with a
# production deploy job. Per-SHA image tags are immutable, and staging-latest is
# best-effort last-writer-wins metadata.
permissions:
contents: read
@@ -175,3 +177,172 @@ jobs:
--tag "${TENANT_IMAGE_NAME}:${TAG_SHA}" \
--tag "${TENANT_IMAGE_NAME}:${TAG_LATEST}" \
--push .
deploy-production:
name: Production auto-deploy
needs: build-and-push
if: ${{ github.event_name == 'push' && github.ref == 'refs/heads/main' }}
runs-on: ubuntu-latest
timeout-minutes: 75
env:
CP_URL: ${{ vars.PROD_CP_URL || 'https://api.moleculesai.app' }}
CP_ADMIN_API_TOKEN: ${{ secrets.CP_ADMIN_API_TOKEN }}
GITEA_HOST: git.moleculesai.app
GITEA_TOKEN: ${{ secrets.PROD_AUTO_DEPLOY_CONTROL_TOKEN || secrets.AUTO_SYNC_TOKEN }}
PROD_AUTO_DEPLOY_DISABLED: ${{ vars.PROD_AUTO_DEPLOY_DISABLED || secrets.PROD_AUTO_DEPLOY_DISABLED || '' }}
PROD_AUTO_DEPLOY_CANARY_SLUG: ${{ vars.PROD_AUTO_DEPLOY_CANARY_SLUG || 'hongming' }}
PROD_AUTO_DEPLOY_SOAK_SECONDS: ${{ vars.PROD_AUTO_DEPLOY_SOAK_SECONDS || '60' }}
PROD_AUTO_DEPLOY_BATCH_SIZE: ${{ vars.PROD_AUTO_DEPLOY_BATCH_SIZE || '3' }}
PROD_AUTO_DEPLOY_DRY_RUN: ${{ vars.PROD_AUTO_DEPLOY_DRY_RUN || '' }}
PROD_ALLOW_NON_PROD_CP_URL: ${{ vars.PROD_ALLOW_NON_PROD_CP_URL || '' }}
steps:
- name: Checkout
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Build deploy plan
id: plan
run: |
set -euo pipefail
python3 .gitea/scripts/prod-auto-deploy.py plan > "$RUNNER_TEMP/prod-auto-deploy-plan.json"
jq . "$RUNNER_TEMP/prod-auto-deploy-plan.json"
enabled="$(jq -r '.enabled' "$RUNNER_TEMP/prod-auto-deploy-plan.json")"
echo "enabled=$enabled" >> "$GITHUB_OUTPUT"
if [ "$enabled" != "true" ]; then
reason="$(jq -r '.disabled_reason' "$RUNNER_TEMP/prod-auto-deploy-plan.json")"
echo "::notice::Production auto-deploy disabled: $reason"
{
echo "## Production auto-deploy skipped"
echo ""
echo "Reason: \`$reason\`"
} >> "$GITHUB_STEP_SUMMARY"
exit 0
fi
if [ -z "${CP_ADMIN_API_TOKEN:-}" ]; then
echo "::error::CP_ADMIN_API_TOKEN secret is required for production auto-deploy."
exit 1
fi
if [ -z "${GITEA_TOKEN:-}" ]; then
echo "::error::A Gitea token (PROD_AUTO_DEPLOY_CONTROL_TOKEN or AUTO_SYNC_TOKEN secret) is required so production deploy can wait for green CI."
exit 1
fi
- name: Self-test production deploy helper
if: ${{ steps.plan.outputs.enabled == 'true' }}
run: |
set -euo pipefail
python3 -m pip install --quiet 'pytest==9.0.2' 'PyYAML==6.0.2'
python3 -m pytest .gitea/scripts/tests/test_prod_auto_deploy.py -q
python3 .gitea/scripts/lint-workflow-yaml.py --workflow-dir .gitea/workflows
- name: Wait for green main CI on this SHA
if: ${{ steps.plan.outputs.enabled == 'true' }}
run: |
set -euo pipefail
python3 .gitea/scripts/prod-auto-deploy.py wait-ci
- name: Call production CP redeploy-fleet
if: ${{ steps.plan.outputs.enabled == 'true' }}
run: |
set -euo pipefail
python3 .gitea/scripts/prod-auto-deploy.py assert-enabled
PLAN="$RUNNER_TEMP/prod-auto-deploy-plan.json"
TARGET_TAG="$(jq -r '.target_tag' "$PLAN")"
BODY="$(jq -c '.body' "$PLAN")"
echo "POST $CP_URL/cp/admin/tenants/redeploy-fleet"
echo " target_tag: $TARGET_TAG"
echo " body: $BODY"
HTTP_RESPONSE="$RUNNER_TEMP/prod-redeploy-response.json"
HTTP_CODE_FILE="$RUNNER_TEMP/prod-redeploy-http-code.txt"
set +e
curl -sS -o "$HTTP_RESPONSE" -w '%{http_code}' \
-m 1200 \
-H "Authorization: Bearer $CP_ADMIN_API_TOKEN" \
-H "Content-Type: application/json" \
-X POST "$CP_URL/cp/admin/tenants/redeploy-fleet" \
-d "$BODY" > "$HTTP_CODE_FILE"
set -e
HTTP_CODE="$(cat "$HTTP_CODE_FILE" 2>/dev/null || echo "000")"
[ -z "$HTTP_CODE" ] && HTTP_CODE="000"
echo "HTTP $HTTP_CODE"
jq '{ok, result_count: (.results // [] | length)}' "$HTTP_RESPONSE" || true
{
echo "## Production auto-deploy"
echo ""
echo "**Commit:** \`${GITHUB_SHA:0:7}\`"
echo "**Target tag:** \`$TARGET_TAG\`"
echo "**HTTP:** $HTTP_CODE"
echo ""
echo "### Per-tenant result"
echo ""
echo "| Slug | Phase | SSM Status | Exit | Healthz | Error present |"
echo "|------|-------|------------|------|---------|---------------|"
jq -r '.results[]? | "| \(.slug) | \(.phase) | \(.ssm_status // "-") | \(.ssm_exit_code) | \(.healthz_ok) | \((.error // "") != "") |"' "$HTTP_RESPONSE" || true
} >> "$GITHUB_STEP_SUMMARY"
if [ "$HTTP_CODE" != "200" ]; then
echo "::error::redeploy-fleet returned HTTP $HTTP_CODE"
exit 1
fi
OK="$(jq -r '.ok' "$HTTP_RESPONSE")"
if [ "$OK" != "true" ]; then
echo "::error::redeploy-fleet reported ok=false; production rollout halted."
exit 1
fi
- name: Verify reachable tenants report this SHA
if: ${{ steps.plan.outputs.enabled == 'true' }}
env:
TENANT_DOMAIN: moleculesai.app
run: |
set -euo pipefail
RESP="$RUNNER_TEMP/prod-redeploy-response.json"
mapfile -t SLUGS < <(jq -r '.results[]? | .slug' "$RESP")
if [ ${#SLUGS[@]} -eq 0 ]; then
echo "::error::No tenants returned from redeploy-fleet; refusing to mark production deploy verified."
exit 1
fi
STALE_COUNT=0
UNREACHABLE_COUNT=0
UNHEALTHY_COUNT=0
for slug in "${SLUGS[@]}"; do
healthz_ok="$(jq -r --arg slug "$slug" '.results[]? | select(.slug == $slug) | .healthz_ok' "$RESP" | tail -1)"
if [ "$healthz_ok" != "true" ]; then
echo "::error::$slug did not report healthz_ok=true in redeploy-fleet response."
UNHEALTHY_COUNT=$((UNHEALTHY_COUNT + 1))
continue
fi
url="https://${slug}.${TENANT_DOMAIN}/buildinfo"
body="$(curl -sS --max-time 30 --retry 3 --retry-delay 5 --retry-connrefused "$url" || true)"
actual="$(echo "$body" | jq -r '.git_sha // ""' 2>/dev/null || echo "")"
if [ -z "$actual" ]; then
echo "::error::$slug did not return /buildinfo after deploy."
UNREACHABLE_COUNT=$((UNREACHABLE_COUNT + 1))
continue
fi
if [ "$actual" != "$GITHUB_SHA" ]; then
echo "::error::$slug is stale: actual=${actual:0:7}, expected=${GITHUB_SHA:0:7}"
STALE_COUNT=$((STALE_COUNT + 1))
else
echo "$slug: ${actual:0:7}"
fi
done
{
echo ""
echo "### Buildinfo verification"
echo ""
echo "Expected SHA: \`${GITHUB_SHA:0:7}\`"
echo "Verified tenants: ${#SLUGS[@]}"
echo "Stale tenants: $STALE_COUNT"
echo "Unhealthy tenants: $UNHEALTHY_COUNT"
echo "Unreachable tenants: $UNREACHABLE_COUNT"
} >> "$GITHUB_STEP_SUMMARY"
if [ "$STALE_COUNT" -gt 0 ] || [ "$UNHEALTHY_COUNT" -gt 0 ] || [ "$UNREACHABLE_COUNT" -gt 0 ]; then
exit 1
fi
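The verification step's counting logic, restated as a small Python sketch (the `results` and `buildinfo` shapes are assumptions mirroring the fields the shell reads):

```python
def classify_tenants(results, buildinfo_by_slug, expected_sha):
    # Mirrors the shell step above: unhealthy if the CP response lacked
    # healthz_ok=true, unreachable if /buildinfo returned no git_sha,
    # stale if the reported sha differs from the deployed commit.
    counts = {"stale": 0, "unhealthy": 0, "unreachable": 0, "ok": 0}
    for r in results:
        if r.get("healthz_ok") is not True:
            counts["unhealthy"] += 1
            continue
        sha = (buildinfo_by_slug.get(r["slug"]) or {}).get("git_sha", "")
        if not sha:
            counts["unreachable"] += 1
        elif sha != expected_sha:
            counts["stale"] += 1
        else:
            counts["ok"] += 1
    return counts


counts = classify_tenants(
    [{"slug": "a", "healthz_ok": True}, {"slug": "b", "healthz_ok": False}],
    {"a": {"git_sha": "abc"}},
    "abc",
)
assert counts == {"stale": 0, "unhealthy": 1, "unreachable": 0, "ok": 1}
```

Any nonzero stale/unhealthy/unreachable count fails the job, matching the final `exit 1` in the shell.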

View File

@@ -1,4 +1,4 @@
name: redeploy-tenants-on-main
name: manual-redeploy-tenants-on-main
# Ported from .github/workflows/redeploy-tenants-on-main.yml on 2026-05-11 per RFC
# internal#219 §1 sweep. Differences from the GitHub version:
@@ -9,14 +9,21 @@ name: redeploy-tenants-on-main
# - Workflow-level env.GITHUB_SERVER_URL pinned per
# feedback_act_runner_github_server_url.
# - `continue-on-error: true` on each job (RFC §1 contract).
# - ~~**Gitea workflow_run trigger limitation**~~ FIXED: replaced with
# push+paths filter per this PR. Gitea 1.22.6 does not support
# `workflow_run` (task #81). The push trigger fires on every
# commit to publish-workspace-server-image.yml which is the
# same signal (only successful runs commit to main).
# - Gitea 1.22.6 does not support workflow_run (task #81). This Gitea
# fallback is manual-only; automatic production deploy is attached to
# publish-workspace-server-image.yml after image push succeeds.
#
# Auto-refresh prod tenant EC2s after every main merge.
# Manual production tenant redeploy fallback.
#
# Primary automatic production deployment now lives in
# publish-workspace-server-image.yml:
# build images -> wait for `CI / all-required (push)` green on the same SHA
# -> call production redeploy-fleet.
#
# This workflow remains as an operator fallback. By default it reruns current
# main; set repo variable PROD_MANUAL_REDEPLOY_TARGET_TAG to a known-good
# `staging-<sha>` tag for rollback.
#
# Why this workflow exists: publish-workspace-server-image builds and
# pushes a new platform-tenant :<sha> to ECR on every merge to main,
@@ -34,60 +41,26 @@ name: redeploy-tenants-on-main
# Gitea suspension migration. The staging-verify.yml promote step now
# uses the same redeploy-fleet endpoint (fixes the silent-GHCR gap).
#
# Runtime ordering:
# 1. publish-workspace-server-image completes → new :staging-<sha> in ECR.
# 2. This workflow fires via workflow_run, calls redeploy-fleet with
# target_tag=staging-<sha>. No CDN propagation wait needed —
# ECR image manifest is consistent immediately after push.
# 3. Calls redeploy-fleet with canary_slug (if set) and a soak
# period. Canary proves the image boots; batches follow.
# 4. Any failure aborts the rollout and leaves older tenants on the
# prior image — safer default than half-and-half state.
#
# Rollback path: re-run this workflow with a specific SHA pinned via
# the workflow_dispatch input. That calls redeploy-fleet with
# target_tag=<sha>, re-pulling the older image on every tenant.
# Any failure aborts the rollout and leaves older tenants on the prior image.
on:
push:
branches: [main]
paths:
- '.gitea/workflows/publish-workspace-server-image.yml'
workflow_dispatch:
permissions:
contents: read
# No write scopes needed — the workflow hits an external CP endpoint,
# not the GitHub API.
# Serialize redeploys so two rapid main pushes' redeploys don't overlap
# and cause confusing per-tenant SSM state. Without this, GitHub's
# implicit workflow_run queueing would *probably* serialize them, but
# the explicit block makes the invariant defensible. Mirrors the
# concurrency block on redeploy-tenants-on-staging.yml for shape parity.
#
# cancel-in-progress: false → aborting a half-rolled-out fleet would
# leave tenants stuck on whatever image they happened to be on when
# cancelled. Better to finish the in-flight rollout before starting
# the next one.
concurrency:
group: redeploy-tenants-on-main
cancel-in-progress: false
# No `concurrency:` block here. Gitea 1.22.6 can cancel queued runs despite
# `cancel-in-progress: false`; operators should not dispatch overlapping manual
# production redeploys.
env:
GITHUB_SERVER_URL: https://git.moleculesai.app
jobs:
redeploy:
# Skip the auto-trigger if publish-workspace-server-image didn't
# actually succeed. workflow_run fires on any completion state; we
# don't want to redeploy against a half-built image.
# NOTE (Gitea port): workflow_dispatch trigger dropped; only the
# workflow_run path remains.
if: ${{ github.event.workflow_run.conclusion == 'success' }}
runs-on: ubuntu-latest
# Phase 3 (RFC #219 §1): surface broken workflows without blocking.
# mc#774: pre-existing continue-on-error mask; root-fix and remove, do not renew silently.
continue-on-error: true
continue-on-error: false
timeout-minutes: 25
steps:
- name: Note on ECR propagation
@@ -98,30 +71,20 @@ jobs:
- name: Compute target tag
id: tag
# Resolution order:
# 1. Operator-supplied input (workflow_dispatch with explicit
# tag) → used verbatim. Lets ops pin `latest` for emergency
# rollback to last canary-verified digest, or pin a specific
# `staging-<sha>` to roll back to a known-good build.
# 2. Default → `staging-<short_head_sha>`. The just-published
# digest. Bypasses the `:latest` retag path that's currently
# dead (staging-verify soft-skips without canary fleet, so
# the only thing retagging `:latest` today is the manual
# promote-latest.yml — last run 2026-04-28). Auto-trigger
# from workflow_run uses workflow_run.head_sha; manual
# dispatch with no input falls through to github.sha.
# Gitea 1.22.6 does not support workflow_dispatch inputs reliably.
# Use repo variable PROD_MANUAL_REDEPLOY_TARGET_TAG for rollback.
env:
INPUT_TAG: ${{ inputs.target_tag }}
HEAD_SHA: ${{ github.event.workflow_run.head_sha || github.sha }}
HEAD_SHA: ${{ github.sha }}
MANUAL_TARGET_TAG: ${{ vars.PROD_MANUAL_REDEPLOY_TARGET_TAG || '' }}
run: |
set -euo pipefail
if [ -n "${INPUT_TAG:-}" ]; then
echo "target_tag=$INPUT_TAG" >> "$GITHUB_OUTPUT"
echo "Using operator-pinned tag: $INPUT_TAG"
if [ -n "${MANUAL_TARGET_TAG:-}" ]; then
echo "target_tag=$MANUAL_TARGET_TAG" >> "$GITHUB_OUTPUT"
echo "Using operator-pinned manual target tag: $MANUAL_TARGET_TAG"
else
SHORT="${HEAD_SHA:0:7}"
echo "target_tag=staging-$SHORT" >> "$GITHUB_OUTPUT"
echo "Using manual fallback tag: staging-$SHORT (head_sha=$HEAD_SHA)"
fi
- name: Call CP redeploy-fleet
@ -130,13 +93,13 @@ jobs:
# CP_ADMIN_API_TOKEN env. Stored in Railway, mirrored to this
# repo's secrets for CI.
env:
CP_URL: ${{ vars.PROD_CP_URL || 'https://api.moleculesai.app' }}
CP_ADMIN_API_TOKEN: ${{ secrets.CP_ADMIN_API_TOKEN }}
TARGET_TAG: ${{ steps.tag.outputs.target_tag }}
CANARY_SLUG: ${{ vars.PROD_AUTO_DEPLOY_CANARY_SLUG || 'hongming' }}
SOAK_SECONDS: ${{ vars.PROD_AUTO_DEPLOY_SOAK_SECONDS || '60' }}
BATCH_SIZE: ${{ vars.PROD_AUTO_DEPLOY_BATCH_SIZE || '3' }}
DRY_RUN: ${{ vars.PROD_AUTO_DEPLOY_DRY_RUN || false }}
run: |
set -euo pipefail
@ -189,7 +152,7 @@ jobs:
[ -z "$HTTP_CODE" ] && HTTP_CODE="000"
echo "HTTP $HTTP_CODE"
jq '{ok, result_count: (.results // [] | length)}' "$HTTP_RESPONSE" || true
# Pretty-print per-tenant results in the job summary so
# ops can see which tenants were redeployed without drilling
@ -205,9 +168,9 @@ jobs:
echo ""
echo "### Per-tenant result"
echo ""
echo '| Slug | Phase | SSM Status | Exit | Healthz | Error present |'
echo '|------|-------|------------|------|---------|---------------|'
jq -r '.results[]? | "| \(.slug) | \(.phase) | \(.ssm_status // "-") | \(.ssm_exit_code) | \(.healthz_ok) | \((.error // "") != "") |"' "$HTTP_RESPONSE" || true
} >> "$GITHUB_STEP_SUMMARY"
if [ "$HTTP_CODE" != "200" ]; then
@ -246,13 +209,10 @@ jobs:
# fail the workflow, which is what `ok=true` should have
# guaranteed all along.
#
# Manual Gitea fallback redeploys current main's staging-<sha> tag, so
# the expected SHA is github.sha.
env:
EXPECTED_SHA: ${{ github.sha }}
TARGET_TAG: ${{ steps.tag.outputs.target_tag }}
# Tenant subdomain template — slugs from the response are
# appended. Production CP issues `<slug>.moleculesai.app`;


@ -0,0 +1,64 @@
# Production Auto-Deploy
`molecule-core` deploys production tenant code automatically from Gitea Actions.
This runbook is an implementation-specific companion to `runbooks/sop-production-cicd.md`.
## Default Flow
On a push to `main` that touches deployable code, `.gitea/workflows/publish-workspace-server-image.yml`:
1. Builds and pushes platform and tenant ECR images tagged `staging-<sha>` and `staging-latest`.
2. Self-tests the production deploy helper and workflow-YAML linter.
3. Waits for strict required push contexts on the same commit to become `success`.
4. Calls production control-plane `POST /cp/admin/tenants/redeploy-fleet` with `target_tag=staging-<sha>`.
5. Verifies every redeploy result is healthy and every tenant returns the same Git SHA from `/buildinfo`.
The deploy workflow intentionally does not use Gitea `concurrency` because Gitea 1.22.6 can cancel queued runs even when `cancel-in-progress: false`.
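Step 4's fleet call can be sketched as a plain HTTP POST. This is a minimal sketch: the payload field names beyond `target_tag` are assumptions inferred from the workflow's env vars, not a confirmed control-plane API contract.

```python
import json
import urllib.request

def build_redeploy_request(cp_url: str, token: str, target_tag: str,
                           canary_slug: str = "hongming",
                           soak_seconds: int = 60,
                           batch_size: int = 3,
                           dry_run: bool = False) -> urllib.request.Request:
    """Build the redeploy-fleet POST; field names mirror the workflow env vars
    (assumed, not verified against the control plane)."""
    body = json.dumps({
        "target_tag": target_tag,      # e.g. staging-<short sha>
        "canary_slug": canary_slug,
        "soak_seconds": soak_seconds,
        "batch_size": batch_size,
        "dry_run": dry_run,
    }).encode()
    return urllib.request.Request(
        f"{cp_url}/cp/admin/tenants/redeploy-fleet",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_redeploy_request("https://api.moleculesai.app", "<token>", "staging-abc1234")
print(req.full_url)
```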
## Kill Switch
Set the following as either a repository variable or a secret:
```text
PROD_AUTO_DEPLOY_DISABLED=true
```
The image publish still runs, but the production redeploy step exits successfully without touching tenants.
Immediately before the production POST, the workflow re-checks the live Gitea repo variable, provided `PROD_AUTO_DEPLOY_CONTROL_TOKEN` is configured with permission to read Actions variables. If the token is not configured, the value read at job start is honored instead.
## Tunables
Repository variables:
```text
PROD_CP_URL=https://api.moleculesai.app
PROD_AUTO_DEPLOY_CANARY_SLUG=hongming
PROD_AUTO_DEPLOY_SOAK_SECONDS=60
PROD_AUTO_DEPLOY_BATCH_SIZE=3
PROD_AUTO_DEPLOY_DRY_RUN=false
PROD_MANUAL_REDEPLOY_TARGET_TAG=staging-<known-good-sha>
```
Secrets required:
```text
CP_ADMIN_API_TOKEN
AUTO_SYNC_TOKEN
PROD_AUTO_DEPLOY_CONTROL_TOKEN
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
```
`AUTO_SYNC_TOKEN` is only used to read Gitea commit statuses while waiting for required push contexts.
`PROD_AUTO_DEPLOY_CONTROL_TOKEN` is optional but recommended so the pre-POST kill-switch check can read the live `PROD_AUTO_DEPLOY_DISABLED` Actions variable.
## Manual Fallback
Use `.gitea/workflows/redeploy-tenants-on-main.yml` when the automatic path needs to be rerun or rolled back. Gitea 1.22.6 does not support reliable `workflow_dispatch` inputs, so rollback uses a repo variable:
1. Set `PROD_MANUAL_REDEPLOY_TARGET_TAG=staging-<known-good-sha>`.
2. Dispatch `manual-redeploy-tenants-on-main`.
3. Clear `PROD_MANUAL_REDEPLOY_TARGET_TAG` after the rollback finishes.
With no variable set, the fallback redeploys `staging-<current-main-sha>`.
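The fallback's tag resolution (the `Compute target tag` step) boils down to two branches, sketched here in Python for clarity:

```python
def resolve_target_tag(manual_tag: str, head_sha: str) -> str:
    """Pinned repo variable wins; otherwise redeploy current main's staging build."""
    if manual_tag:
        return manual_tag
    # Mirrors the workflow's ${HEAD_SHA:0:7} short-sha truncation.
    return f"staging-{head_sha[:7]}"

print(resolve_target_tag("", "abc1234def5678"))    # staging-abc1234
print(resolve_target_tag("staging-9f8e7d6", "abc1234def5678"))
```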


@ -0,0 +1,76 @@
# SOP: Production CI/CD Changes
Production CI/CD changes are higher risk than ordinary CI edits. They can publish images, deploy tenants, promote tags, mutate branch protection, or change merge behavior. This SOP separates rules that must be enforced by code from rules that require human judgment.
## Programmatic Gates
The workflow YAML linter is the first line of enforcement:
```bash
python3 .gitea/scripts/lint-workflow-yaml.py --workflow-dir .gitea/workflows
```
It must reject:
- Gitea-hostile syntax such as `workflow_dispatch.inputs`, `workflow_run`, workflow name collisions, slash-containing workflow names, and unsupported cross-repo action references.
- Production deploy workflows that rely on `concurrency.cancel-in-progress: false` for serialization.
- Production deploy workflows that print raw control-plane responses or raw `.error` fields into CI logs.
- Production redeploy workflows with no kill switch or rollback/pin control.
Production deploy helpers must also unit-test:
- Disable-flag parsing.
- Required status context selection.
- Terminal status handling for `failure`, `error`, `cancelled`, `canceled`, and `skipped`.
- Production control-plane URL guards.
- Rollback target/pin handling when applicable.
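The terminal-status item above can be unit-tested in roughly this shape (the function name `is_terminal_failure` is illustrative, not the real helper's API):

```python
import unittest

# Both British and American spellings of "cancelled" appear in CI backends,
# so the helper must treat both as terminal.
TERMINAL_FAILURE_STATES = {"failure", "error", "cancelled", "canceled", "skipped"}

def is_terminal_failure(status: str) -> bool:
    """A required context in any of these states can never turn green."""
    return status.strip().lower() in TERMINAL_FAILURE_STATES

class TerminalStatusTest(unittest.TestCase):
    def test_both_cancelled_spellings_are_terminal(self):
        self.assertTrue(is_terminal_failure("cancelled"))
        self.assertTrue(is_terminal_failure("canceled"))

    def test_pending_and_success_are_not_terminal_failures(self):
        self.assertFalse(is_terminal_failure("pending"))
        self.assertFalse(is_terminal_failure("success"))
```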
## Required PR Evidence
Every production CI/CD PR must include concrete answers for:
- Root cause: what production failure mode or process gap is being closed.
- Deploy gate: which exact contexts must be green before production side effects.
- Kill switch: how to stop deployment without reverting the PR.
- Verification: how production state is proven after deployment.
- Logging: proof that CI logs do not contain raw production runtime, SSM, or secret-adjacent output.
- Rollback: the exact command, variable, or workflow to return to a known-good tag/digest.
## Human Review
Production CI/CD PRs need non-author review across these roles:
- DevOps: Gitea Actions semantics, branch protection, merge queue, and runner behavior.
- SRE: rollout order, tenant health checks, observability, and partial-deploy recovery.
- Security: secrets, token scopes, log redaction, and production endpoint targeting.
Critical or Required review findings must be closed with one of:
- A code change plus verification.
- An evidence-backed rejection.
- A follow-up issue only if the finding is explicitly not merge-blocking.
Acknowledgement alone is not closure.
## Production Defaults
Production deploys should fail closed:
- Missing tenant result: fail.
- Tenant unhealthy: fail.
- `/buildinfo` unreachable: fail.
- SHA mismatch: fail.
- Required status cancelled/skipped/missing past timeout: fail.
Staging may tolerate warnings during rollout development; production should not.
## Gitea 1.22.6 Constraints
Do not design production CI/CD around unsupported or unreliable features:
- No `workflow_run`.
- No reliable `workflow_dispatch.inputs`.
- Do not assume `concurrency.cancel-in-progress: false` serializes queued runs.
- Do not rely on a masked aggregate status as the only production deploy gate.
If these constraints change after a Gitea upgrade, update this SOP and the workflow linter in the same PR.


@ -411,3 +411,134 @@ def test_rule1_catches_2026_05_11_publish_runtime_regression(tmp_path):
f"(memory: feedback_gitea_workflow_dispatch_inputs_unsupported)."
f"\nstdout={r.stdout}"
)
# ---------------------------------------------------------------------------
# Rule 7 — production deploys cannot rely on broken Gitea concurrency
# ---------------------------------------------------------------------------
PROD_CONCURRENCY_BAD = """
name: prod-concurrency-bad
on: [push]
jobs:
deploy:
runs-on: ubuntu-latest
concurrency:
group: production-auto-deploy
cancel-in-progress: false
steps:
- run: curl https://api.moleculesai.app/cp/admin/tenants/redeploy-fleet
"""
def test_rule7_prod_deploy_concurrency_detects_violation(tmp_path):
_write(tmp_path, "bad.yml", PROD_CONCURRENCY_BAD)
r = _run_lint(tmp_path)
assert r.returncode == 1
assert "production deploy" in r.stdout.lower()
assert "concurrency" in r.stdout.lower()
# ---------------------------------------------------------------------------
# Rule 8 — production deploys must not dump raw CP responses/errors
# ---------------------------------------------------------------------------
PROD_RAW_LOG_BAD = """
name: prod-raw-log-bad
on: [push]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- run: |
curl https://api.moleculesai.app/cp/admin/tenants/redeploy-fleet -o "$HTTP_RESPONSE"
jq . "$HTTP_RESPONSE"
jq -r '.results[]? | .error' "$HTTP_RESPONSE"
"""
PROD_REDACTED_LOG_OK = """
name: prod-redacted-log-ok
on: [push]
jobs:
deploy:
runs-on: ubuntu-latest
env:
PROD_AUTO_DEPLOY_DISABLED: ${{ vars.PROD_AUTO_DEPLOY_DISABLED || '' }}
steps:
- run: |
curl https://api.moleculesai.app/cp/admin/tenants/redeploy-fleet -o "$HTTP_RESPONSE"
jq '{ok, result_count: (.results // [] | length)}' "$HTTP_RESPONSE"
jq -r '.results[]? | ((.error // "") != "")' "$HTTP_RESPONSE"
"""
def test_rule8_prod_deploy_raw_log_detects_violation(tmp_path):
_write(tmp_path, "bad.yml", PROD_RAW_LOG_BAD)
r = _run_lint(tmp_path)
assert r.returncode == 1
assert "raw production cp response" in r.stdout.lower()
def test_rule8_prod_deploy_allows_redacted_summary(tmp_path):
_write(tmp_path, "ok.yml", PROD_REDACTED_LOG_OK)
r = _run_lint(tmp_path)
assert r.returncode == 0, f"stdout={r.stdout}\nstderr={r.stderr}"
# ---------------------------------------------------------------------------
# Rule 9 — production deploys require an operational control
# ---------------------------------------------------------------------------
PROD_NO_CONTROL_BAD = """
name: prod-no-control-bad
on: [push]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- run: curl https://api.moleculesai.app/cp/admin/tenants/redeploy-fleet
"""
PROD_KILL_SWITCH_OK = """
name: prod-kill-switch-ok
on: [push]
jobs:
deploy:
runs-on: ubuntu-latest
env:
PROD_AUTO_DEPLOY_DISABLED: ${{ vars.PROD_AUTO_DEPLOY_DISABLED || '' }}
steps:
- run: curl https://api.moleculesai.app/cp/admin/tenants/redeploy-fleet
"""
PROD_ROLLBACK_OK = """
name: prod-rollback-ok
on:
workflow_dispatch:
jobs:
deploy:
runs-on: ubuntu-latest
env:
PROD_MANUAL_REDEPLOY_TARGET_TAG: ${{ vars.PROD_MANUAL_REDEPLOY_TARGET_TAG || '' }}
steps:
- run: curl https://api.moleculesai.app/cp/admin/tenants/redeploy-fleet
"""
def test_rule9_prod_deploy_requires_kill_switch_or_rollback(tmp_path):
_write(tmp_path, "bad.yml", PROD_NO_CONTROL_BAD)
r = _run_lint(tmp_path)
assert r.returncode == 1
assert "kill switch" in r.stdout.lower()
def test_rule9_prod_auto_deploy_allows_kill_switch(tmp_path):
_write(tmp_path, "ok.yml", PROD_KILL_SWITCH_OK)
r = _run_lint(tmp_path)
assert r.returncode == 0, f"stdout={r.stdout}\nstderr={r.stderr}"
def test_rule9_prod_manual_deploy_allows_rollback_control(tmp_path):
_write(tmp_path, "ok.yml", PROD_ROLLBACK_OK)
r = _run_lint(tmp_path)
assert r.returncode == 0, f"stdout={r.stdout}\nstderr={r.stderr}"