§SOP-6 force-merge detector, hosted as a Gitea Actions composite
action so it can be vendored into every org repo via a single
`uses:` line instead of copy-pasting the bash. Source of truth
for the audit script logic.
Why composite vs reusable workflow: Gitea 1.22.6 doesn't support
cross-repo `uses: org/repo/.gitea/workflows/X.yml@ref`. Cross-repo
reusable workflows landed in go-gitea/gitea#32562 (1.26.0, Oct 2025)
and have not been backported. Composite actions resolve via the
actions-fetch path which works cross-repo against a public callee.
Re-evaluate when operator host runs Gitea ≥ 1.26.
Consumer workflow shape:
on:
  pull_request_target:
    types: [closed]

jobs:
  audit:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: molecule-ai/molecule-ci/.gitea/actions/audit-force-merge@main
        with:
          gitea-token: ${{ secrets.SOP_TIER_CHECK_TOKEN }}
          repo: ${{ github.repository }}
          pr-number: ${{ github.event.pull_request.number }}
          required-checks: |
            sop-tier-check / tier-check (pull_request)
No actions/checkout step needed in the consumer — the audit script
makes pure API calls and never reads the working tree. Removing checkout is
also a small security win (PR head code never loaded).
Verified end-to-end on internal#123 + molecule-core#150 with the
inline copies (which this PR will replace via consumer-side stub
PRs once merged). Tier: low.
molecule-ai-org-template-molecule-dev's CI has been red since the
"pin: dev-department v1.0.0" merge. Symptom:
::error::Workspace at <unnamed>: missing 'name'
::error::Workspace at <unnamed>: missing 'name'
Root cause: org.yaml uses `!external` for the dev-department subtree
fetch (introduced internal#77 / molecule-core#105). The PermissiveLoader
formerly handed every unknown tag to a single multi-constructor that
flattened the parsed value to a plain dict. The validator's
validate_workspace() then saw a dict with no `name` key and tripped
the "missing name" error — but the dict was a `!external` directive,
not a malformed workspace.
The fix wraps both supported tags in distinct sentinel types:
- !include → IncludeRef (str subclass)
- !external → ExternalRef (dict subclass)
validate_workspace() and count_ws() now skip these instead of treating
them as workspace shape. Real workspace dicts (with names) still get
the full structural check. Unknown tags fall through to the
multi-constructor exactly as before, preserving back-compat.
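The sentinel-wrapper pattern can be sketched in a few lines of pyyaml. The IncludeRef/ExternalRef names and the skip behavior come from this PR; the constructor bodies, loader wiring, and error string are illustrative, not the exact implementation (the sequence case of the fallback multi-constructor is elided):

```python
# Sketch of the sentinel-tag pattern, assuming pyyaml. Wraps the two
# supported tags in distinct types so the validator can skip them.
import yaml

class IncludeRef(str):
    """Marks a value parsed from an !include tag."""

class ExternalRef(dict):
    """Marks a mapping parsed from an !external tag."""

class PermissiveLoader(yaml.SafeLoader):
    pass

PermissiveLoader.add_constructor(
    "!include", lambda l, n: IncludeRef(l.construct_scalar(n)))
PermissiveLoader.add_constructor(
    "!external", lambda l, n: ExternalRef(l.construct_mapping(n)))
# Unknown tags still flatten to plain values (sequence case elided),
# preserving back-compat with the old single multi-constructor.
PermissiveLoader.add_multi_constructor(
    "!", lambda l, suffix, n: l.construct_mapping(n)
    if isinstance(n, yaml.MappingNode) else l.construct_scalar(n))

def validate_workspace(node, path="<unnamed>"):
    # Directives are not workspaces: skip instead of shape-checking.
    if isinstance(node, (IncludeRef, ExternalRef)):
        return []
    errors = []
    if not isinstance(node, dict) or "name" not in node:
        errors.append(f"Workspace at {path}: missing 'name'")
    return errors

doc = yaml.load("team: !external {repo: dev-department, ref: v1.0.0}",
                Loader=PermissiveLoader)
print(validate_workspace(doc["team"]))
```

Because ExternalRef subclasses dict (and IncludeRef subclasses str), any downstream code that merely reads the parsed value keeps working; only the shape checks need to learn the new types.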
Verified on the live failing org.yaml:
✓ org.yaml valid: Molecule AI Dev Team (0 direct workspaces;
external refs not counted)
And on a synthetic case with one real bug (missing-name workspace
nested under children):
::error::Workspace at <unnamed>: missing 'name'
::error::Workspace at <unnamed>/<unnamed>: missing 'name'
exit 1
So the validator still catches real shape bugs; it just doesn't
false-positive on the new !external pattern.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
molecule-ci#2 passed `token: ''` to force an anonymous cross-repo
checkout. CI on plugin-molecule-careful-bash@663bf72 (post-merge of #2)
revealed actions/checkout@v4 errors with:
::error::Input required and not supplied: token
Even though the `token` input is declared required:false with a default,
the action's runtime auth-helper calls getInput('token', {required: true})
internally — empty string fails that check.
Fix: replace the cross-repo actions/checkout with a direct git clone
shell step. molecule-ci is public; anonymous git clone has neither the
auth-trips-Gitea-404 problem (#2's target) nor the empty-token-input-
required problem (#2's actual failure shape).
3 files updated, 4 sites total:
* validate-plugin.yml (1 site)
* validate-workspace-template.yml (2 sites)
* validate-org-template.yml (1 site)
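A minimal sketch of the replacement step shape, assuming the host URL seen in the act_runner logs; the step name, depth flag, and target path are illustrative, not the exact diff:

```yaml
- name: Fetch canonical molecule-ci scripts (anonymous)
  run: |
    git clone --depth 1 \
      https://git.moleculesai.app/molecule-ai/molecule-ci.git \
      .molecule-ci-canonical
```

A plain `git clone` sends no Authorization header, so it avoids both the Gitea auth-404 and the checkout action's internal required-token check.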
Refs: internal#46. Closes the third root cause uncovered by the
verification cycle on plugin-molecule-careful-bash.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After lowercasing the slug (molecule-ci#1) and flipping molecule-ci public,
plugin/template/org-template CI still failed at the SECOND actions/checkout
step (the one that fetches molecule-ci itself for canonical validator scripts).
Failure mode in act_runner log:
Run actions/checkout@v4
repository: molecule-ai/molecule-ci
path: .molecule-ci-canonical
Syncing repository: molecule-ai/molecule-ci
[git config http.https://git.moleculesai.app/.extraheader AUTHORIZATION: basic ***]
::error::The target couldn't be found.
❌ Failure - Main actions/checkout@v4
Root cause: actions/checkout@v4 sends `Authorization: basic <github.token>` —
the per-job Gitea-issued token, scoped to the calling plugin/template repo
only. On Gitea, an authenticated request that lacks repo-permission 404s
instead of falling back to anonymous-public-read (a Gitea-vs-GitHub
behaviour difference). Anonymous git clone of molecule-ci succeeds; the auth
header is what trips the 404.
Fix: pass `token: ''` to force anonymous fetch on the cross-repo checkouts.
molecule-ci is public; no auth is needed for read.
3 sites updated:
* validate-plugin.yml (1 site)
* validate-workspace-template.yml (2 sites — both jobs in the file)
* validate-org-template.yml (1 site)
Verification plan: re-trigger plugin-molecule-careful-bash#2 after
this lands; it should go GREEN end-to-end. The 33 downstream
lowercase-slug PRs are NOT mass-merged until that verification passes.
Refs: internal#46
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Gitea is case-sensitive on owner slugs; canonical is lowercase
`molecule-ai/...`. Mixed-case `Molecule-AI/...` refs fail-at-0s
when the runner tries to resolve the cross-repo workflow / checkout.
Same fix as molecule-controlplane#12. Mechanical case-correction;
no behavior change beyond making CI resolve again.
Refs: internal#46
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pairs with molecule-core PR #2473 (run_executor_smoke now consults
runtime_wedge.is_wedged() at the end of every result path).
The 10s smoke timeout was shorter than claude-agent-sdk's 60s
initialize() handshake — when a malformed CLI argv made the SDK
spin on init (PR #25 in claude-code template), the outer wait_for
fired first, run_executor_smoke saw "execution proceeding past
imports → timeout → PASS" and shipped the broken image to GHCR.
Bumping to 90s lets the SDK time itself out, the executor's wedge
catch arm runs, and runtime_wedge.mark_wedged() flips the flag
that smoke_mode now reads. Outer `timeout` bumped to 120s — the
runner-level safety net stays slightly longer than the inner cap
so a smoke_mode regression that doesn't terminate surfaces as exit
124 with a clear error, not just exit 1.
Step comment names this calibration explicitly so a future
contributor doesn't shrink it back without injecting a wedge in
the smoke_mode unit tests first. Error message references
runtime_wedge so a failure-mode reader knows where to look.
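The calibration rule can be illustrated with toy asyncio timeouts (all names and numbers are stand-ins, not the real SDK or executor): the inner smoke cap must exceed the SDK's own init timeout, or the outer layer fires first and masks the wedge.

```python
# Toy reproduction of the timeout-layering bug. SDK_INIT_TIMEOUT
# stands in for the SDK's 60s handshake; the smoke cap is a parameter
# so both calibrations can be compared.
import asyncio

SDK_INIT_TIMEOUT = 0.1

async def sdk_initialize():
    # A wedged init: spins until the SDK's own timeout aborts it.
    try:
        await asyncio.wait_for(asyncio.sleep(3600), SDK_INIT_TIMEOUT)
    except asyncio.TimeoutError:
        raise RuntimeError("wedged")  # the signal the smoke must observe

async def run_executor_smoke(smoke_timeout):
    try:
        await asyncio.wait_for(sdk_initialize(), smoke_timeout)
    except RuntimeError:
        return True        # SDK timed itself out first: wedge surfaces
    except asyncio.TimeoutError:
        return False       # smoke cap fired first: wedge is masked
    return False

# Inner cap longer than the SDK timeout (the 90s-vs-60s shape).
print(asyncio.run(run_executor_smoke(0.4)))
# Inner cap shorter (the old 10s shape): outer timeout masks the wedge.
print(asyncio.run(run_executor_smoke(0.02)))
```

The same ordering argument applies one layer up: the runner-level `timeout` stays above the inner cap so a non-terminating smoke surfaces as exit 124 rather than being indistinguishable from an ordinary failure.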
The workflow was refactored from one `validate` job (display name
"Template validation") into matrix-named validate-static +
validate-runtime jobs ("(static)" / "(runtime)" suffixes) for
fork-PR security. The new check names — `validate / Template
validation (static)` and `validate / Template validation
(runtime)` — never match the original `validate / Template
validation` that template-repo branch protection requires. Result:
auto-merge silently hangs in BLOCKED forever on every template
repo because the required check never reports.
Add a third aggregator job `template-validation` (display name
"Template validation") that depends on both real jobs and emits
the original check name. `if: always()` so it reports out even
when validate-static fails — without that GitHub marks the
aggregator SKIPPED and branch protection still blocks because the
required check never reaches a final state.
Treats `skipped` as pass for validate-runtime so fork PRs (where
runtime is intentionally skipped on the security gate) don't
become un-mergeable.
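The aggregator might look roughly like this; the job id and display name come from this PR, while the result-gating shell is an illustrative sketch:

```yaml
template-validation:
  name: Template validation        # the original required-check name
  needs: [validate-static, validate-runtime]
  if: always()                     # must report even when a needed job fails
  runs-on: ubuntu-latest
  steps:
    - name: Gate on real job results
      run: |
        static="${{ needs.validate-static.result }}"
        runtime="${{ needs.validate-runtime.result }}"
        [ "$static" = "success" ] || exit 1
        case "$runtime" in success|skipped) exit 0 ;; *) exit 1 ;; esac
```

The `skipped` arm is what keeps fork PRs mergeable when validate-runtime is intentionally skipped.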
Caught while shipping the boot-smoke fixes for openclaw#11 and
hermes#29 — both PRs sat BLOCKED with all real checks green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Third hot-fix for #2275 Phase 2 — claude-code re-run #3 showed the
boot smoke ITSELF passing (`[smoke-mode] PASS: timed out past import-
tree (imports healthy)`), but the workflow step still exited 1 because
the post-smoke cleanup `rm -rf "${SMOKE_CONFIG_DIR}"` failed with
`Permission denied`.
Root cause: the image entrypoint (entrypoint.sh) does
`chown -R agent:agent /configs` before exec'ing molecule-runtime as
uid 1000. Because /configs is a bind-mount of the host's mktemp dir,
the chown propagates to the host — the runner user (the GHA `runner`
account, NOT root) can no longer delete the files inside it. With
`set -e` in effect, that rm exit propagates and we report failure
even though the gate itself passed.
Fix: best-effort rm with sudo fallback and final `|| true`. The
runner is ephemeral; /tmp gets cleaned automatically at job teardown.
Verified against run 25202859503 which showed every other step green
+ the smoke itself passing — only this rm was the blocker.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Second hot-fix for #2275 Phase 2 — boot smoke kept failing with
`ModuleNotFoundError: No module named 'adapter'` even after the
permissions fix landed.
Root cause: the production platform's provisioner sets PYTHONPATH=/app
on every workspace container (provisioner.go:563) so molecule-runtime —
a pip console_scripts entry point whose sys.path[0] is /usr/local/bin,
NOT /app — can resolve `importlib.import_module('adapter')`. The
existing static import smoke didn't hit this because `python3 -c "import
$mod"` adds cwd to sys.path; only the entry-point invocation needs
PYTHONPATH.
Mirrors prod by passing `-e PYTHONPATH=/app` in the docker run.
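The sys.path difference can be reproduced in miniature (the paths and module name are toy stand-ins, not the production layout; `adapter_demo` stands in for the runtime's `adapter` module):

```python
# A module living in an app dir resolves only once that dir is on
# sys.path -- which is exactly what PYTHONPATH=/app provides to the
# console-script entry point, whose sys.path[0] is the bin dir.
import importlib
import sys
import tempfile
from pathlib import Path

app_dir = Path(tempfile.mkdtemp())            # stands in for /app
(app_dir / "adapter_demo.py").write_text("NAME = 'demo-adapter'\n")

try:
    importlib.import_module("adapter_demo")   # entry-point shape: no /app
    found_without_path = True
except ModuleNotFoundError:
    found_without_path = False

sys.path.insert(0, str(app_dir))              # what PYTHONPATH=/app does
adapter = importlib.import_module("adapter_demo")
print(found_without_path, adapter.NAME)
```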
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Hot-fix for #2275 Phase 2 — the boot smoke step in v1@3c8f8fe failed
on every template publish with `PermissionError: [Errno 13] Permission
denied: '/configs/config.yaml'` because `mktemp -d` creates the dir
with mode 700 and `chmod -R go+r` adds 'r' to files but doesn't add
'x' to directories. Inside the image the entrypoint drops priv to
uid 1000 (agent), which then cannot traverse /configs to even reach
config.yaml — main.py exits before any executor code runs.
Two changes:
1. `chmod -R a+rX` (capital X) adds 'x' to directories AND already-
executable files, so the temp dir becomes traversable for agent
while config.yaml stays a regular world-readable file.
2. Drop `:ro` on the mount so the entrypoint's `chown -R agent
/configs` succeeds. The container is ephemeral; modifications to
the host mktemp dir don't matter and the dir gets nuked right
after the smoke run.
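The capital-X semantics can be mimicked in a few lines of Python (toy paths and modes, not the workflow's actual temp dir): 'X' grants execute only to directories and to files that already carry an execute bit.

```python
# Stand-in for `chmod -R a+rX`: the dir becomes traversable (755)
# while config.yaml stays a regular world-readable file (644).
import os
import tempfile
from pathlib import Path

def chmod_a_rX(root: Path) -> None:
    for p in [root, *root.rglob("*")]:
        mode = p.stat().st_mode
        new = mode | 0o444                    # a+r for everything
        if p.is_dir() or mode & 0o111:        # +X: dirs / already-executable
            new |= 0o111
        p.chmod(new)

root = Path(tempfile.mkdtemp())
os.chmod(root, 0o700)                         # mktemp -d default
cfg = root / "config.yaml"
cfg.write_text("name: demo\n")
os.chmod(cfg, 0o600)

chmod_a_rX(root)
print(oct(root.stat().st_mode & 0o777), oct(cfg.stat().st_mode & 0o777))
```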
Reproduced + diagnosed against claude-code publish run 25202651546
which failed within a few seconds on Path('/configs/config.yaml').exists()
in molecule_runtime/config.py:298.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a step between the existing import smoke and the GHCR push that
boots the just-built image with MOLECULE_SMOKE_MODE=1, which routes
molecule-runtime through the new smoke_mode.run_executor_smoke() —
invokes executor.execute(stub_ctx, stub_queue) once with a 10s timeout.
Healthy import tree → execution proceeds far enough to hit a network
boundary and times out (exit 0). Broken lazy import inside an
`async def execute(...)` body → ImportError/ModuleNotFoundError
(exit 1). The 2026-04-2x v0→v1 a2a-sdk migration shipped 5 such
regressions in templates that the existing static import smoke missed.
Skip path: when the installed runtime predates 0.1.60 (pre-smoke_mode),
the step prints a warning + exits 0. Templates pinned to older runtimes
keep publishing without this gate flipping red; cascade-triggered
builds (which forward the just-published version as RUNTIME_VERSION)
get the gate automatically.
Belt-and-suspenders `timeout 60` wrapper so smoke_mode itself can't
wedge the runner past one minute per template.
After merge, bump v1 tag to point at the new main SHA (caller repos
pin to @v1; the change has no effect until the moving tag advances).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Splits the reusable validator into two jobs to keep external fork
PRs from running arbitrary template code on the runner.
Background
The reusable workflow runs three primitives that execute
template-supplied code:
- pip install -r requirements.txt (setup.py + post-install hooks)
- importlib.exec_module(adapter) (top-level Python in adapter.py)
- docker build (RUN steps in Dockerfile)
Token scope is already minimal (contents: read), GitHub forced
fork-PR tokens read-only in 2021, and the workflow_call interface
doesn't accept secrets. So the actual exploit surface is "what can
a malicious actor do with arbitrary code execution on a GitHub-
hosted runner that has no useful credentials?" — answer: crypto-
mine, DNS-exfiltrate runner metadata, attempt lateral movement
within the runner's network. Annoying, not catastrophic, but a
real attack surface that this PR closes.
The fix
Two-job split:
validate-static Always runs, including external fork PRs.
File-content checks (secret scan, YAML parse,
AST inspection of adapter.py without import),
pip install only the validator's pyyaml dep
(not the template's requirements.txt). NO
third-party code execution.
validate-runtime Skipped when github.event.pull_request.head.
repo.fork == true. pip install requirements.txt
+ adapter import + docker build. Internal PRs
and push events to internal branches still get
the full coverage.
The validator script gains a --static-only flag that skips
check_adapter_runtime_load() (the function that calls
exec_module). The validate-static job uses it; validate-runtime
uses the existing full mode.
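A rough sketch of the split; the fork condition and `--static-only` flag are from this PR, while the script paths and step bodies are illustrative:

```yaml
jobs:
  validate-static:
    runs-on: ubuntu-latest            # always runs, fork PRs included
    steps:
      - uses: actions/checkout@v4
      - run: pip install pyyaml       # validator dep only, no template deps
      - run: python scripts/validate-workspace-template.py --static-only .
  validate-runtime:
    if: github.event.pull_request.head.repo.fork != true
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - run: python scripts/validate-workspace-template.py .
```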
Trade-off
External contributors get static feedback only on their PR. If
their template metadata passes static checks but breaks runtime
loading, branch protection on staging/main blocks the merge once
runtime validation runs (post-merge or after an internal
contributor reposts). Fewer false-positive CI failures for honest
external contributors; same coverage at the merge-protected
boundary.
What this does NOT close
- Maintainer-approved external PRs that consciously execute
third-party code. The maintainer must approve a workflow run
via GitHub's first-time-contributor gate; that's a human
decision, not a workflow-level gate.
- requirements.txt that pulls a malicious transitive dep from
PyPI even on internal PRs. Mitigated by branch-protection +
human review of PRs that touch requirements.txt.
Closes task #135.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The v1 tag exists in this repo but README + docs still showed
@main in the caller-pattern examples. Followers of the docs were
copy-pasting unstable @main pins. Fix: update all 6 example
references to @v1 across:
- README.md (4 examples)
- docs/template-contract.md (1 example)
- .github/workflows/auto-promote-staging-pr.yml header comment
(1 example, just shipped in PR #25)
Operational note: v1 is meant to track the latest stable patch
within the v1 major. Cutting a new v1.X.Y means moving the v1 tag
forward; a breaking change goes out as a new v2 tag instead — same
convention as actions/checkout@v4 etc.
Doesn't migrate any consumer repo. Consumer migration from @main
to @v1 is a per-repo follow-up; this PR ships the docs that
guide that migration.
Closes task #133.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Moves the canonical PR-based staging→main auto-promote flow into a
reusable workflow that protected-branch repos can call instead of
duplicating ~240 lines of YAML each.
Why two reusable variants in this repo:
auto-promote-staging.yml (existing — ff-only, direct push)
For repos WITHOUT required-status-checks branch protection.
Already used for molecule-ci, molecule-app, molecule-docs,
molecule-monorepo. Cannot satisfy protected-branch rules
requiring status checks "set by expected GitHub apps".
auto-promote-staging-pr.yml (THIS PR — PR-based)
For repos WITH required-status-checks. Opens (or reuses) a
staging→main PR, enables auto-merge, lets the merge queue land
it. Required path for molecule-core + molecule-controlplane
(per the 2026-04-28 incident where direct ff-only push was
failing GH006 on protected refs).
Inputs:
gates — CSV of workflow filenames to require green
target-branch — promote target (default: main)
source-branch — promote source (default: staging)
enabled-var — repo variable name gating rollout
(default: AUTO_PROMOTE_ENABLED)
merge-method — merge|squash|rebase (default: merge — matches
user preference for merge commits over squash)
force — pass through caller's workflow_dispatch.force input
Caller pattern (kept minimal — see header comment in the workflow):
on:
  workflow_run:
    workflows: [CI, ...]
    types: [completed]
  workflow_dispatch:
    inputs:
      force: ...

permissions:
  contents: write
  pull-requests: write

jobs:
  promote:
    uses: Molecule-AI/molecule-ci/.github/workflows/auto-promote-staging-pr.yml@main
    with:
      gates: "ci.yml,e2e-staging-canvas.yml,..."
      force: ${{ github.event.inputs.force == 'true' }}
    secrets: inherit
The caller's `on.workflow_run.workflows` (display names) MUST stay in
sync with the `gates` input (filenames). The reusable can't validate
this because GitHub Actions decouples display names from filenames;
this is the same coupling the original molecule-core workflow had.
Migration of the existing 242-line molecule-core workflow to this
reusable is a follow-up PR. Same pattern applies to
molecule-controlplane once it grows protected-branch
auto-promote (today CP uses the auto-sync-main-to-staging shape
inherited from #142).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The validate-org-template.yml and validate-plugin.yml workflows
expected `.molecule-ci/scripts/` to be vendored INTO each calling
repo. That worked for the repos that copied the directory in, but
broke on the ones that didn't:
- molecule-ai-org-template-medo-smoke
- molecule-ai-org-template-molecule-worker-gemini
- molecule-ai-org-template-reno-stars
- molecule-ai-plugin-molecule-compliance
- molecule-ai-plugin-molecule-freeze-scope
- molecule-ai-plugin-molecule-prompt-watchdog
Surfaced when the secret-scan rollout PRs hit those repos and the
required validate check failed on missing
`.molecule-ci/scripts/requirements.txt`.
Mirror the same fix already in validate-workspace-template.yml: a
second `actions/checkout@v4` of molecule-ci into
`.molecule-ci-canonical/`, with script paths re-pointed accordingly.
Single source of truth — callers never need to vendor or sync.
Also adds `.molecule-ci-canonical` to the secret-scan SKIP_DIRS so
the side-checked-out tree doesn't get walked.
Callers can drop their vendored `.molecule-ci/scripts/` copies in a
follow-up cleanup. Both shapes work after this PR — the vendored
copy is harmless dead weight, not a conflict.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent post-merge review of #19 surfaced two more findings.
Both shipped here.
Q3 — abstract intermediates + multiple-concrete-classes.
The class-discovery filter from O1 (#19) only excluded BaseAdapter
itself. Two failure modes slipped through:
(a) A locally-defined abstract intermediate
`class FrameworkAdapter(BaseAdapter): @abstractmethod ...`
passed the filter, falsely satisfying "at least one
concrete subclass" while still being non-instantiable at
workspace boot.
(b) A template defining BOTH `class FrameworkAdapter(BaseAdapter)`
AND `class ConcreteAdapter(FrameworkAdapter)` had both pass
the filter, producing a silent ambiguity where the runtime's
class-discovery picks one per its resolution rules — wrong
class loaded after a future runtime refactor.
Fixes:
- Add `not inspect.isabstract(obj)` to the discovery filter so
abstract intermediates are excluded.
- Hard-error if `len(adapter_classes) > 1` listing both names so
the contributor knows exactly which classes are competing.
Three new tests pin the behaviors:
- test_abstract_intermediate_alone_does_not_count
- test_abstract_plus_concrete_passes_with_concrete_only
- test_multiple_concrete_baseadapter_subclasses_errors
Identity-based deduplication.
Caught against the real langgraph template during smoke-testing
the Q3 fix: production adapters often do
`Adapter = ConcreteAdapter` as a module-level alias for the
runtime's discovery convention. `vars(mod)` returns BOTH bindings
pointing at the same class object, so the new
multiple-concrete-classes error fired falsely on every aliased
template.
Fix: deduplicate by `id(obj)` BEFORE counting, so the same class
object under multiple bindings counts once. New regression test
test_aliased_concrete_class_is_deduplicated pins this against
any future filter regression.
Existing tests updated to use fully-concrete BaseAdapter subclasses
(matching production templates) since the new abstract-filter
correctly rejects partial stubs that don't override every abstract
method BaseAdapter declares (5 methods: name, display_name,
description, setup, create_executor).
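The hardened filter can be sketched as follows; BaseAdapter is a local stand-in for molecule_runtime.adapters.base.BaseAdapter, and the error strings are illustrative:

```python
# Discovery filter combining the #19 __module__ check with the Q3
# isabstract exclusion and the id()-based alias deduplication.
import abc
import inspect

class BaseAdapter(abc.ABC):            # stand-in for the runtime's base
    @abc.abstractmethod
    def name(self): ...

def discover_adapter(mod_vars, module_name):
    seen = {}
    for obj in mod_vars.values():
        if (inspect.isclass(obj)
                and issubclass(obj, BaseAdapter)
                and obj is not BaseAdapter
                and obj.__module__ == module_name     # O1: skip re-exports
                and not inspect.isabstract(obj)):     # Q3: skip abstract mids
            seen.setdefault(id(obj), obj)             # dedupe aliases by id
    classes = list(seen.values())
    if not classes:
        raise ValueError("no concrete class inheriting from BaseAdapter")
    if len(classes) > 1:
        names = ", ".join(sorted(c.__name__ for c in classes))
        raise ValueError(f"multiple concrete adapter classes: {names}")
    return classes[0]

class FrameworkAdapter(BaseAdapter):   # abstract intermediate (Q3a)
    pass

class ConcreteAdapter(FrameworkAdapter):
    def name(self): return "demo"

Adapter = ConcreteAdapter              # module-level alias (langgraph shape)

ns = {"FrameworkAdapter": FrameworkAdapter,
      "ConcreteAdapter": ConcreteAdapter,
      "Adapter": Adapter}
for cls in set(ns.values()):
    cls.__module__ = "adapter"         # simulate one template module
print(discover_adapter(ns, "adapter").__name__)
```

The alias and the class are the same object, so the id() dedup collapses them to one candidate; the abstract intermediate is filtered out entirely.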
Q5 — GITHUB_TOKEN scope lockdown.
validate-workspace-template.yml runs untrusted-by-design code from
the calling template repo: pip post-install hooks, adapter.py
imports, Dockerfile RUN steps. Each of those primitives executes
with GITHUB_TOKEN in env. The workflow had no `permissions:`
block, defaulting to whatever the calling repo grants — often
contents: write.
Add `permissions: contents: read` at the workflow level. Worst-
case-with-token now drops to "read public repo state" — no write
to issues, no push to branches, no comment-spam, no workflow
re-trigger. Partial mitigation; the deeper `pull_request_target`
discipline is bigger scope (tracked separately).
Verification:
- 47/47 tests pass (was 43; 3 new abstract/multi-concrete tests + 1 new alias test)
- All 8 production templates pass the full updated validator
end-to-end with 0 warnings / 0 errors
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cleanup of #19's commit, which inadvertently included scripts/__pycache__/
.pyc files generated by running pytest locally during the review-
followup work. The repo's .gitignore had no Python-cache section at
all, so nothing prevented this — adding it now to make the same
mistake structurally impossible.
Files removed from tracking (still ignored locally going forward):
- scripts/__pycache__/migrate-template.cpython-313.pyc
- scripts/__pycache__/test_migrate_template.cpython-313-pytest-9.0.3.pyc
- scripts/__pycache__/test_validate_workspace_template.cpython-313-pytest-9.0.3.pyc
- scripts/__pycache__/validate-workspace-template.cpython-313.pyc
Gitignore additions cover the standard set:
__pycache__/, *.pyc, *.pyo, *.pyd, .pytest_cache/, .mypy_cache/,
.ruff_cache/
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent code review of #17 (adapter runtime-load) and #18 (schema
versioning) surfaced four Required and three Optional findings worth
fixing before the patterns harden into the codebase.
Required:
R1: Delete .molecule-ci/scripts/{validate-workspace-template,
migrate-template}.py — dead-vendored mirror. The new validator
workflow invokes .molecule-ci-canonical/scripts/ (the canonical
clone), not .molecule-ci/scripts/. The mirror was the exact drift
class #90 is supposed to eliminate: next contributor would edit
one copy and silently diverge. Other workflows (validate-plugin,
validate-org-template) still use the legacy path and keep their
own scripts there — so removing OUR two files is asymmetric but
correct, and the legacy path can phase out organically.
R2: validate-workspace-template.yml's `cache-dependency-path` pointed
at the validator's own deps file (just `pyyaml>=6.0`). Pip cache
key never invalidated when the template added crewai/langgraph/
etc. Repoint to the calling repo's `requirements.txt`, which is
the file the heavy install actually uses one step later.
R3: `_check_schema_v1` looped `SCHEMA_V1_REQUIRED_KEYS` and re-emitted
"missing required key `template_schema_version`" — but the
dispatcher already verified the field is present + int before
reaching v1, so that branch was dead defensive code. Skip it
explicitly with a comment, but keep the field in the constant for
contract documentation + the unknown-keys filter.
R4: `_template_adapter_under_validation` was a fixed sys.modules key,
meaning back-to-back invocations in the same Python process
shared the slot. Use a per-call-unique name keyed on the absolute
path's hash. No observed bug today; defensive-only.
Optional:
O1: Class-discovery filter now also requires `__module__ == module_name`.
Without this, an `from molecule_runtime.adapters.base import
AbstractCLIAdapter` re-export would count as a "real" adapter,
masking the genuine "no concrete subclass" case the gate exists
to catch. Cheap and forward-proofs against any future abstract
intermediate the runtime might expose. Added a sibling test
pinning the new behavior.
O2: migrate-template.py's docstring claimed "uses ruamel.yaml when
available" but the implementation only ever calls `yaml.safe_dump`.
Replaced the lie with a clearer caveat block + a forward-pointer
to ruamel-when-comments-detected as a future enhancement.
O3: Reordered the workflow so the secret-scan step runs BEFORE
`pip install -r requirements.txt`. Same threat surface as the
Docker build smoke (which already runs first), but cheap defense-
in-depth: a malicious template PR adding a malicious dep to
requirements.txt now has its post-install hook execute AFTER the
secret scanner has already inspected the diff.
Test changes:
- test_adapter_with_no_baseadapter_subclass_errors updated for the
new error message ("no concrete class inheriting from").
- New test_only_imported_baseadapter_subclass_does_not_count pins
the O1 __module__-filter behavior.
- 43/43 tests pass (was 42/42 before the new test).
- Real langgraph template still passes the full validator end-to-end.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the schema-versioning workstream of #90. Sets up the machinery
for "we will be updating a lot" (the user's framing) without forcing
the first real schema bump to discover semantics under deadline
pressure. Today every template is at v1; this PR adds the framework,
ships zero behavior change for v1 templates, and reserves v2+ for
when there's a concrete reason to bump.
Validator changes:
- `KNOWN_SCHEMA_VERSIONS = {1}` — the set the validator currently
accepts. Future bumps add to this set.
- `DEPRECATED_SCHEMA_VERSIONS: set[int] = set()` — versions accepted
with warning during a deprecation window.
- Per-version contract: `_check_schema_v1(config)` enforces the v1
REQUIRED_KEYS / OPTIONAL_KEYS / KNOWN_RUNTIMES contract — exactly
what the previous monolithic check_config_yaml did.
- Dispatch table: `SCHEMA_CHECKS = {1: _check_schema_v1}`. Versions
that aren't in the table hard-error.
- check_config_yaml() now: reads template_schema_version → emits
deprecation warning if applicable → dispatches to the right
SCHEMA_CHECKS entry → unknown versions hard-error with actionable
instructions ("add a SCHEMA_V<N> block").
- Schema versions are FROZEN once shipped: never edit a SCHEMA_V<N>
constant in place. To bump, ADD v<N+1> alongside, deprecate v<N>,
migrate consumers, drop v<N> next cycle. Header comment documents
the discipline.
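The dispatch machinery described above can be sketched like this; the error wording and return shape are illustrative, the constants and flow match the bullets:

```python
# Versioned-schema dispatch: read the version, warn if deprecated,
# dispatch to the per-version contract, hard-error on unknown versions.
KNOWN_SCHEMA_VERSIONS = {1}
DEPRECATED_SCHEMA_VERSIONS = set()   # versions accepted with a warning

SCHEMA_V1_REQUIRED_KEYS = {"template_schema_version", "name", "runtime"}

def _check_schema_v1(config):
    errors = []
    for key in SCHEMA_V1_REQUIRED_KEYS:
        if key == "template_schema_version":
            continue   # presence + type already verified by the dispatcher
        if key not in config:
            errors.append(f"missing required key `{key}`")
    return errors

SCHEMA_CHECKS = {1: _check_schema_v1}

def check_config_yaml(config):
    version = config.get("template_schema_version")
    if not isinstance(version, int):
        return ["missing/invalid template_schema_version"]
    if version not in KNOWN_SCHEMA_VERSIONS:
        return [f"unknown schema version {version}: add a SCHEMA_V{version} block"]
    findings = []
    if version in DEPRECATED_SCHEMA_VERSIONS:
        findings.append(f"warning: schema v{version} is deprecated")
    return findings + SCHEMA_CHECKS[version](config)

print(check_config_yaml({"template_schema_version": 1,
                         "name": "demo", "runtime": "langgraph"}))
```

A future bump adds v2 by extending KNOWN_SCHEMA_VERSIONS and SCHEMA_CHECKS, never by editing _check_schema_v1 in place.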
New script `migrate-template.py`:
- `MIGRATIONS: dict[int, Callable[[dict], dict]]` registry — each
entry maps a SOURCE version to the function that produces the
next version's dict. Empty today.
- `migrate_config(config, from, to)` chains migrations sequentially.
Forward-only (errors on backward), errors on missing intermediate
steps (never silently skip), asserts every migration stamps its
output's template_schema_version.
- CLI: `migrate-template.py [--from N] [--to M] [--dry-run] DIR`.
Defaults: --from = whatever config.yaml declares, --to = highest
reachable from MIGRATIONS (currently 1, so a no-op).
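The chaining rules can be sketched as below. The registry is empty in the real script today, so a hypothetical v1→v2 entry is shown purely to exercise the rules; parameter names differ from the description because `from` is a Python keyword:

```python
# Forward-only migration chain: error on backward, error on a missing
# intermediate step, assert every migration stamps its output version.
from typing import Callable

MIGRATIONS: dict[int, Callable[[dict], dict]] = {}

def migrate_config(config: dict, src: int, dst: int) -> dict:
    if dst < src:
        raise ValueError("backward migration is not supported")
    current = config
    for version in range(src, dst):
        if version not in MIGRATIONS:
            raise ValueError(f"no migration registered for v{version}")
        current = MIGRATIONS[version](current)
        # Every migration must stamp its output's version.
        assert current["template_schema_version"] == version + 1
    return current

def _v1_to_v2(cfg: dict) -> dict:    # hypothetical example migration
    out = dict(cfg)
    out["template_schema_version"] = 2
    return out

MIGRATIONS[1] = _v1_to_v2
print(migrate_config({"template_schema_version": 1, "name": "demo"}, 1, 2))
```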
Behavior change to the existing
test_missing_required_keys_errors test:
Previously the validator emitted 3 "missing required key" errors
when name/runtime/template_schema_version were all missing. Now it
short-circuits on missing version with a single actionable error —
listing downstream missing keys is noise on top of the real
problem (no version means we can't pick a contract). The test was
updated to pin the new behavior; a new sibling test
(test_missing_required_keys_under_v1_dispatch_errors) pins that v1
still lists name/runtime/etc. when present-with-v1.
Verification:
- 42/42 tests pass (20 prior + 9 new schema-dispatch tests in
test_validate_workspace_template.py + 17 new migrator tests in
test_migrate_template.py).
- Real langgraph template runs through the full updated validator
end-to-end with 0 warnings / 0 errors.
This + #17 means #90 is done end-to-end:
- Phase 2: validator green on all 8 templates as a required check (already shipped)
- Phase 2.5: adapter.py runtime-load contract (#17)
- Phase 3: schema versioning + migration framework (this PR)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds the third workstream of #90 (eliminate template repo drift): a
strong contract check that exercises adapter.py the same way the
runtime does at workspace boot. Without this, a template can have a
syntactically-valid Dockerfile + an adapter.py that ImportErrors at
runtime, build clean through Docker smoke, and crash on first user
prompt — exactly the human-error class #90 is meant to eliminate.
Existing checks ranked from weakest to strongest:
1. check_adapter() — text-grep for legacy `molecule_ai`
imports. Catches one specific footgun.
2. Docker build smoke — `docker build` succeeds. Doesn't RUN
the image, so adapter.py is never
imported. Misses every adapter-load
bug.
3. (NEW) check_adapter_runtime_load — imports adapter.py via the
same `importlib.spec_from_file_location`
path the runtime uses, and asserts at
least one class inherits from
molecule_runtime.adapters.base.BaseAdapter.
Hard-error conditions:
- adapter.py raises any exception during import (SyntaxError,
ImportError, NameError, etc.). Same exception would crash the
workspace at boot.
- No class in the module inherits from BaseAdapter. The runtime's
class-discovery silently falls through to the default langgraph
executor in this case — exactly the silent-failure shape the
contract is meant to catch.
Skip conditions:
- No adapter.py exists. Templates without one inherit the default
executor by design (policy, not drift).
- molecule-ai-workspace-runtime not importable in the validator
env. Warns loudly so the CI-config bug surfaces, but doesn't
hard-fail (we'd be reporting "your adapter is broken" when the
actual cause is missing infra).
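The check's core can be sketched as follows. A local demo_base module stands in for molecule_runtime.adapters.base; the return strings, warn-vs-fail plumbing, and module-name scheme are illustrative:

```python
# Runtime-load check: import adapter.py via the same
# spec_from_file_location path the runtime uses, then require at
# least one subclass of the base adapter class.
import importlib.util
import sys
import tempfile
import uuid
from pathlib import Path

def check_adapter_runtime_load(adapter_path, base_cls):
    if not adapter_path.exists():
        # Templates without adapter.py inherit the default executor.
        return "skip"
    # Unique module name so back-to-back invocations never share a slot.
    mod_name = f"_adapter_under_validation_{uuid.uuid4().hex}"
    spec = importlib.util.spec_from_file_location(mod_name, adapter_path)
    mod = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(mod)   # same exception would crash boot
    except Exception as exc:
        return f"error: adapter.py failed to import: {exc!r}"
    classes = [o for o in vars(mod).values()
               if isinstance(o, type) and issubclass(o, base_cls)
               and o is not base_cls]
    if not classes:
        return "error: no class inheriting from BaseAdapter"
    return "ok"

tmp = Path(tempfile.mkdtemp())
(tmp / "demo_base.py").write_text("class BaseAdapter: pass\n")
(tmp / "adapter.py").write_text(
    "from demo_base import BaseAdapter\n"
    "class MyAdapter(BaseAdapter): pass\n")
sys.path.insert(0, str(tmp))
from demo_base import BaseAdapter

print(check_adapter_runtime_load(tmp / "adapter.py", BaseAdapter))
```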
Workflow update: validate-workspace-template.yml now installs the
template's requirements.txt before invoking the validator (or
falls back to installing molecule-ai-workspace-runtime alone if the
template has no requirements.txt). This satisfies the runtime-load
check's import dependencies the same way the workspace container
does at boot — `pip install -r requirements.txt`.
Verified locally:
- 20/20 tests in test_validate_workspace_template.py pass
(14 existing + 6 new).
- Real langgraph template passes the full new validator including
runtime-load (0 warnings, 0 errors).
- Surveyed all 8 production templates' adapter.py shapes; every
one already inherits from BaseAdapter, so this check turns green
on first run with zero per-template fixups needed.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the README half of monorepo task #133. The v1 git tag now
exists at the current main HEAD (8b0fbac — includes the auto-promote
fail-loud fix from #15). Consumers should pin reusable-workflow refs
to @v1 so future breaking changes land on @v2 with @v1 staying
backward-compatible — same pattern as `actions/checkout@v4`.
This commit only updates the EXAMPLE adoption snippets in the
workflow headers. Existing consumers pinned at @main keep working
identically (the workflow content is unchanged); they migrate at
their own pace when next touching their CI. New consumers see @v1
as the recommended pin.
Touched:
- auto-promote-branch.yml (also added a paragraph explaining the
@v1 vs @main convention so future contributors don't reintroduce
@main as the recommendation)
- auto-promote-staging.yml (the snippet inside this file's header
references auto-promote-branch.yml, also moved to @v1)
- disable-auto-merge-on-push.yml
- publish-template-image.yml
The validate-* workflows (validate-plugin.yml, validate-org-template.yml,
validate-workspace-template.yml) don't have adoption snippets in their
headers — adding canonical examples there is a separate scope and not
part of this PR.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent code review caught a Critical issue inherited from the
pre-extraction workflow: the branch-protection API call falls through
to '{}' on any non-200, then the empty-GATES check treats this as
"no gates configured (or API inaccessible)" and sets ok=true. Combined
with --ff-only being ancestry-only (not test-status), a
green-but-flaky source branch could ff-promote red commits to the
target with zero CI enforcement.
The conflation of three response classes is the bug:
200 with .contexts[] populated → honor the gates (correct)
200 with empty .contexts → "no gates configured" → ok=true (correct)
404 (no branch protection) → "no gates configured" → ok=true (correct)
403 (token lacks permission) → silently treated like 404 (BUG)
Use `gh api -i` to capture the HTTP status line and discriminate:
- 200 → extract body, proceed to gate-check loop
- 404 → legitimate fallback to --ff-only safety, log notice
- 403/401 → fail loud with a concrete fix ("add administration: read
to your caller's permissions block")
- any other → fail loud with the response prefix for debugging
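The discrimination table above, sketched as a pure function over captured `gh api -i` output (status line, headers, blank line, body). This assumes newlines are already normalized and the function name and return shape are illustrative; the real fix lives in the workflow's bash.

```python
# Map a raw branch-protection API response to one of the four actions above.
def classify_protection_response(raw: str) -> tuple[str, str]:
    """Return (action, detail) for a captured `gh api -i` response."""
    status_line, _, rest = raw.partition("\n")
    parts = status_line.split()
    code = int(parts[1]) if len(parts) > 1 and parts[1].isdigit() else 0
    body = rest.split("\n\n", 1)[-1]  # everything after the header block
    if code == 200:
        return "gate-check", body  # proceed to the gate-check loop
    if code == 404:
        return "ff-only", "no branch protection"  # legitimate fallback
    if code in (401, 403):
        # Fail loud: token lacks `administration: read` permission.
        return "fail", "add administration: read to the caller's permissions"
    return "fail", f"unexpected response: {status_line}"
```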
Also:
- Update the README in the workflow header to document the
administration: read requirement.
- Add administration: read to molecule-ci's own self-caller
(auto-promote-staging.yml) so its behavior is preserved.
Verified locally against four real API responses:
- molecule-core/staging → HTTP 200, 8 gates → loop runs
- molecule-ci/main → HTTP 200, 0 gates → ok=true (notice)
- hackathon org-template/main → HTTP 200, 0 gates → ok=true (notice)
- this-repo-does-not-exist → HTTP 404 → legitimate fallback path
Closes a Critical from the post-merge review of #14.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Splits auto-promote-staging.yml into:
- auto-promote-branch.yml — new reusable workflow with
`on: workflow_call`. Inputs `from-branch` (default 'staging') and
`to-branch` (default 'main'). Repo-agnostic: gates are read from
the consuming repo's branch protection at run time, not hardcoded.
- auto-promote-staging.yml — molecule-ci's own self-running flow,
now a ~25-line wrapper that calls the reusable workflow with
staging→main hardcoded. Trigger and behavior unchanged for
molecule-ci itself.
Adoption pattern in any consumer repo:
# .github/workflows/auto-promote.yml
name: Auto-promote staging → main
on:
  push:
    branches: [staging]
  workflow_dispatch:
permissions:
  contents: write
  statuses: read
jobs:
  promote:
    uses: Molecule-AI/molecule-ci/.github/workflows/auto-promote-branch.yml@main
    with:
      from-branch: staging
      to-branch: main
Excluded by policy: molecule-core + molecule-controlplane stay
manual per CEO directive 2026-04-24. Those repos do NOT adopt the
reusable workflow; the extraction adds no surface to repos that
don't call it.
Closes monorepo task #93.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
P6 Phase 1: enforce the workspace-template contract via CI on every
template-repo push, eliminating the slow drift that produced 8
copies of a 28-line Dockerfile in different states of decay.
The previous validator (50 lines, soft warnings only) couldn't
catch the cache-trap pattern (Dockerfile missing ARG RUNTIME_VERSION)
that silently shipped the previous runtime wheel during cascade
publishes — observed five times in a row on 2026-04-27. Hardened
into structural checks that fail CI, not just warn:
- Dockerfile must base on python:3.11-slim
- Dockerfile must declare ARG RUNTIME_VERSION AND reference
${RUNTIME_VERSION} in a RUN block (the arg has to be in the
layer's command line for docker to hash it into the cache key)
- Dockerfile must create the agent uid-1000 user (Claude Code
refuses --dangerously-skip-permissions as root for safety)
- Dockerfile must end at molecule-runtime — directly via
ENTRYPOINT or via a wrapper script that exec's it (claude-code
has entrypoint.sh for gosu drop-priv; hermes has start.sh to
boot the hermes-agent daemon first; both are allowed)
- config.yaml must have name + runtime + integer
template_schema_version. Quoted "1" fails — previously observed in
a copy-pasted template where the YAML loader turned the value into str
- requirements.txt must declare molecule-ai-workspace-runtime
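Two of the structural checks above, sketched in Python. Function names and return shapes are illustrative, not the validator's actual interface, and the RUN regex is simplified (it does not follow backslash line continuations).

```python
# Sketch: Dockerfile ARG/RUN cache-key check and the integer
# template_schema_version check.
import re


def check_dockerfile(text: str) -> list[str]:
    errors = []
    if not re.search(r"^FROM\s+python:3\.11-slim", text, re.M):
        errors.append("must base on python:3.11-slim")
    if not re.search(r"^ARG\s+RUNTIME_VERSION", text, re.M):
        errors.append("missing ARG RUNTIME_VERSION")
    elif not re.search(r"^RUN\b.*\$\{RUNTIME_VERSION\}", text, re.M):
        # The ARG must appear in a RUN command line so docker hashes it
        # into that layer's cache key; declaring it alone is not enough.
        errors.append("ARG RUNTIME_VERSION not referenced in a RUN block")
    return errors


def check_schema_version(config: dict) -> list[str]:
    v = config.get("template_schema_version")
    # Quoted "1" arrives from the YAML loader as str and must fail.
    if not isinstance(v, int) or isinstance(v, bool):
        return ["template_schema_version must be an integer"]
    return []
```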
Also fixed: the original validator's warning telling adapter.py
NOT to import molecule_runtime was backwards — that's the
canonical package name post-#87. Now it warns on the legacy
molecule_ai prefix instead.
Reusable workflow change: instead of running
.molecule-ci/scripts/validate-workspace-template.py (a per-template
vendored copy that drifts as the validator evolves), the workflow
now checks out molecule-ci itself into .molecule-ci-canonical and
runs the canonical script from there. Single source of truth —
every template runs the SAME contract on every CI run. The legacy
.molecule-ci/scripts/ directories in each template repo can be
deleted in a Phase 2 cleanup PR.
14 unit tests pin the contract:
- canonical template passes
- claude-code-style custom entrypoint passes when the wrapper
exec's molecule-runtime
- 5 Dockerfile drift modes each error individually
- 3 config.yaml drift modes each error/warn
- requirements.txt missing-runtime errors
- legacy molecule_ai import warns
- regression cover: modern molecule_runtime import does NOT
trigger the (deleted) backwards warning
All 8 production template repos pass the new contract today —
this PR locks in the current good state, it does not force any
template-repo edits.
Contract documented at docs/template-contract.md so the rules are
discoverable without reading the validator.
Closes the cascade cache trap that bit us 5x today. Each cascade
rebuild ran against the same Dockerfile + requirements.txt content,
producing the same docker layer cache key — so even though
publish-runtime had just shipped a new version, pip install hit the
cached layer with the OLD version.
Mechanism:
- Reusable workflow now accepts optional `runtime_version` input
- Forwarded as `--build-arg RUNTIME_VERSION=$VERSION` to docker build
- Templates that declare `ARG RUNTIME_VERSION` get cache-key
invalidation per-version (different ARG value → different cache
key → fresh pip install layer)
- Templates that don't declare the ARG silently ignore it (no
breakage; phased rollout)
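The invalidation mechanism can be illustrated with a toy model of layer cache keying. Docker's real cache keying is more involved than a hash of the resolved command; this only shows why a RUN line that references the ARG re-keys per version while one that doesn't stays identical.

```python
# Toy model: a layer's cache key derived from the RUN command with build
# args substituted, as the shell would resolve them.
import hashlib


def layer_cache_key(run_command: str, build_args: dict[str, str]) -> str:
    resolved = run_command
    for name, value in build_args.items():
        resolved = resolved.replace("${%s}" % name, value)
    return hashlib.sha256(resolved.encode()).hexdigest()[:12]
```

A RUN line containing `${RUNTIME_VERSION}` yields a different key for each forwarded version; a line without it yields the same key regardless of the build arg, which is the cached-stale-wheel trap.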
Pairs with molecule-core PR #2181 (PyPI propagation wait + path
filter expansion). Together: cascade waits until PyPI serves the
new version, then fires with the version, templates rebuild against
that exact version with cache invalidation. No more "I shipped
0.1.X but image installs 0.1.X-1."
Phase 2 (separate PRs in template repos): each template's caller
forwards `${{ github.event.client_payload.runtime_version }}` and
each Dockerfile declares `ARG RUNTIME_VERSION` near pip install.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Documents the new reusable workflow shipped in PR #10:
- Caller pattern (~10 lines per consuming repo) under Usage
- Full description in "What each workflow validates" — explains the
2026-04-27 motivation, the org-wide repo setting it pairs with,
and the false-positive note for CI bot pushes
Companion to molecule-core CONTRIBUTING.md update (PR #2177) which
documents the contract from the developer's perspective. Both must
land for the safety guards to be discoverable from the places teams
actually read.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reusable workflow that consumers call from their pr-guards.yml on
pull_request:synchronize. When a new commit is pushed to an open PR
that has auto-merge enabled, this disables auto-merge and posts a
comment so the operator must explicitly re-engage after verifying.
Background: on 2026-04-27, PR #2174 in molecule-core auto-merged
with only the first commit because the second commit was pushed
AFTER the merge queue had locked the PR's SHA. The second commit
ended up orphaned on a merged-and-deleted branch (the wider
"automatically delete head branches" repo setting now blocks the
push entirely; this workflow catches the race window where the PR
is queued but not yet merged).
Defense in depth — if both fixes are active:
1. Repo setting "delete branch on merge" prevents pushes to a
merged branch (post-merge orphan case).
2. This workflow catches in-queue races (push lands while the
queue is processing) by force-disabling auto-merge so the
operator must re-engage explicitly.
Together they cover the full lifecycle of "auto-merge enabled →
new commits arrive" without relying on operator discipline.
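The force-disable step itself is a single GraphQL mutation. `disablePullRequestAutoMerge` is GitHub's real mutation name; the helper below is an illustrative sketch that only builds the payload a caller would POST to `/graphql`, and assumes the PR's GraphQL node ID is already known.

```python
# Sketch: build the GraphQL payload that disables auto-merge on a PR.
def build_disable_automerge_call(pr_node_id: str) -> dict:
    query = """
    mutation($prId: ID!) {
      disablePullRequestAutoMerge(input: {pullRequestId: $prId}) {
        pullRequest { number }
      }
    }
    """
    return {"query": query, "variables": {"prId": pr_node_id}}
```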
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Switches the bare-imports lint from an inline RUNTIME_MODULES list
to the _runtime_modules.json manifest emitted by molecule-core's
build_runtime_package.py. Eliminates the third place the runtime
module list lived — now the build script is the single source of
truth.
Tonight's work surfaced that the same closed list lived in three
places, each drifting independently. The build script's TOP_LEVEL_MODULES
went stale on transcript_auth, the smoke-test step here had a
hardcoded mirror that would have drifted next time a top-level
module was added, and runtime-pin-compat tested transitively via
import molecule_runtime.main (which only catches breakage, not
drift). One source of truth fixes all three at once.
Implementation:
- pip download molecule-ai-workspace-runtime --no-deps to /tmp
- unzip _runtime_modules.json from the wheel
- merge top_level_modules + subpackages into the regex alternation
(subpackages can be bare-imported too — `from lib.pre_stop`)
- on any fetch failure (network, missing manifest in older wheel),
fall back to the inline list with a workflow warning so the lint
still runs but the operator knows to investigate
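The merge-plus-fallback step can be sketched as follows. The manifest key names `top_level_modules` and `subpackages` come from this description; the inline fallback excerpt and the function shape are illustrative, and the real step shells out to pip and unzip rather than taking a string.

```python
# Sketch: build the bare-import regex from _runtime_modules.json, falling
# back to the inline list (with a flag for the workflow warning) on failure.
import json
import re

INLINE_FALLBACK = ["plugins", "executor_helpers", "heartbeat"]  # illustrative excerpt


def runtime_module_pattern(manifest_text):
    try:
        manifest = json.loads(manifest_text)
        # Subpackages can be bare-imported too, so merge both lists.
        modules = manifest["top_level_modules"] + manifest["subpackages"]
        fell_back = False
    except (TypeError, json.JSONDecodeError, KeyError):
        modules = INLINE_FALLBACK  # lint still runs; caller emits a warning
        fell_back = True
    alternation = "|".join(re.escape(m) for m in sorted(modules))
    return re.compile(rf"^from ({alternation})(\.|\s)"), fell_back
```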
Two consequences:
- Templates rebuilt against runtime ≥ the version that ships the
manifest get the always-fresh list automatically.
- Templates rebuilt against the old wheel (pre-manifest) still get
the working inline list — no regression.
Future cleanup (separate PR after a few release cycles): once all
template repos have rebuilt at least once with the manifest path,
the inline fallback can shrink to a panic message.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two new gates that would have prevented today's
post-#87 template-extraction bug parade:
1. **Bare-import lint** — fail-fast pre-build check that greps
template *.py files for `from <runtime_module> import` (where
<runtime_module> is in the closed list mirroring workspace/*.py
basenames). When the runtime was bundled into workspace/, bare
imports resolved against sibling files; in standalone template
repos they explode at startup. Five separate templates shipped
broken on 2026-04-27 because of this exact pattern (claude-code:
plugins, executor_helpers, heartbeat, a2a_client, platform_auth;
langgraph: agent, a2a_executor; deepagents: a2a_executor;
gemini-cli: config, executor_helpers x2). The lint runs before
docker login + buildx setup so a bad PR returns red in seconds.
2. **Import every /app/*.py at boot** (deeper smoke) — replaces
`python -c "import adapter"` with a loop importing every Python
module at /app/. The old single-import didn't traverse to
sibling modules adapter.py imports lazily inside
`create_executor()` (the executor.py family). That's why the
hermes a2a-sdk migration bug and langgraph's bare a2a_executor
import slipped through every prior gate even though the boot
smoke "passed." Importing every module at module level forces all
of its imports to resolve, including those in executor.py.
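The deeper smoke loop can be sketched as below: the body a container could run in place of `python -c "import adapter"`. The function name, return shape, and directory handling are illustrative.

```python
# Sketch: import every top-level .py under the app directory so lazy
# sibling imports (the executor.py family) are forced to resolve at once.
import importlib
import sys
from pathlib import Path


def import_all_modules(app_dir: str) -> list[str]:
    """Import every *.py in app_dir; return the modules that failed."""
    sys.path.insert(0, app_dir)
    failures = []
    for py in sorted(Path(app_dir).glob("*.py")):
        try:
            importlib.import_module(py.stem)
        except Exception as exc:
            failures.append(f"{py.name}: {exc!r}")
    return failures
```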
Both gates use the closed-list pattern (deliberate, easy to update,
no false-positives on legit third-party imports). The runtime module
list mirrors the equivalent in scripts/build_runtime_package.py;
both should be updated together when a new top-level workspace
module ships.
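The closed-list lint can be sketched as follows; the module list here is a truncated illustrative excerpt, not the full mirror of workspace/*.py basenames, and the function shape is not the workflow's actual implementation.

```python
# Sketch: flag bare `from <runtime_module> import` lines in template *.py
# files, using a closed list so third-party imports never false-positive.
import re
from pathlib import Path

RUNTIME_MODULES = ["plugins", "executor_helpers", "heartbeat", "a2a_client"]  # excerpt

BARE_IMPORT = re.compile(
    rf"^\s*from ({'|'.join(RUNTIME_MODULES)}) import.*$", re.M
)


def lint_bare_imports(template_dir: str) -> list[str]:
    """Return '<file>: <offending line>' hits across the template's *.py."""
    hits = []
    for py in sorted(Path(template_dir).glob("*.py")):
        for m in BARE_IMPORT.finditer(py.read_text()):
            hits.append(f"{py.name}: {m.group(0).strip()}")
    return hits
```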
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Today's incident: a template's adapter.py imported a symbol
(RuntimeCapabilities) from molecule_runtime that the published runtime
didn't yet export. The image built fine, the existing "smoke test"
inspected the entrypoint string and passed, and a broken :latest
shipped to GHCR. Every claude-code + hermes provision then hung in
"provisioning" status until the 10-min sweep marked them failed.
The old smoke test was named correctly but didn't actually exercise
anything — `docker inspect` doesn't catch ImportError. This change
splits the build/push step into three:
1. Build with `load: true, push: false` so the image lands on the
runner's local docker.
2. Smoke test runs `docker run ... python -c "import adapter"` against
the loaded image. This catches the version-skew class of bug
(adapter.py imports a symbol the installed runtime doesn't export),
plus syntax errors, missing files, and anything else that breaks
import-time.
3. Push :latest + :sha-* only if the smoke test passes. The push step
reuses the cached build, so it's fast.
Net cost: ~5s per publish (the docker run). Net benefit: broken images
can no longer poison :latest.
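The three-step ordering can be sketched with the docker invocations injected, so the gate logic is visible; the real steps live in the reusable workflow's YAML, and the function shape here is illustrative.

```python
# Sketch: build locally, smoke-test the loaded image, push only on success.
import subprocess


def build_smoke_push(image: str, run=subprocess.run) -> bool:
    # 1. Build with load (not push) so the image lands on the runner's docker.
    run(["docker", "buildx", "build", "--load", "-t", image, "."], check=True)
    # 2. Smoke: actually import adapter inside the built image.
    smoke = run(["docker", "run", "--rm", image, "python", "-c", "import adapter"])
    if smoke.returncode != 0:
        return False  # broken image never reaches the registry
    # 3. Push reuses the cached build, so this step is fast.
    run(["docker", "push", image], check=True)
    return True
```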
All 8 caller templates (claude-code, gemini-cli, hermes, langgraph,
crewai, autogen, deepagents, openclaw) inherit the gate automatically
since this is the reusable workflow they all call.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>