Moves the canonical PR-based staging→main auto-promote flow into a
reusable workflow that protected-branch repos can call instead of
duplicating ~240 lines of YAML each.
Why two reusable variants in this repo:

auto-promote-staging.yml (existing — ff-only, direct push)
  For repos WITHOUT required-status-checks branch protection.
  Already used for molecule-ci, molecule-app, molecule-docs,
  molecule-monorepo. Cannot satisfy protected-branch rules
  requiring status checks "set by expected GitHub apps".

auto-promote-staging-pr.yml (THIS PR — PR-based)
  For repos WITH required-status-checks. Opens (or reuses) a
  staging→main PR, enables auto-merge, lets the merge queue land
  it. Required path for molecule-core + molecule-controlplane
  (per the 2026-04-28 incident where direct ff-only push was
  failing GH006 on protected refs).
Inputs:
  gates         — CSV of workflow filenames to require green
  target-branch — promote target (default: main)
  source-branch — promote source (default: staging)
  enabled-var   — repo variable name gating rollout
                  (default: AUTO_PROMOTE_ENABLED)
  merge-method  — merge|squash|rebase (default: merge — matches
                  user preference for merge commits over squash)
  force         — pass through caller's workflow_dispatch.force input
Caller pattern (kept minimal — see header comment in the workflow):
  on:
    workflow_run:
      workflows: [CI, ...]
      types: [completed]
    workflow_dispatch:
      inputs:
        force: ...
  permissions:
    contents: write
    pull-requests: write
  jobs:
    promote:
      uses: Molecule-AI/molecule-ci/.github/workflows/auto-promote-staging-pr.yml@main
      with:
        gates: "ci.yml,e2e-staging-canvas.yml,..."
        force: ${{ github.event.inputs.force == 'true' }}
      secrets: inherit
The caller's `on.workflow_run.workflows` (display names) MUST stay in
sync with the `gates` input (filenames). The reusable workflow can't validate
this because GitHub Actions decouples display names from filenames;
this is the same coupling the original molecule-core workflow had.
Migration of the existing 242-line molecule-core workflow to this
reusable is a follow-up PR. Same pattern applies to
molecule-controlplane once it grows protected-branch
auto-promote (today CP uses the auto-sync-main-to-staging shape
inherited from #142).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The validate-org-template.yml and validate-plugin.yml workflows
expected `.molecule-ci/scripts/` to be vendored INTO each calling
repo. That worked for the repos that copied the directory in, but
broke on the ones that didn't:
- molecule-ai-org-template-medo-smoke
- molecule-ai-org-template-molecule-worker-gemini
- molecule-ai-org-template-reno-stars
- molecule-ai-plugin-molecule-compliance
- molecule-ai-plugin-molecule-freeze-scope
- molecule-ai-plugin-molecule-prompt-watchdog
Surfaced when the secret-scan rollout PRs hit those repos and the
required validate check failed on missing
`.molecule-ci/scripts/requirements.txt`.
Mirror the same fix already in validate-workspace-template.yml: a
second `actions/checkout@v4` of molecule-ci into
`.molecule-ci-canonical/`, with script paths re-pointed accordingly.
Single source of truth — callers never need to vendor or sync.
Also adds `.molecule-ci-canonical` to the secret-scan SKIP_DIRS so
the side-checked-out tree doesn't get walked.
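For reference, a minimal sketch of the canonical-checkout shape being
mirrored; the step names, script filename, and exact layout are
illustrative here rather than copied from the workflows:

    - uses: actions/checkout@v4            # the calling template/plugin repo
    - uses: actions/checkout@v4            # canonical molecule-ci, side by side
      with:
        repository: Molecule-AI/molecule-ci
        path: .molecule-ci-canonical
    - name: Validate
      run: python3 .molecule-ci-canonical/scripts/validate-org-template.py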
Callers can drop their vendored `.molecule-ci/scripts/` copies in a
follow-up cleanup. Both shapes work after this PR — the vendored
copy is harmless dead weight, not a conflict.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent post-merge review of #19 surfaced two more findings.
Both shipped here.
Q3 — abstract intermediates + multiple-concrete-classes.
The class-discovery filter from O1 (#19) only excluded BaseAdapter
itself. Two failure modes slipped through:
(a) A locally-defined abstract intermediate
`class FrameworkAdapter(BaseAdapter): @abstractmethod ...`
passed the filter, falsely satisfying "at least one
concrete subclass" while still being non-instantiable at
workspace boot.
(b) A template defining BOTH `class FrameworkAdapter(BaseAdapter)`
AND `class ConcreteAdapter(FrameworkAdapter)` had both pass
the filter, producing a silent ambiguity where the runtime's
class-discovery picks one per its resolution rules — wrong
class loaded after a future runtime refactor.
Fixes:
- Add `not inspect.isabstract(obj)` to the discovery filter so
abstract intermediates are excluded.
- Hard-error if `len(adapter_classes) > 1` listing both names so
the contributor knows exactly which classes are competing.
Three new tests pin the behaviors:
- test_abstract_intermediate_alone_does_not_count
- test_abstract_plus_concrete_passes_with_concrete_only
- test_multiple_concrete_baseadapter_subclasses_errors
Identity-based deduplication.
Caught against the real langgraph template during smoke-testing
the Q3 fix: production adapters often do
`Adapter = ConcreteAdapter` as a module-level alias for the
runtime's discovery convention. `vars(mod)` returns BOTH bindings
pointing at the same class object, so the new
multiple-concrete-classes error fired falsely on every aliased
template.
Fix: deduplicate by `id(obj)` BEFORE counting, so the same class
object under multiple bindings counts once. New regression test
test_aliased_concrete_class_is_deduplicated pins this against
any future filter regression.
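A minimal sketch of the resulting filter, combining the O1 `__module__`
check, the Q3 abstract exclusion, the identity dedup, and the
multiple-concrete hard error; the function name and error wording are
illustrative, and the real logic lives in validate-workspace-template.py:

    import inspect
    from molecule_runtime.adapters.base import BaseAdapter

    def _discover_concrete_adapters(mod, module_name: str) -> list[type]:
        seen: dict[int, type] = {}
        for obj in vars(mod).values():
            if not (inspect.isclass(obj) and issubclass(obj, BaseAdapter)):
                continue
            if obj is BaseAdapter:
                continue
            if obj.__module__ != module_name:   # O1: ignore re-exported runtime classes
                continue
            if inspect.isabstract(obj):         # Q3(a): abstract intermediates don't count
                continue
            seen[id(obj)] = obj                 # alias fix: one entry per class object
        classes = list(seen.values())
        if len(classes) > 1:                    # Q3(b): competing concrete classes hard-error
            names = ", ".join(sorted(c.__name__ for c in classes))
            raise ValueError(f"multiple concrete BaseAdapter subclasses found: {names}")
        return classes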
Existing tests updated to use fully-concrete BaseAdapter subclasses
(matching production templates) since the new abstract-filter
correctly rejects partial stubs that don't override every abstract
method BaseAdapter declares (5 methods: name, display_name,
description, setup, create_executor).
Q5 — GITHUB_TOKEN scope lockdown.
validate-workspace-template.yml runs untrusted-by-design code from
the calling template repo: pip post-install hooks, adapter.py
imports, Dockerfile RUN steps. Each of those primitives executes
with GITHUB_TOKEN in env. The workflow had no `permissions:`
block, defaulting to whatever the calling repo grants — often
contents: write.
Add `permissions: contents: read` at the workflow level. Worst-
case-with-token now drops to "read public repo state" — no write
to issues, no push to branches, no comment-spam, no workflow
re-trigger. Partial mitigation; the deeper `pull_request_target`
discipline is bigger scope (tracked separately).
Verification:
- 47/47 tests pass (was 43; +3 abstract/multi-concrete, +1 alias)
- All 8 production templates pass the full updated validator
end-to-end with 0 warnings / 0 errors
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cleanup of #19's commit, which inadvertently included scripts/__pycache__/
.pyc files generated by running pytest locally during the review-
followup work. The repo's .gitignore had no Python-cache section at
all, so nothing prevented this — adding it now to make the same
mistake structurally impossible.
Files removed from tracking (still ignored locally going forward):
- scripts/__pycache__/migrate-template.cpython-313.pyc
- scripts/__pycache__/test_migrate_template.cpython-313-pytest-9.0.3.pyc
- scripts/__pycache__/test_validate_workspace_template.cpython-313-pytest-9.0.3.pyc
- scripts/__pycache__/validate-workspace-template.cpython-313.pyc
Gitignore additions cover the standard set:
__pycache__/, *.pyc, *.pyo, *.pyd, .pytest_cache/, .mypy_cache/,
.ruff_cache/
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent code review of #17 (adapter runtime-load) and #18 (schema
versioning) surfaced four Required and three Optional findings worth
fixing before the patterns harden into the codebase.
Required:
R1: Delete .molecule-ci/scripts/{validate-workspace-template,
migrate-template}.py — dead-vendored mirror. The new validator
workflow invokes .molecule-ci-canonical/scripts/ (the canonical
clone), not .molecule-ci/scripts/. The mirror was the exact drift
class #90 is supposed to eliminate: next contributor would edit
one copy and silently diverge. Other workflows (validate-plugin,
validate-org-template) still use the legacy path and keep their
own scripts there — so removing OUR two files is asymmetric but
correct, and the legacy path can phase out organically.
R2: validate-workspace-template.yml's `cache-dependency-path` pointed
at the validator's own deps file (just `pyyaml>=6.0`). Pip cache
key never invalidated when the template added crewai/langgraph/
etc. Repoint to the calling repo's `requirements.txt`, which is
the file the heavy install actually uses one step later.
R3: `_check_schema_v1` looped `SCHEMA_V1_REQUIRED_KEYS` and re-emitted
"missing required key `template_schema_version`" — but the
dispatcher already verified the field is present + int before
reaching v1, so that branch was dead defensive code. Skip it
explicitly with a comment, but keep the field in the constant for
contract documentation + the unknown-keys filter.
R4: `_template_adapter_under_validation` was a fixed sys.modules key,
meaning back-to-back invocations in the same Python process
shared the slot. Use a per-call-unique name keyed on the absolute
path's hash. No observed bug today; defensive-only.
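A sketch of the R4 shape, with an illustrative helper name (the real
key is whatever validate-workspace-template.py derives from the hash):

    import hashlib
    from pathlib import Path

    def _unique_module_name(adapter_path: Path) -> str:
        # One sys.modules slot per call: back-to-back validations in the
        # same process can no longer stomp on each other's module entry.
        digest = hashlib.sha256(str(adapter_path.resolve()).encode()).hexdigest()[:12]
        return f"_template_adapter_under_validation_{digest}"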
Optional:
O1: Class-discovery filter now also requires `__module__ == module_name`.
Without this, a `from molecule_runtime.adapters.base import
AbstractCLIAdapter` re-export would count as a "real" adapter,
masking the genuine "no concrete subclass" case the gate exists
to catch. Cheap and forward-proofs against any future abstract
intermediate the runtime might expose. Added a sibling test
pinning the new behavior.
O2: migrate-template.py's docstring claimed "uses ruamel.yaml when
available" but the implementation only ever calls `yaml.safe_dump`.
Replaced the lie with a clearer caveat block + a forward-pointer
to ruamel-when-comments-detected as a future enhancement.
O3: Reordered the workflow so the secret-scan step runs BEFORE
`pip install -r requirements.txt`. Same threat surface as the
Docker build smoke (which already runs first), but cheap defense
in depth: a template PR that adds a hostile dep to requirements.txt
now has its post-install hook execute only AFTER the secret scanner
has inspected the diff.
Test changes:
- test_adapter_with_no_baseadapter_subclass_errors updated for the
new error message ("no concrete class inheriting from").
- New test_only_imported_baseadapter_subclass_does_not_count pins
the O1 __module__-filter behavior.
- 43/43 tests pass (was 42/42 before the new test).
- Real langgraph template still passes the full validator end-to-end.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the schema-versioning workstream of #90. Sets up the machinery
for "we will be updating a lot" (the user's framing) without forcing
the first real schema bump to discover semantics under deadline
pressure. Today every template is at v1; this PR adds the framework,
ships zero behavior change for v1 templates, and reserves v2+ for
when there's a concrete reason to bump.
Validator changes:
- `KNOWN_SCHEMA_VERSIONS = {1}` — the set the validator currently
accepts. Future bumps add to this set.
- `DEPRECATED_SCHEMA_VERSIONS: set[int] = set()` — versions accepted
with warning during a deprecation window.
- Per-version contract: `_check_schema_v1(config)` enforces the v1
REQUIRED_KEYS / OPTIONAL_KEYS / KNOWN_RUNTIMES contract — exactly
what the previous monolithic check_config_yaml did.
- Dispatch table: `SCHEMA_CHECKS = {1: _check_schema_v1}`. Versions
that aren't in the table hard-error.
- check_config_yaml() now: reads template_schema_version → emits
deprecation warning if applicable → dispatches to the right
SCHEMA_CHECKS entry → unknown versions hard-error with actionable
instructions ("add a SCHEMA_V<N> block").
- Schema versions are FROZEN once shipped: never edit a SCHEMA_V<N>
constant in place. To bump, ADD v<N+1> alongside, deprecate v<N>,
migrate consumers, drop v<N> next cycle. Header comment documents
the discipline.
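A condensed sketch of the dispatch shape described above; the v1 body
is trimmed to two keys here, and error/warning plumbing is simplified
relative to the real script:

    KNOWN_SCHEMA_VERSIONS = {1}
    DEPRECATED_SCHEMA_VERSIONS: set[int] = set()

    def _check_schema_v1(config: dict) -> list[str]:
        # Real version enforces the full REQUIRED_KEYS / OPTIONAL_KEYS /
        # KNOWN_RUNTIMES contract; shown with two keys only.
        return [f"missing required key `{key}`"
                for key in ("name", "runtime") if key not in config]

    SCHEMA_CHECKS = {1: _check_schema_v1}

    def check_config_yaml(config: dict) -> list[str]:
        version = config.get("template_schema_version")
        if not isinstance(version, int):
            return ["template_schema_version must be present and an unquoted integer"]
        if version not in SCHEMA_CHECKS:
            return [f"unknown template_schema_version {version}: add a SCHEMA_V{version} block"]
        if version in DEPRECATED_SCHEMA_VERSIONS:
            print(f"warning: schema v{version} is deprecated")
        return SCHEMA_CHECKS[version](config)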
New script `migrate-template.py`:
- `MIGRATIONS: dict[int, Callable[[dict], dict]]` registry — each
entry maps a SOURCE version to the function that produces the
next version's dict. Empty today.
- `migrate_config(config, from, to)` chains migrations sequentially.
Forward-only (errors on backward), errors on missing intermediate
steps (never silently skip), asserts every migration stamps its
output's template_schema_version.
- CLI: `migrate-template.py [--from N] [--to M] [--dry-run] DIR`.
Defaults: --from = whatever config.yaml declares, --to = highest
reachable from MIGRATIONS (currently 1, so a no-op).
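A sketch of the chaining semantics, with illustrative parameter names
(the script's CLI flags are --from/--to):

    from typing import Callable

    MIGRATIONS: dict[int, Callable[[dict], dict]] = {}  # source version -> step producing the next

    def migrate_config(config: dict, from_version: int, to_version: int) -> dict:
        if to_version < from_version:
            raise ValueError("backward migration is not supported")
        current = config
        for v in range(from_version, to_version):
            step = MIGRATIONS.get(v)
            if step is None:                 # never silently skip an intermediate step
                raise ValueError(f"no migration registered for v{v} -> v{v + 1}")
            current = step(current)
            if current.get("template_schema_version") != v + 1:
                raise AssertionError(f"migration v{v} -> v{v + 1} did not stamp its output")
        return current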
Behavior change to the existing
test_missing_required_keys_errors test:
Previously the validator emitted 3 "missing required key" errors
when name/runtime/template_schema_version were all missing. Now it
short-circuits on missing version with a single actionable error —
listing downstream missing keys is noise on top of the real
problem (no version means we can't pick a contract). The test was
updated to pin the new behavior; a new sibling test
(test_missing_required_keys_under_v1_dispatch_errors) pins that v1
still lists name/runtime/etc. when the version is present and set to 1.
Verification:
- 42/42 tests pass (20 prior + 9 new schema-dispatch tests in
test_validate_workspace_template.py + 17 new migrator tests in
test_migrate_template.py).
- Real langgraph template runs through the full updated validator
end-to-end with 0 warnings / 0 errors.
This + #17 means #90 is done end-to-end:
- Phase 2: validator green on all 8 templates as a required check (already shipped)
- Phase 2.5: adapter.py runtime-load contract (#17)
- Phase 3: schema versioning + migration framework (this PR)
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds the third workstream of #90 (eliminate template repo drift): a
strong contract check that exercises adapter.py the same way the
runtime does at workspace boot. Without this, a template can have a
syntactically-valid Dockerfile + an adapter.py that ImportErrors at
runtime, build clean through Docker smoke, and crash on first user
prompt — exactly the human-error class #90 is meant to eliminate.
Existing checks ranked from weakest to strongest:
1. check_adapter() — text-grep for legacy `molecule_ai` imports.
   Catches one specific footgun.
2. Docker build smoke — `docker build` succeeds. Doesn't RUN the
   image, so adapter.py is never imported. Misses every
   adapter-load bug.
3. (NEW) check_adapter_runtime_load — imports adapter.py via the
   same `importlib.util.spec_from_file_location` path the runtime
   uses, and asserts at least one class inherits from
   molecule_runtime.adapters.base.BaseAdapter.
Hard-error conditions:
- adapter.py raises any exception during import (SyntaxError,
ImportError, NameError, etc.). Same exception would crash the
workspace at boot.
- No class in the module inherits from BaseAdapter. The runtime's
class-discovery silently falls through to the default langgraph
executor in this case — exactly the silent-failure shape the
contract is meant to catch.
Skip conditions:
- No adapter.py exists. Templates without one inherit the default
executor by design (policy, not drift).
- molecule-ai-workspace-runtime not importable in the validator
env. Warns loudly so the CI-config bug surfaces, but doesn't
hard-fail (we'd be reporting "your adapter is broken" when the
actual cause is missing infra).
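A minimal sketch of the check under those rules; names, the return
convention, and the fixed module key are illustrative (the real check
reports through the validator's error/warning lists):

    import importlib.util
    from pathlib import Path

    def check_adapter_runtime_load(template_dir: Path) -> list[str]:
        adapter_path = template_dir / "adapter.py"
        if not adapter_path.exists():
            return []                                  # skip: template inherits the default executor
        try:
            from molecule_runtime.adapters.base import BaseAdapter
        except ImportError:
            print("warning: molecule-ai-workspace-runtime not importable; skipping runtime-load check")
            return []
        spec = importlib.util.spec_from_file_location("_template_adapter_under_validation", adapter_path)
        module = importlib.util.module_from_spec(spec)
        try:
            spec.loader.exec_module(module)            # same import path/failure surface as workspace boot
        except Exception as exc:
            return [f"adapter.py failed to import: {exc!r}"]
        concrete = [obj for obj in vars(module).values()
                    if isinstance(obj, type) and issubclass(obj, BaseAdapter) and obj is not BaseAdapter]
        return [] if concrete else ["no class in adapter.py inherits from BaseAdapter"]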
Workflow update: validate-workspace-template.yml now installs the
template's requirements.txt before invoking the validator (or
falls back to installing molecule-ai-workspace-runtime alone if the
template has no requirements.txt). This satisfies the runtime-load
check's import dependencies the same way the workspace container
does at boot — `pip install -r requirements.txt`.
Verified locally:
- 20/20 tests in test_validate_workspace_template.py pass
(14 existing + 6 new).
- Real langgraph template passes the full new validator including
runtime-load (0 warnings, 0 errors).
- Surveyed all 8 production templates' adapter.py shapes; every
one already inherits from BaseAdapter, so this check turns green
on first run with zero per-template fixups needed.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the README half of monorepo task #133. The v1 git tag now
exists at the current main HEAD (8b0fbac — includes the auto-promote
fail-loud fix from #15). Consumers should pin reusable-workflow refs
to @v1 so future breaking changes land on @v2 with @v1 staying
backward-compatible — same pattern as `actions/checkout@v4`.
This commit only updates the EXAMPLE adoption snippets in the
workflow headers. Existing consumers pinned at @main keep working
identically (the workflow content is unchanged); they migrate at
their own pace when next touching their CI. New consumers see @v1
as the recommended pin.
Touched:
- auto-promote-branch.yml (also added a paragraph explaining the
@v1 vs @main convention so future contributors don't reintroduce
@main as the recommendation)
- auto-promote-staging.yml (the snippet inside this file's header
references auto-promote-branch.yml, also moved to @v1)
- disable-auto-merge-on-push.yml
- publish-template-image.yml
The validate-* workflows (validate-plugin.yml, validate-org-template.yml,
validate-workspace-template.yml) don't have adoption snippets in their
headers — adding canonical examples there is a separate scope and not
part of this PR.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent code review caught a Critical issue inherited from the
pre-extraction workflow: the branch-protection API call falls through
to '{}' on any non-200, then the empty-GATES check treats this as
"no gates configured (or API inaccessible)" and sets ok=true. Combined
with --ff-only being ancestry-only (not test-status), a green-but-
flaky source branch could ff-promote red commits to the target with
zero CI enforcement.
The conflation of three response classes is the bug:
200 with .contexts[] populated → honor the gates (correct)
200 with empty .contexts → "no gates configured" → ok=true (correct)
404 (no branch protection) → "no gates configured" → ok=true (correct)
403 (token lacks permission) → silently treated like 404 (BUG)
Use `gh api -i` to capture the HTTP status line and discriminate:
- 200 → extract body, proceed to gate-check loop
- 404 → legitimate fallback to --ff-only safety, log notice
- 403/401 → fail loud with a concrete fix ("add administration: read
to your caller's permissions block")
- any other → fail loud with the response prefix for debugging
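A sketch of the discrimination in shell, assuming the
required-status-checks endpoint and these variable names (the real
step's wording and body parsing may differ):

    resp=$(gh api -i "repos/${REPO}/branches/${TARGET}/protection/required_status_checks" 2>/dev/null || true)
    status=$(printf '%s\n' "$resp" | head -1 | awk '{print $2}')
    case "$status" in
      200) gates=$(printf '%s\n' "$resp" | sed '1,/^[[:space:]]*$/d' | jq -r '.contexts[]?') ;;  # gate loop
      404) echo "::notice::no branch protection on ${TARGET}; falling back to --ff-only safety" ;;
      401|403) echo "::error::token cannot read branch protection; add 'administration: read' to the caller's permissions block"; exit 1 ;;
      *) echo "::error::unexpected response from branch-protection API:"; printf '%s\n' "$resp" | head -5; exit 1 ;;
    esac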
Also:
- Update the README in the workflow header to document the
administration: read requirement.
- Add administration: read to molecule-ci's own self-caller
(auto-promote-staging.yml) so its behavior is preserved.
Verified locally against four real API responses:
- molecule-core/staging → HTTP 200, 8 gates → loop runs
- molecule-ci/main → HTTP 200, 0 gates → ok=true (notice)
- hackathon org-template/main → HTTP 200, 0 gates → ok=true (notice)
- this-repo-does-not-exist → HTTP 404 → legitimate fallback path
Closes a Critical from the post-merge review of #14.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Splits auto-promote-staging.yml into:
- auto-promote-branch.yml — new reusable workflow with
`on: workflow_call`. Inputs `from-branch` (default 'staging') and
`to-branch` (default 'main'). Repo-agnostic: gates are read from
the consuming repo's branch protection at run time, not hardcoded.
- auto-promote-staging.yml — molecule-ci's own self-running flow,
now a ~25-line wrapper that calls the reusable workflow with
staging→main hardcoded. Trigger and behavior unchanged for
molecule-ci itself.
Adoption pattern in any consumer repo:
  # .github/workflows/auto-promote.yml
  name: Auto-promote staging → main
  on:
    push:
      branches: [staging]
    workflow_dispatch:
  permissions:
    contents: write
    statuses: read
  jobs:
    promote:
      uses: Molecule-AI/molecule-ci/.github/workflows/auto-promote-branch.yml@main
      with:
        from-branch: staging
        to-branch: main
Excluded by policy: molecule-core + molecule-controlplane stay
manual per CEO directive 2026-04-24. Those repos do NOT adopt the
reusable workflow; the extraction adds no surface to repos that
don't call it.
Closes monorepo task #93.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
P6 Phase 1: enforce the workspace-template contract via CI on every
template-repo push, eliminating the slow drift that produced 8
copies of a 28-line Dockerfile in different states of decay.
The previous validator (50 lines, soft warnings only) couldn't
catch the cache-trap pattern (Dockerfile missing ARG RUNTIME_VERSION)
that silently shipped the previous runtime wheel during cascade
publishes — observed five times in a row on 2026-04-27. It is now
hardened into structural checks that fail CI instead of just warning:
- Dockerfile must base on python:3.11-slim
- Dockerfile must declare ARG RUNTIME_VERSION AND reference
${RUNTIME_VERSION} in a RUN block (the arg has to be in the
layer's command line for docker to hash it into the cache key)
- Dockerfile must create the agent uid-1000 user (Claude Code
refuses --dangerously-skip-permissions as root for safety)
- Dockerfile must end at molecule-runtime — directly via
ENTRYPOINT or via a wrapper script that exec's it (claude-code
has entrypoint.sh for gosu drop-priv; hermes has start.sh to
boot the hermes-agent daemon first; both are allowed)
- config.yaml must have name + runtime + integer
template_schema_version. Quoted "1" fails — observed previously
in a copy-pasted template that the YAML loader turned into str
- requirements.txt must declare molecule-ai-workspace-runtime
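A minimal Dockerfile shape that satisfies those checks; illustrative
only (real templates add their own deps, wrapper scripts, and layout):

    FROM python:3.11-slim
    ARG RUNTIME_VERSION
    WORKDIR /app
    COPY requirements.txt .
    # RUNTIME_VERSION appears in the RUN command line, so a new value
    # changes the layer's cache key and forces a fresh pip install
    RUN pip install --no-cache-dir -r requirements.txt \
        "molecule-ai-workspace-runtime==${RUNTIME_VERSION}"
    RUN useradd --uid 1000 --create-home agent
    COPY --chown=agent:agent . /app
    USER agent
    ENTRYPOINT ["molecule-runtime"]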
Also fixed: the original validator's warning telling adapter.py
NOT to import molecule_runtime was backwards — that's the
canonical package name post-#87. Now it warns on the legacy
molecule_ai prefix instead.
Reusable workflow change: instead of running
.molecule-ci/scripts/validate-workspace-template.py (a per-template
vendored copy that drifts as the validator evolves), the workflow
now checks out molecule-ci itself into .molecule-ci-canonical and
runs the canonical script from there. Single source of truth —
every template runs the SAME contract on every CI run. The legacy
.molecule-ci/scripts/ directories in each template repo can be
deleted in a Phase 2 cleanup PR.
14 unit tests pin the contract:
- canonical template passes
- claude-code-style custom entrypoint passes when the wrapper
exec's molecule-runtime
- 5 Dockerfile drift modes each error individually
- 3 config.yaml drift modes each error/warn
- requirements.txt missing-runtime errors
- legacy molecule_ai import warns
- regression cover: modern molecule_runtime import does NOT
trigger the (deleted) backwards warning
All 8 production template repos pass the new contract today —
this PR locks in the current good state, it does not force any
template-repo edits.
Contract documented at docs/template-contract.md so the rules are
discoverable without reading the validator.
Closes the cascade cache trap that bit us 5x today. Each cascade
rebuild ran against the same Dockerfile + requirements.txt content,
producing the same docker layer cache key — so even though
publish-runtime had just shipped a new version, pip install hit the
cached layer with the OLD version.
Mechanism:
- Reusable workflow now accepts optional `runtime_version` input
- Forwarded as `--build-arg RUNTIME_VERSION=$VERSION` to docker build
- Templates that declare `ARG RUNTIME_VERSION` get cache-key
invalidation per-version (different ARG value → different cache
key → fresh pip install layer)
- Templates that don't declare the ARG silently ignore it (no
breakage; phased rollout)
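A sketch of the wiring, assuming a plain docker build step inside the
reusable workflow (input description, variable names, and step layout
are illustrative):

    on:
      workflow_call:
        inputs:
          runtime_version:
            required: false
            type: string
            default: ""

    - name: Build image            # inside the existing build job
      run: |
        docker build \
          --build-arg RUNTIME_VERSION="${{ inputs.runtime_version }}" \
          -t "$IMAGE" .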
Pairs with molecule-core PR #2181 (PyPI propagation wait + path
filter expansion). Together: cascade waits until PyPI serves the
new version, then fires with the version, templates rebuild against
that exact version with cache invalidation. No more "I shipped
0.1.X but image installs 0.1.X-1."
Phase 2 (separate PRs in template repos): each template's caller
forwards `${{ github.event.client_payload.runtime_version }}` and
each Dockerfile declares `ARG RUNTIME_VERSION` near pip install.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Documents the new reusable workflow shipped in PR #10:
- Caller pattern (~10 lines per consuming repo) under Usage
- Full description in "What each workflow validates" — explains the
2026-04-27 motivation, the org-wide repo setting it pairs with,
and the false-positive note for CI bot pushes
Companion to molecule-core CONTRIBUTING.md update (PR #2177) which
documents the contract from the developer's perspective. Both must
land for the safety guards to be discoverable from where teams read.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reusable workflow that consumers call from their pr-guards.yml on
pull_request:synchronize. When a new commit is pushed to an open PR
that has auto-merge enabled, this disables auto-merge and posts a
comment so the operator must explicitly re-engage after verifying.
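A sketch of the consumer side, assuming a pr-guards.yml shape like
this; the permissions shown are a guess at what the reusable job
needs, not confirmed:

    on:
      pull_request:
        types: [synchronize]
    permissions:
      contents: write        # assumption: needed to disable auto-merge
      pull-requests: write   # assumption: needed to post the comment
    jobs:
      guard:
        uses: Molecule-AI/molecule-ci/.github/workflows/disable-auto-merge-on-push.yml@main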
Background: on 2026-04-27, PR #2174 in molecule-core auto-merged
with only the first commit because the second commit was pushed
AFTER the merge queue had locked the PR's SHA. The second commit
ended up orphaned on a merged-and-deleted branch (the wider
"automatically delete head branches" repo setting now blocks the
push entirely; this workflow catches the race window where the PR
is queued but not yet merged).
Defense in depth — if both fixes are active:
1. Repo setting "delete branch on merge" prevents pushes to a
merged branch (post-merge orphan case).
2. This workflow catches in-queue races (push lands while the
queue is processing) by force-disabling auto-merge so the
operator must re-engage explicitly.
Together they cover the full lifecycle of "auto-merge enabled →
new commits arrive" without relying on operator discipline.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Switches the bare-imports lint from an inline RUNTIME_MODULES list
to the _runtime_modules.json manifest emitted by molecule-core's
build_runtime_package.py. Eliminates the third place the runtime
module list lived — now the build script is the single source of
truth.
Tonight's work surfaced that the same closed list lived in three places
that drifted independently. The build script's TOP_LEVEL_MODULES
went stale on transcript_auth, the smoke-test step here had a
hardcoded mirror that would have drifted next time a top-level
module was added, and runtime-pin-compat tested transitively via
import molecule_runtime.main (which only catches breakage, not
drift). One source of truth fixes all three at once.
Implementation:
- pip download molecule-ai-workspace-runtime --no-deps to /tmp
- unzip _runtime_modules.json from the wheel
- merge top_level_modules + subpackages into the regex alternation
(subpackages can be bare-imported too — `from lib.pre_stop`)
- on any fetch failure (network, missing manifest in older wheel),
fall back to the inline list with a workflow warning so the lint
still runs but the operator knows to investigate
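A sketch of the fetch-and-fallback path in shell (paths, the JSON keys
described above, and variable names are illustrative):

    if pip download molecule-ai-workspace-runtime --no-deps -d /tmp/runtime-wheel \
         && unzip -o -j /tmp/runtime-wheel/*.whl '*_runtime_modules.json' -d /tmp/runtime-wheel; then
      # merge top-level modules and subpackages into one regex alternation
      RUNTIME_MODULES=$(jq -r '.top_level_modules + .subpackages | join("|")' \
        /tmp/runtime-wheel/_runtime_modules.json)
    else
      echo "::warning::could not fetch _runtime_modules.json; using the inline fallback list"
      RUNTIME_MODULES="$FALLBACK_RUNTIME_MODULES"
    fi
    # RUNTIME_MODULES then feeds the bare-import lint's regex alternation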
Two consequences:
- Templates rebuilt against runtime ≥ the version that ships the
manifest get the always-fresh list automatically.
- Templates rebuilt against the old wheel (pre-manifest) still get
the working inline list — no regression.
Future cleanup (separate PR after a few release cycles): once all
template repos have rebuilt at least once with the manifest path,
the inline fallback can shrink to a panic message.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two new gates that would have prevented today's
post-#87 template-extraction bug parade:
1. **Bare-import lint** — fail-fast pre-build check that greps
template *.py files for `from <runtime_module> import` (where
<runtime_module> is in the closed list mirroring workspace/*.py
basenames). When the runtime was bundled into workspace/, bare
imports resolved against sibling files; in standalone template
repos they explode at startup. Five separate templates shipped
broken on 2026-04-27 because of this exact pattern (claude-code:
plugins, executor_helpers, heartbeat, a2a_client, platform_auth;
langgraph: agent, a2a_executor; deepagents: a2a_executor;
gemini-cli: config, executor_helpers x2). The lint runs before
docker login + buildx setup so a bad PR returns red in seconds.
2. **Import every /app/*.py at boot** (deeper smoke) — replaces
`python -c "import adapter"` with a loop importing every Python
module at /app/. The old single-import didn't traverse to
sibling modules adapter.py imports lazily inside
`create_executor()` (the executor.py family). That's why the
hermes a2a-sdk migration bug and langgraph's bare a2a_executor
import slipped through every prior gate even though the boot
smoke "passed." Importing every module module-level forces all
imports to resolve, including those in executor.py.
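A sketch of the deeper smoke under those assumptions (the image name
variable and exact step wording are illustrative):

    docker run --rm --entrypoint python "$IMAGE" -c '
    import importlib, pathlib, sys
    sys.path.insert(0, "/app")
    failed = False
    for f in sorted(pathlib.Path("/app").glob("*.py")):
        try:
            importlib.import_module(f.stem)   # module-level import forces every dependency to resolve
        except Exception as exc:
            print(f"{f.name}: {exc!r}")
            failed = True
    sys.exit(1 if failed else 0)
    '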
Both gates use the closed-list pattern (deliberate, easy to update,
no false-positives on legit third-party imports). The runtime module
list mirrors the equivalent in scripts/build_runtime_package.py;
both should be updated together when a new top-level workspace
module ships.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Today's incident: a template's adapter.py imported a symbol
(RuntimeCapabilities) from molecule_runtime that the published runtime
didn't yet export. The image built fine, the existing "smoke test"
inspected the entrypoint string and passed, and a broken :latest
shipped to GHCR. Every claude-code + hermes provision then hung in
"provisioning" status until the 10-min sweep marked them failed.
The old smoke test was named correctly but didn't actually exercise
anything — `docker inspect` doesn't catch ImportError. This change
splits the build/push step into three:
1. Build with `load: true, push: false` so the image lands on the
runner's local docker.
2. Smoke test runs `docker run ... python -c "import adapter"` against
the loaded image. This catches the version-skew class of bug
(adapter.py imports a symbol the installed runtime doesn't export),
plus syntax errors, missing files, and anything else that breaks
import-time.
3. Push :latest + :sha-* only if the smoke test passes. The push step
reuses the cached build, so it's fast.
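A sketch of the three-step shape, assuming docker/build-push-action
and an IMAGE env var; tag names and action versions are illustrative
(the real workflow tags the 7-char sha):

    - uses: docker/build-push-action@v5
      with:
        context: .
        load: true            # lands the image in the runner's local docker
        push: false
        tags: ${{ env.IMAGE }}:candidate
    - name: Smoke test adapter import
      run: docker run --rm --entrypoint python "${{ env.IMAGE }}:candidate" -c "import adapter"
    - uses: docker/build-push-action@v5   # only reached if the smoke test passed
      with:
        context: .
        push: true            # reuses the cached layers from the first build
        tags: |
          ${{ env.IMAGE }}:latest
          ${{ env.IMAGE }}:sha-${{ github.sha }}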
Net cost: ~5s per publish (the docker run). Net benefit: broken images
can no longer poison :latest.
All 8 caller templates (claude-code, gemini-cli, hermes, langgraph,
crewai, autogen, deepagents, openclaw) inherit the gate automatically
since this is the reusable workflow they all call.
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
All 8 template repos are public → GHA-hosted minutes are free, so
there's no cost incentive to stay on the self-hosted Mac mini. The
only reason we started there was to avoid GHA rate limits (memory
feedback_selfhosted_runner); that concern doesn't apply here because:
- Linux/amd64 builds go native on ubuntu-latest (no QEMU emulation
from arm64 → amd64), so builds run ~2-3x faster.
- docker/login-action@v3 + GITHUB_TOKEN handles GHCR auth cleanly,
no Keychain gymnastics needed.
- No queue wait when the Mac mini is busy publishing canvas/platform
or running e2e.
Concretely this change:
- runs-on: [self-hosted, macos, arm64] → ubuntu-latest
- Drops the hand-rolled `auths` config step (macOS Keychain
workaround) in favour of `docker/login-action@v3`.
- Drops `docker/setup-qemu-action` (unnecessary for a linux/amd64
target on an amd64 runner).
- Uses setup-buildx@v3 to match the login-action major version.
Self-hosted Mac mini remains the runner for private-repo workflows
(follow-up PRs will migrate other public-repo workflows in
molecule-core).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Each Molecule-AI/molecule-ai-workspace-template-* repo currently has
no way to publish its Docker image. Tenants build locally via
workspace/rebuild-runtime-images.sh after a manual clone — which
means "merge template PR" and "template live on tenants" are two
separate manual steps per tenant.
This workflow is the `publish` half of that pipeline. Called from
each template repo via
`uses: Molecule-AI/molecule-ci/.github/workflows/publish-template-image.yml@main`, it:
- Derives runtime name from the caller repo (strip
`molecule-ai-workspace-template-` prefix) so per-repo wrappers
stay one-line.
- Builds linux/amd64 (self-hosted macOS arm64 runner + QEMU) and
pushes to `ghcr.io/molecule-ai/workspace-template-<runtime>:latest`
plus `:sha-<7>` for per-commit pinning.
- Uses the Keychain-avoiding GHCR auth pattern from canvas' publish
workflow — osxkeychain write fails under the locked launchd keychain
on the Mac mini runner; writing auths map directly works.
- Smoke-tests the pushed image by pulling and inspecting entrypoint.
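The derivation in the first bullet is just prefix-stripping; a sketch
with illustrative variable names:

    RUNTIME="${GITHUB_REPOSITORY#*/}"                       # drop the owner: molecule-ai-workspace-template-langgraph
    RUNTIME="${RUNTIME#molecule-ai-workspace-template-}"    # -> langgraph
    IMAGE="ghcr.io/molecule-ai/workspace-template-${RUNTIME}"
    TAGS="${IMAGE}:latest ${IMAGE}:sha-${GITHUB_SHA::7}"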
Follow-up (not in this PR):
- Each template repo gets a ~10-line caller workflow.
- Monorepo provisioner.RuntimeImages map switches from bare
`workspace-template:<runtime>` (local-only) to
`ghcr.io/molecule-ai/workspace-template-<runtime>:latest`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The grep-based secrets check matched literal credential patterns in
documentation (e.g., "sk-ant-..." in CLAUDE.md examples), causing
false-positive CI failures.
Replace with a Python script that:
- Skips .molecule-ci/ directory entirely
- Uses context-aware matching (requires quotes or assignment context)
- Filters out documentation examples with "..." or <example> markers
- Handles all three reusable workflows (plugin, workspace-template, org-template)
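A minimal sketch of the context-aware approach; the actual patterns,
skip list, and example markers live in the script and are broader than
shown here:

    import re
    from pathlib import Path

    SKIP_DIRS = {".molecule-ci", ".git"}
    # credential-looking token in quotes or on the right-hand side of an assignment
    PATTERN = re.compile(r"""["'=:\s](sk-ant-[A-Za-z0-9_-]{20,}|ghp_[A-Za-z0-9]{36})""")

    def scan(root: Path) -> list[str]:
        hits = []
        for path in root.rglob("*"):
            if not path.is_file() or any(part in SKIP_DIRS for part in path.parts):
                continue
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if "..." in line or "<example>" in line:   # documentation examples
                    continue
                if PATTERN.search(line):
                    hits.append(f"{path}:{lineno}")
        return hits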
Also in this change:
- Remove the redundant nested checkout of molecule-ci in workflow_call jobs
- Add timeout-minutes to prevent hung jobs (plugin: 10m, workspace: 15m)
- Add pip cache keyed on requirements.txt
- Add the missing SKILL.md heading check in validate-plugin
- Add legacy-import and runtime-dependency warnings in workspace validation
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Adds standard credential gitignore (.env / *.pem / .secrets/ / .auth_token).
Per-CEO directive 2026-04-16: every plugin and template repo should
gitignore credentials so self-hosters can't accidentally commit real
tokens to public repos.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Heredocs in GitHub Actions YAML were being echoed as script text
instead of executed. Move the validation logic to scripts/ and run it
via 'python3 .molecule-ci/scripts/validate-*.py' after checking out
the molecule-ci repo at the .molecule-ci/ path.