Run #1010 (post-#46) succeeded all the way to push but failed with
"repository molecule-ai/platform does not exist" — the platform image
ECR repo had never been created (only platform-tenant existed).
Created the repo via:
aws ecr create-repository --region us-east-2 \
--repository-name molecule-ai/platform \
--image-scanning-configuration scanOnPush=true
This is a one-line workflow comment to satisfy the path-filter and
re-run the publish workflow against the now-existing repo. Closes #173
properly this time — pre-clone + inline ECR auth + ECR repo all in
place.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI run #987 (post-#45) showed `docker push` from shell still hits
"no basic auth credentials" — `aws-actions/amazon-ecr-login@v2`
writes auth to a step-scoped DOCKER_CONFIG that doesn't carry across
to the next shell step on Gitea Actions.
Fix: drop both `aws-actions/configure-aws-credentials@v4` and
`aws-actions/amazon-ecr-login@v2`. Run `aws ecr get-login-password |
docker login` inline in the same shell step as `docker build` +
`docker push`. AWS creds come from secrets via env vars, ECR token
is fresh per-step (12h validity is plenty), config.json lives in the
same shell process — auth state is guaranteed.
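As a workflow-step sketch (the registry host is the operator ECR org
mentioned below; the image name, tag scheme, and env wiring here are
illustrative, not the exact workflow contents):

```yaml
- name: Build and push (ECR auth inline, same shell step)
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-2
    ECR_REGISTRY: 153263036946.dkr.ecr.us-east-2.amazonaws.com
  run: |
    # Token + config.json live and die inside this one shell process
    aws ecr get-login-password \
      | docker login --username AWS --password-stdin "$ECR_REGISTRY"
    docker build -t "$ECR_REGISTRY/molecule-ai/platform:staging-${GITHUB_SHA}" .
    docker push "$ECR_REGISTRY/molecule-ai/platform:staging-${GITHUB_SHA}"
```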
This is the operator-host manual approach mapped 1:1 into CI.
runner-base image already has aws-cli + docker (verified locally).
Closes #173 (fifth piece — and final; this matches the manual flow
exactly).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI run #946 (post-#43) confirmed `driver: docker` doesn't fix the ECR
push 401 either: buildx CLI inside the runner container talks to the
operator-host docker daemon (mounted socket), but the daemon doesn't
see the runner's ECR auth state, and the runner's buildx CLI doesn't
attach the auth header in a way the daemon accepts.
Drop buildx + build-push-action entirely. Plain `docker build` +
`docker push` from the runner container works because both use the
SAME docker socket + the SAME runner-container config.json (populated
by `aws ecr get-login-password | docker login` from amazon-ecr-login).
Trade-off: lose multi-arch support. We only ship linux/amd64 tenant
images today, so this is fine. If multi-arch becomes a requirement
later, we can revisit (likely with `docker buildx create
--driver=remote` pointing at an external buildkit, but that's
substantial infra work; not worth it for a single-arch shop).
Closes #173 (fourth piece — and hopefully the last; this matches the
operator-host manual approach exactly).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #38 + #41 fixed the Dockerfile-side clone issue. CI run #893 then
revealed two Gitea-Actions-specific issues with the unchanged buildx
config:
1. `failed to push: 401 Unauthorized` to ECR. Root cause: default
buildx driver `docker-container` spawns a buildkit container that
doesn't share the host's `~/.docker/config.json`, so the ECR auth
set up by amazon-ecr-login doesn't reach the push. Fix: pin
`driver: docker` so buildx delegates to the host daemon, which
already has the ECR creds.
2. `dial tcp ...:41939: i/o timeout` on `_apis/artifactcache/cache`.
Root cause: `cache-from/cache-to: type=gha` is GitHub-specific;
Gitea Actions has no compatible artifact-cache backend, so every
cache lookup fails after a 30s timeout. Fix: remove the cache-*
options. Cold-build cost is <10min for 37-repo clone + Go/Node
compile, acceptable. Could revisit with type=registry inline cache
later if rebuilds get painful.
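Taken together, the two fixes amount to roughly this (surrounding step
options elided; tag expression illustrative):

```yaml
- uses: docker/setup-buildx-action@v3
  with:
    driver: docker   # delegate to the host daemon, which already holds ECR creds
- uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    tags: ${{ env.TENANT_IMAGE_NAME }}:staging-${{ github.sha }}
    # cache-from / cache-to (type=gha) removed — no compatible
    # artifact-cache backend on Gitea Actions
```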
With this + #38/#41, the workflow should run end-to-end on Gitea
Actions: pre-clone -> docker build (host daemon) -> ECR push.
Closes #173 (third and final piece).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
publish-workspace-server-image.yml could not run on Gitea Actions because
Dockerfile.tenant's stage 3 ran `git clone` against private Gitea repos
from inside the Docker build context, where no auth path exists. Every
workspace-server rebuild required a manual operator-host push.
Move cloning to the trusted CI context (where AUTO_SYNC_TOKEN — the
devops-engineer persona PAT — is naturally available). Dockerfile.tenant
now COPYs from .tenant-bundle-deps/, populated by the workflow's new
"Pre-clone manifest deps" step. The Gitea token never enters the image.
- scripts/clone-manifest.sh: optional MOLECULE_GITEA_TOKEN env embeds
basic-auth in the clone URL; redacted in log output. Anonymous fallback
preserved for future public-repo path.
- .github/workflows/publish-workspace-server-image.yml: new pre-clone
step before docker build; injects AUTO_SYNC_TOKEN. Fail-fast if the
secret is empty.
- workspace-server/Dockerfile.tenant: drop stage 3 (templates), COPY
from .tenant-bundle-deps/ instead. Header documents the prereq.
- .gitignore: ignore /.tenant-bundle-deps/ so a local build can't
accidentally commit cloned repos.
Verified locally: clone-manifest.sh with the devops-engineer persona
token cloned all 37 repos (9 ws + 7 org + 21 plugins, 4.9MB after
.git strip).
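The token-embed-plus-redaction pattern can be sketched in plain shell
(a hypothetical sketch only — the host, the userinfo scheme, and the
helper names are assumptions, not the real clone-manifest.sh):

```shell
# Sketch: optional-token clone URL with log redaction. gitea.example.com
# and the oauth2: userinfo convention are assumed, not the real values.
clone_url() {
  repo="$1"
  if [ -n "${MOLECULE_GITEA_TOKEN:-}" ]; then
    # Embed basic-auth for private repos
    printf 'https://oauth2:%s@gitea.example.com/%s.git' "$MOLECULE_GITEA_TOKEN" "$repo"
  else
    # Anonymous fallback for the future public-repo path
    printf 'https://gitea.example.com/%s.git' "$repo"
  fi
}

redact() {
  # Strip the userinfo so the token never reaches log output
  sed 's#//[^@/]*@#//***@#'
}

MOLECULE_GITEA_TOKEN=s3cret
url=$(clone_url molecule-ai/platform)
printf '%s\n' "$url" | redact   # token replaced with *** in the logged URL
```

The clone itself uses the unredacted `$url`; only the logged copy goes
through `redact`.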
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
scripts/clone-manifest.sh runs inside the platform Dockerfile build,
so a change to that script needs to retrigger publish. Without it,
the prior fix (clone via Gitea + lowercase org) didn't trigger this
workflow because scripts/ wasn't in the path filter.
Also serves as the file change to satisfy the path filter for THIS
push, retriggering publish-workspace-server-image now.
Two coupled cleanups for the post-2026-05-06 stack:
============================================
The plugin injected GITHUB_TOKEN/GH_TOKEN via the App's
installation-access flow (~hourly rotation). Per-agent Gitea
identities replaced this approach after the 2026-05-06 suspension —
workspaces now provision with a per-persona Gitea PAT from .env
instead of an App-rotated token. The plugin code itself lived on
github.com/Molecule-AI/molecule-ai-plugin-github-app-auth which is
also unreachable post-suspension; checking it out at CI build time
was already failing.
Removed:
- workspace-server/cmd/server/main.go: githubappauth import + the
`if os.Getenv("GITHUB_APP_ID") != ""` block that called
BuildRegistry. gh-identity remains as the active mutator.
- workspace-server/Dockerfile + Dockerfile.tenant: COPY of the sibling
repo + the `replace github.com/Molecule-AI/molecule-ai-plugin-github-app-auth
=> /plugin` directive injection.
- workspace-server/go.mod + go.sum: github-app-auth dep entry
(cleaned up by `go mod tidy`).
- 3 workflows: actions/checkout steps for the sibling plugin repo:
- .github/workflows/codeql.yml (Go matrix path)
- .github/workflows/harness-replays.yml
- .github/workflows/publish-workspace-server-image.yml
Verified `go build ./cmd/server` + `go vet ./...` pass post-removal.
=======================================================
Same workflow used to push to ghcr.io/molecule-ai/platform +
platform-tenant. ghcr.io/molecule-ai is gone post-suspension. The
operator's ECR org (153263036946.dkr.ecr.us-east-2.amazonaws.com/
molecule-ai/) already hosts platform-tenant + workspace-template-*
+ runner-base images and is the post-suspension SSOT for container
images. This PR aligns publish-workspace-server-image with that
stack.
- env.IMAGE_NAME + env.TENANT_IMAGE_NAME repointed to the ECR URL.
- docker/login-action swapped for the
aws-actions/configure-aws-credentials@v4 +
aws-actions/amazon-ecr-login@v2 chain (the standard ECR auth pattern;
uses the AWS_ACCESS_KEY_ID/SECRET secrets bound to the molecule-cp
IAM user).
The :staging-<sha> + :staging-latest tag policy is unchanged —
staging-CP's TENANT_IMAGE pin still points at :staging-latest, just
with the new registry prefix.
Refs molecule-core#157, #161; parallel to org-wide CI-green sweep.
Gitea is case-sensitive on owner slugs; canonical is lowercase
`molecule-ai/...`. Mixed-case `Molecule-AI/...` refs fail-at-0s
when the runner tries to resolve the cross-repo workflow / checkout.
Same fix as molecule-controlplane#12. Mechanical case-correction;
no behavior change beyond making CI resolve again.
Refs: internal#46
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the gap that let issue #2395 ship: redeploy-fleet workflows reported
ssm_status=Success based on SSM RPC return code alone, while EC2 tenants
silently kept serving the previous :latest digest because docker compose up
without an explicit pull is a no-op when the local tag already exists.
Wire:
- new buildinfo package exposes GitSHA, set at link time via -ldflags from
the GIT_SHA build-arg (default "dev" so test runs without ldflags fail
closed against an unset deploy)
- router exposes GET /buildinfo returning {git_sha} — public, no auth,
cheap enough to curl from CI for every tenant
- both Dockerfiles thread GIT_SHA into the Go build
- publish-workspace-server-image.yml passes GIT_SHA=github.sha for both
images
- redeploy-tenants-on-main.yml + redeploy-tenants-on-staging.yml curl each
tenant's /buildinfo after the redeploy SSM RPC and fail the workflow on
digest mismatch; staging treats both :latest and :staging-latest as
moving tags; verification is skipped only when an operator pinned a
specific tag via workflow_dispatch
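The per-tenant check those workflows run can be sketched in plain shell
(the real step curls each tenant's GET /buildinfo and reads .git_sha
with jq; sed keeps this sketch dependency-free, and the function name is
illustrative):

```shell
# Sketch of the post-redeploy digest check. Because the ldflags-unset
# default is "dev", an unstamped build can never match a real deploy
# sha, so it fails closed exactly as described above.
check_buildinfo() {
  body="$1"       # JSON response body from GET /buildinfo
  expected="$2"   # the sha this redeploy was supposed to land
  got=$(printf '%s' "$body" | sed -n 's/.*"git_sha" *: *"\([^"]*\)".*/\1/p')
  if [ "$got" = "$expected" ]; then
    echo "ok: tenant serves $got"
  else
    echo "MISMATCH: tenant serves ${got:-<none>}, expected $expected" >&2
    return 1
  fi
}

check_buildinfo '{"git_sha":"abc1234"}' abc1234
```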
Tests:
- TestGitSHA_DefaultDevSentinel pins the dev default
- TestBuildInfoEndpoint_ReturnsGitSHA pins the wire shape that the
workflow's jq lookup depends on
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two follow-ups from #2335 review (tracked in #2336):
1. Add `concurrency:` block to publish-workspace-server-image.yml so
two rapid staging pushes don't race the same :staging-latest retag.
Group is per-branch (`${{ github.ref }}`) so staging and main can
build in parallel — they produce different :staging-<sha> tags and
last-write-wins on :staging-latest is acceptable across branches.
`cancel-in-progress: false` keeps in-flight builds — partially-pushed
images would break canary-fleet pin consistency.
2. Add redeploy-tenants-on-staging.yml. After #2335, every staging push
produces a fresh :staging-latest, but existing tenants only pick it
up on their next reprovision. This workflow mirrors
redeploy-tenants-on-main but for staging:
- workflow_run-gated to branches: [staging]
- target_tag default 'staging-latest' (vs 'latest' for prod)
- CP_URL default https://staging-api.moleculesai.app
- CP_STAGING_ADMIN_API_TOKEN repo secret (operator must set)
- canary_slug empty by default — staging is itself the canary; no
sub-canary needed inside it. Soak still applies if operator
specifies a tenant for blast-radius control.
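The concurrency block in (1) is small; as a sketch (the group name is
illustrative):

```yaml
concurrency:
  group: publish-workspace-server-image-${{ github.ref }}
  cancel-in-progress: false   # never kill an in-flight, partially-pushed build
```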
Schedule-vs-dispatch hardening matches sweep-cf-orphans /
sweep-cf-tunnels: hard-fail on auto-trigger when the secret is missing
so misconfig doesn't silently leave staging tenants on stale code;
soft-skip on operator dispatch.
Operator action required after merge:
Add the CP_STAGING_ADMIN_API_TOKEN repo secret. Pull the value from
staging-CP's CP_ADMIN_API_TOKEN env in the Railway controlplane /
staging environment. Until set, the auto-trigger will fail the workflow run
(visible as red CI), surfacing the misconfiguration. Workflow runs
only on staging publish-workspace-server-image success, so no extra
load while it sits unconfigured.
Verification:
- YAML lint clean on both workflows.
- Reviewed redeploy-tenants-on-main as the template; differences are
scoped to staging-specific values (URL, tag, secret name) plus the
harden-on-missing-secret pattern.
Refs #2335, #2336.
Root cause: this workflow only triggered on `branches: [main]`, but
staging-CP pins TENANT_IMAGE=:staging-latest (verified via Railway).
:staging-latest was only retagged on main push, so:
staging-branch code → never built → never reaches staging tenants
staging-CP serves → "yesterday's main" indefinitely
When staging→main was wedged (path-filter parity bug, canvas teardown
race — both fixed earlier today), :staging-latest stopped updating
entirely. RFC #2312 (chat upload HTTP-forward) landed on staging but
freshly-provisioned staging tenants kept failing chat upload because
they pulled pre-RFC-#2312 image. Verified by tearing down a fresh
tenant and observing the legacy "workspace container not running"
error from the docker-exec code path that RFC #2312 deleted.
Pre-2026-04-24 there was a related-but-different incident: TENANT_IMAGE
was a static :staging-<sha> pin that drifted 10 days behind. This new
incident is "the dynamic pin still drifts when its update workflow
doesn't fire."
Fix: add `staging` to the branches trigger. Tag policy is unchanged
(:staging-<sha> + :staging-latest on every push). canary-verify.yml
still runs on main push (workflow_run-gated to `branches: [main]`),
preserving the canary-verified :latest promotion for prod tenants.
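The trigger change itself is one line (the path filter shown is
illustrative, not the full list):

```yaml
on:
  push:
    branches: [main, staging]   # staging newly added; tag policy unchanged
    paths:
      - 'workspace-server/**'
```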
Steady state after this:
- staging push → :staging-latest = staging-branch code → staging-CP
- main push → :staging-<sha> for canary, :staging-latest retag
(post-promote main code), and after canary green
→ :latest for prod tenants
What this does NOT change:
- canary-verify.yml flow (still main-only)
- redeploy-tenants-on-main.yml (still rolls prod fleet on main push)
- publish-canvas-image.yml (self-hosted standalone canvas; orthogonal)
- The :latest tag (canary-verified main, unchanged)
What this does fix:
- RFC #2312-class fixes that land on staging now actually reach
staging tenants without waiting for staging→main promote.
- The dogfooding observation "staging tenants seem to be running
yesterday's code" disappears as a class.
Drive-by: also fixed the typo in the path-filter list (was
`publish-platform-image.yml`, the actual file is
`publish-workspace-server-image.yml`).
Supply-chain hardening for the CI pipeline. 23 workflow files
modified, 59 mutable-tag refs replaced with commit SHAs.
The risk
Every `uses:` reference in .github/workflows/*.yml was pinned to a
mutable tag (e.g., `actions/checkout@v4`). A maintainer of an
action — or a compromised maintainer account — can repoint that
tag to malicious code, and our pipelines silently pull it on the
next run. The tj-actions/changed-files compromise of March 2025 is
the canonical example: maintainer credential leak, attacker
repointed several `@v<N>` tags to a payload that exfiltrated
repository secrets. Repos that pinned to SHAs were unaffected.
The fix
Replace each `@v<N>` with `@<commit-sha> # v<N>`. The trailing
comment preserves human readability ("ah, this is v4"); the SHA
makes the reference immutable.
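The mechanical shape of each change (the SHA below is a zero
placeholder, not the real v4 commit):

```yaml
# before — mutable tag, repointable by the action's maintainer:
#   uses: actions/checkout@v4
# after — immutable commit SHA, tag kept as a readability comment:
uses: actions/checkout@0000000000000000000000000000000000000000 # v4
```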
Actions covered (10 distinct):
actions/{checkout,setup-go,setup-python,setup-node,upload-artifact,github-script}
docker/{login-action,setup-buildx-action,build-push-action}
github/codeql-action/{init,autobuild,analyze}
dorny/paths-filter
imjasonh/setup-crane
pnpm/action-setup (already pinned in molecule-app, listed here for completeness)
Excluded:
Molecule-AI/molecule-ci/.github/workflows/disable-auto-merge-on-push.yml@main
— internal org reusable workflow; we control its repo, threat model
is different from third-party actions. Conventional to pin to @main
rather than SHA for internal reusables.
The maintenance cost
SHA pinning means upstream fixes require manual SHA bumps. Without
automation, pinned SHAs go stale. So this PR also enables Dependabot
across four ecosystems:
- github-actions (workflows)
- gomod (workspace-server)
- npm (canvas)
- pip (workspace runtime requirements)
Weekly cadence — the supply-chain attack window is "minutes between
repoint and pull"; weekly auto-bumps don't help with zero-days
regardless. The point is to pull in non-zero-day fixes without
operator effort.
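A sketch of the resulting dependabot.yml (the directories are
assumptions inferred from the ecosystems listed above):

```yaml
version: 2
updates:
  - package-ecosystem: github-actions
    directory: /
    schedule:
      interval: weekly
  - package-ecosystem: gomod
    directory: /workspace-server
    schedule:
      interval: weekly
  - package-ecosystem: npm
    directory: /canvas
    schedule:
      interval: weekly
  - package-ecosystem: pip
    directory: /workspace
    schedule:
      interval: weekly
```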
Aligns with the user-stated principle: "long-term, robust,
fully-automated, eliminate human error."
Companion PR: Molecule-AI/molecule-controlplane#308 (same pattern,
smaller surface).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause of the 2026-04-24 all-day E2E failure chain: Railway staging
CP had TENANT_IMAGE pinned to :staging-a14cf86 — a static SHA that had
silently drifted 10+ days stale. Every new tenant (including every E2E
run's fresh tenant) was spawned with that stale image, which predated
applyRuntimeModelEnv. Without applyRuntimeModelEnv, HERMES_DEFAULT_MODEL
never reached the workspace EC2 user-data, so install.sh fell back to
nousresearch/hermes-4-70b → openrouter → 401 "Missing Authentication
header" in every A2A reply.
Four correct fixes shipped today all got shadowed by this single stale
pin:
• template-hermes#19 (provider priority for openai/*)
• template-hermes#20 (decouple prefix-strip from bridge guard)
• molecule-controlplane#247 (force fresh /opt/adapter clone)
• molecule-core#1987 (E2E pins HERMES_CUSTOM_* as workaround)
Fix: publish each main build under both :staging-<sha> AND :staging-latest.
Change Railway staging CP's TENANT_IMAGE env to :staging-latest (done via
`railway variables --set` as part of this incident). Future main builds
then auto-propagate to new tenant provisions without any human in the
loop.
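Since both tags point at one digest, the dual publish is a retag plus a
second push (step shape and image variable are illustrative):

```yaml
- name: Push :staging-<sha> and :staging-latest (same digest)
  run: |
    docker tag  "$TENANT_IMAGE:staging-${GITHUB_SHA}" "$TENANT_IMAGE:staging-latest"
    docker push "$TENANT_IMAGE:staging-${GITHUB_SHA}"
    docker push "$TENANT_IMAGE:staging-latest"
```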
Safety: :staging-latest is the "most recent main build" — NOT a
canary-verified promotion. That distinction is preserved:
• Prod tenants still pull :latest (canary-verified, retagged by
canary-verify.yml only after the canary fleet green-lights a digest)
• Staging tenants now pull :staging-latest (every main build, pre-canary)
So staging becomes the canary: if a :staging-latest build regresses,
the staging canary fleet catches it before it can be promoted to :latest
for prod. This is what the canary design intended; the missing
:staging-latest tag was the hole.
Zero impact on image size / build time: Docker tags point at the same
digest, no duplicate push.
Follow-up: filed an issue tracking the need for CP's TENANT_IMAGE to
NEVER be pinned to a SHA in any environment — it must always float on a
named tag (:staging-latest for staging, :latest for prod).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
molecule-core is a public repo — GHA-hosted minutes are free. The
self-hosted Mac mini was only in play to dodge GHA rate limits
(memory feedback_selfhosted_runner), but for these specific
workflows it came with real costs:
- Docker-push workflows emulated linux/amd64 from arm64 via QEMU —
every canvas + platform image build ran ~2-3x slower than native.
- Six PRs worth of keychain-avoidance hacks in publish-* because
`docker login` on macOS writes to osxkeychain unconditionally,
and the Mac mini's launchd user-agent keychain is locked.
- Homebrew pin-down environment variables (HOMEBREW_NO_*) sprinkled
everywhere to work around the shared /opt/homebrew symlink mess
on the runner.
- actions/setup-python@v5 couldn't write to /Users/runner, so ci.yml
python-lint resorted to a hand-rolled Homebrew python3.11 dance.
- Single runner → fan-out contention; CodeQL's 45-min analysis
fought the canvas publish for the one slot.
Changes across the 7 workflows:
- runs-on: [self-hosted, macos, arm64] → ubuntu-latest (every job)
- publish-canvas-image + publish-workspace-server-image:
drop the hand-rolled auths-map step + QEMU setup + buildx v4
→ docker/login-action@v3 + setup-buildx@v3. Linux + amd64
target = native build.
- canary-verify + promote-latest: replace `brew install crane` +
HOMEBREW_NO_* incantations with imjasonh/setup-crane@v0.4.
- codeql.yml: drop `brew install jq` — jq is preinstalled on
ubuntu-latest.
- ci.yml shellcheck: drop the self-hosted existence check —
shellcheck is preinstalled via apt.
- ci.yml python-lint: replace the Homebrew python3.11 path dance
with actions/setup-python@v5 (which works fine on GHA-hosted),
add requirements.txt caching while we're there.
- Remove stale comments referencing "the self-hosted runner",
"Mac mini", keychain, osxkeychain etc.
The self-hosted Mac mini remains in service for private-repo
workflows only. Memory feedback_selfhosted_runner updated to
reflect the public-repo scope carve-out.
Net -96 lines across the 7 files.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Canvas's browser bundle issues fetches to both CP endpoints
(/cp/auth/me, /cp/orgs, ...) AND tenant-platform endpoints
(/canvas/viewport, /approvals/pending, /org/templates). They
share ONE build-time base URL. Baking api.moleculesai.app
broke tenant calls with 404; baking the tenant subdomain broke
auth. Tried both today and saw exactly one failure mode per
attempt.
Real fix: same-origin fetches + tenant-side split. Adds:
internal/router/cp_proxy.go # /cp/* → CP_UPSTREAM_URL
mounted before NoRoute(canvasProxy). Now a tenant serves:
/cp/* → reverse-proxy to api.moleculesai.app
/canvas/viewport, /approvals/pending,
/workspaces/:id/*, /ws,
/registry, /metrics → tenant platform (existing handlers)
everything else → canvas UI (existing reverse-proxy)
Canvas middleware reverts to `connect-src 'self' wss:` for the
same-origin path (keeping explicit PLATFORM_URL whitelist as a
self-hosted escape hatch when the build-arg is non-empty).
CI build-arg flips to NEXT_PUBLIC_PLATFORM_URL="" so the bundle
issues relative fetches.
Security of cp_proxy:
- Cookie + Authorization PRESERVED across the hop (opposite of
canvas proxy) — they carry the WorkOS session, which is the
whole point.
- Host rewritten to upstream so CORS + cookie-domain on the CP
side see their own hostname.
- Upstream URL validated at construction: must parse, must be
http(s), must have a host — misconfig fails closed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Canvas's browser-side code (auth.ts, api.ts, billing.ts) all call
fetch(PLATFORM_URL + /cp/*). PLATFORM_URL comes from
NEXT_PUBLIC_PLATFORM_URL at build time; with the build arg unset,
it falls back to http://localhost:8080 in the compiled bundle.
That means on a tenant like hongmingwang.moleculesai.app, the user's
browser actually tried to fetch http://localhost:8080/cp/auth/me —
which resolves to the USER'S OWN machine, not the tenant.
Login redirect loops 404. Every tenant canvas has been unable to
complete a fresh login on this path; existing sessions only worked
because the cookie was already set domain-wide.
Fix: pass NEXT_PUBLIC_PLATFORM_URL=https://api.moleculesai.app
as a build arg in the tenant-image workflow. CP already allows
CORS from *.moleculesai.app + credentials, and the session cookie
is scoped to .moleculesai.app so tenant subdomains inherit it.
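In the tenant-image workflow this is a single build arg (the action
version and context path here are assumptions):

```yaml
- uses: docker/build-push-action@v5
  with:
    context: ./canvas
    push: true
    build-args: |
      NEXT_PUBLIC_PLATFORM_URL=https://api.moleculesai.app
```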
Verified in prod by rebuilding canvas locally with the flag and
hot-patching the hongmingwang instance via SSM. Baked chunks now
contain api.moleculesai.app; browser auth redirects resolve
cleanly to the CP.
Self-hosted users override by rebuilding with their own URL —
same pattern molecule-app uses with NEXT_PUBLIC_CP_ORIGIN.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Publish has been failing since the 2026-04-18 open-source restructure
(#964's merge) because workspace-server/Dockerfile still COPYs
./molecule-ai-plugin-github-app-auth/ but the restructure moved that
code out to its own repo. Every main merge since has produced a
"failed to compute cache key: /molecule-ai-plugin-github-app-auth:
not found" error — prod images haven't moved.
Fix: add an actions/checkout step that fetches the plugin repo into
the build context before docker build runs.
Private-repo safe: uses PLUGIN_REPO_PAT secret (fine-grained PAT with
Contents:Read on Molecule-AI/molecule-ai-plugin-github-app-auth).
Falls back to the default GITHUB_TOKEN if the plugin repo is public.
Ops: set repo secret PLUGIN_REPO_PAT before the next main merge, or
publish will fail with a 404 on the checkout step.
Also gitignores the cloned dir so local dev builds don't accidentally
commit it.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Completes the canary release train. Before this,
publish-workspace-server-image.yml pushed both :staging-<sha> and
:latest on every main merge — meaning the prod tenant fleet auto-pulled
every image
immediately, before any post-deploy smoke test. A broken image
(think: this morning's E2E current_task drift, but shipped at 3am
instead of caught in CI) would have fanned out to every running
tenant within 5 min.
Now:
- publish workflow pushes :staging-<sha> ONLY
- canary tenants are configured to track :staging-<sha>; they pick
up the new image on their next auto-update cycle
- canary-verify.yml runs the smoke suite (Phase 2) after the sleep
- on green: a new promote-to-latest job uses crane to remotely
retag :staging-<sha> → :latest for both platform and tenant images
- prod tenants auto-update to the newly-retagged :latest within
their usual 5-min window
- on red: :latest stays frozen on prior good digest; prod is untouched
crane is pulled onto the runner (~4 MB, from a GitHub release) rather
than using a docker-daemon retag, so the workflow doesn't need a
privileged runner.
Rollback: if canary passed but something surfaces post-promotion,
operator runs "crane tag ghcr.io/molecule-ai/platform:<prior-good-sha>
latest" manually. A follow-up can wrap that in a Phase 4 admin
endpoint / script.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Default branch is now staging for both molecule-core and
molecule-controlplane. PRs target staging, CEO merges staging → main
to promote to production.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>