[core-lead-agent] Closes Core-Security audit finding (2026-05-09 audit cycle, MEDIUM):
1. workspace-server/internal/handlers/workspace_crud.go:335
`DELETE /workspaces/:id` returned `err.Error()` verbatim in the 500
body, leaking wrapped lib/pq driver strings (schema column names,
index hints) to HTTP clients. Replaced with sanitized message;
raw error already logged server-side via the existing log.Printf
immediately above.
2. workspace-server/internal/handlers/org.go:610
`OrgImport` echoed the user-supplied `body.Dir` verbatim in the 404
"org template not found: %s" response. Path traversal is already
blocked by resolveInsideRoot earlier in the handler, but echoing
raw input back lets a client probe filesystem layout (404-with-echo
vs. 400-from-resolve is itself a signal). Dropped the input from the
client-facing message; preserved full context in a new log.Printf
(orgFile path + the requested body.Dir) for operator triage.
Both fixes preserve operator-side diagnostics (logs unchanged in
content, only client-facing JSON sanitized). No behavior change for
legitimate clients — error type, status code, and JSON shape all stay
the same.
Tier: low. Defensive hardening only; reduces info-disclosure surface
without altering control-flow or auth gates.
Agent Comms tab rendered outbound delegations as blank bubbles because
extractRequestText only checked the A2A JSON-RPC format
(body.params.message.parts[].text) while delegation.go stores
request_body as {"task": "...", "delegation_id": "..."}.
Fix: check body.task first for delegation activities, then fall back to
the A2A format. Add six test cases covering the delegation shape,
precedence over A2A params when both present, empty-string guard, and
non-string type guard.
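The precedence the fix describes can be sketched in Go (a standalone illustration; the real extractRequestText and its surrounding types differ):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractRequestText sketch: prefer the delegation shape's top-level
// "task" string, then fall back to the A2A JSON-RPC shape
// params.message.parts[].text. Empty strings and non-string values
// fall through to the next candidate.
func extractRequestText(raw []byte) string {
	var body map[string]any
	if err := json.Unmarshal(raw, &body); err != nil {
		return ""
	}
	// Delegation shape: {"task": "...", "delegation_id": "..."}
	if task, ok := body["task"].(string); ok && task != "" {
		return task
	}
	// A2A shape: body.params.message.parts[].text
	params, _ := body["params"].(map[string]any)
	msg, _ := params["message"].(map[string]any)
	parts, _ := msg["parts"].([]any)
	for _, p := range parts {
		part, _ := p.(map[string]any)
		if text, ok := part["text"].(string); ok && text != "" {
			return text
		}
	}
	return ""
}

func main() {
	fmt.Println(extractRequestText([]byte(`{"task":"review PR","delegation_id":"d1"}`))) // → review PR
}
```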
Closes #158.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
[FORCE-MERGE AUDIT — §SOP-7]
- Approver: hongming via chat-go ("go") in conversation transcript ~21:00 UTC on 2026-05-09
- Bypassed: required status checks (all pending — runner pickup issue, separate from PR correctness)
- Audit channel: orchestrator force-merge log + this commit message
Part of overnight team shipping cycle. PR authored by team persona under per-persona Gitea identity (post #156 merge).
Renames Docker network across all code, configs, scripts, and docs.
Per issue #93: the network was named molecule-monorepo-net as a holdover
from when the repo was called molecule-monorepo. The canonical repo name is
now molecule-core, so the network should be molecule-core-net.
Files changed:
- docker-compose.yml, docker-compose.infra.yml: network definition
- infra/scripts/setup.sh: docker network create
- scripts/nuke-and-rebuild.sh: docker network rm
- workspace-server/internal/provisioner/provisioner.go: DefaultNetwork
- All comments/docs: updated wording
Acceptance: grep -rn 'molecule-monorepo-net' returns zero matches.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
MCP delegate_task and delegate_task_async bypassed the delegation activity
lifecycle entirely — no activity_log row was written for MCP-initiated
delegations. As a result the canvas Agent Comms tab rendered outbound
delegations as bare "Delegation dispatched" events with no task body.
Fix: insert a delegation row (mirroring insertDelegationRow from
delegation.go) before the A2A call so the canvas can show the task text.
The sync tool updates status to 'dispatched' after the HTTP call; the
async tool inserts with 'dispatched' directly (goroutine won't update).
Closes #158.
Closes #49 (partial — addresses the canvas-display gap; full lifecycle
parity requires DelegationWriter extraction, tracked separately).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Per issue #153: `docker compose up -d` (docker-compose.yml) did not start
Temporal because it lived only in docker-compose.infra.yml. Users had to know
to run `setup.sh` which explicitly uses `-f docker-compose.infra.yml`.
Adding `include: - docker-compose.infra.yml` makes the full infra stack
(starting with Temporal) start with the default `docker compose up` command.
Both compose files define postgres/redis — the main file's definitions take
precedence via compose merge semantics, so no service conflicts.
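A sketch of the top-level directive this adds (Compose `include`, per the compose spec):

```yaml
# docker-compose.yml — pull the infra stack in by default
include:
  - docker-compose.infra.yml
```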
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Major correction from Core-FE review:
- Canvas has THREE themes: System/Light/Dark, not dark-only
- Warm paper tones for light, zinc-adjacent dark for dark mode
- ThemeProvider handles switching, persisted in mol_theme cookie
- Use semantic tokens: bg-surface, bg-surface-card, border-line, text-ink
- NEVER use raw zinc for surfaces — only for borders/disabled/code
Updated:
- Section 1: Three-mode theme palette with exact hex values
- Section 4: Component patterns now use semantic tokens
- Added Section 4.6: ThemeProvider + useTheme() usage
- Section 7: Enforcement checklist now includes token rules
Co-Authored-By: Core-FE <core-fe@moleculesai.app>
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Cross-references the Core-FE draft against the actual
molecule-core/canvas/src/ codebase. Creates two new docs:
- canvas-design-system-v1.md: Full design system with verified color
palette, typography scale, animation tokens (from theme-tokens.css),
component patterns, WCAG 2.1 AA checklist. Marks all items as
VERIFIED with source file citations.
- canvas-audit-items.md: Updated architecture brain dump with verified
findings on React Flow canvas accessibility. Flags remaining gaps
(screen reader announcements, keyboard shortcuts help, keyboard drag).
Key verified discrepancies from draft:
- Font: system-ui stack (not Inter/Geist)
- Tooltip: uses aria-describedby + role=tooltip (not group-hover CSS)
- Animation tokens: already defined in theme-tokens.css
- ContextMenu: has full keyboard nav (arrow keys, wrap-around)
Co-Authored-By: Core-FE <core-fe@moleculesai.app>
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Closes #155.
Without this, every commit from a workspace booted via the standard
provisioner lands with an empty `user.name`/`user.email` and Gitea
attributes the work to whichever PAT pushed (typically the founder's
`claude-ceo-assistant`), instead of the persona that actually authored
the commit. That's the same fingerprint pattern that got us suspended
on GitHub 2026-05-06.
GITEA_USER is already injected per-workspace by the provisioner from
workspace_secrets (verified: 8/8 Core-* workspaces have it set,
correctly-named, on operator + local). Boot picks it up unconditionally;
falls through cleanly if unset (e.g. legacy boxes without persona
identity wiring).
Email uses `bot.moleculesai.app` so agent commits are visually distinct
from human-authored commits in Gitea history. The `gitconfig` copy from
`/root/.gitconfig` to `/home/agent/.gitconfig` is now unconditional —
previously it was nested inside the `molecule-git-token-helper.sh`
block, which meant the per-persona identity wouldn't propagate to the
agent user when the helper was unavailable.
Also added an inline note that the github.com credential-helper block
is post-suspension legacy. Full removal tracked under #171; this PR
deliberately doesn't touch it (smaller blast radius).
Tested: docker exec'd the same config into the 8 running Core-*
workspaces locally; each picks up the correct identity in
`git config -l`. That config resets when the containers restart,
hence this PR for the persistent fix.
molecule-core/main branch protection requires the status-check context
'Secret scan / Scan diff for credential-shaped strings (pull_request)'
but the workflow lived only in .github/workflows/, which Gitea Actions
doesn't see — every PR's required-status-checks rollup left the context
in 'expected' / never-fires state, blocking merge.
Port to .gitea/workflows/secret-scan.yml. Drops:
- merge_group event (Gitea has no merge queue)
- workflow_call (no cross-repo reusable invocation on Gitea)
SELF exclude lists both .github/ and .gitea/ paths so a future sync
between them stays clean. Job and step names match the GitHub workflow
so the produced status-check context name satisfies the existing
branch-protection requirement unchanged.
Same regex set as the runtime's pre-commit hook
(molecule-ai-workspace-runtime: molecule_runtime/scripts/pre-commit-checks.sh).
This unblocks PR #150 (audit-force-merge fan-out) and every future
PR on molecule-core/main.
Mirrors the canonical workflow shipped on internal#120 + #122. Same
shape: pull_request_target on closed, base.sha checkout, structured
JSON event to runner stdout that Vector ships to Loki on
molecule-canonical-obs.
REQUIRED_CHECKS env declares both molecule-core/main protected
contexts (sop-tier-check + Secret scan). Mirror against branch
protection if either is added/removed.
Verified end-to-end on internal: synthetic force-merge of internal#123
emitted incident.force_merge with all expected fields, indexable in
Loki via {host="molecule-canonical-1"} |= "incident.force_merge".
Tier: low (CI workflow, no platform code path).
Closes the post-PR-#174 self-review gap: the matched-pair contract
between ADMIN_TOKEN (server-side bearer gate) and NEXT_PUBLIC_ADMIN_TOKEN
(canvas client-side bearer attach) was descriptive only, living in a
.env file comment. Future agents/devs could re-misconfigure with one
of the two unset and silently 401 — every workspace API call refused
with no actionable diagnostic.
Adds checkAdminTokenPair() to canvas/next.config.ts, run after
loadMonorepoEnv() so it sees the post-load state. Two distinct
warnings (server-set/client-unset and the inverse) so an operator can
tell which half is missing without grep'ing. Empty string is treated
as unset so KEY= and unset KEY produce the same verdict.
Warn-only, not exit — production canvas Docker images bake these vars
at image-build time and a hard exit would turn a recoverable auth
issue into a crashloop. The console.error fires in `next dev`, the
standalone server's stdout, and the canvas Docker container logs —
the three places an operator looks when "everything 401s."
Tests pin exact stderr strings (per feedback_assert_exact_not_substring)
across 6 cases: both unset, both set, ADMIN_TOKEN-only, NEXT_PUBLIC-only,
empty-string-as-unset, and the empty-string-asymmetric mismatch.
Mutation-tested: flipping the if-condition from === to !== fails all 6.
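The real check lives in canvas/next.config.ts (TypeScript); this is a hedged Go sketch of the same matched-pair logic, with empty-string-as-unset folded in (function name hypothetical, warning strings illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// adminTokenPairWarning returns a non-empty diagnostic when exactly one
// half of the ADMIN_TOKEN / NEXT_PUBLIC_ADMIN_TOKEN pair is set. Empty
// string counts as unset, so `KEY=` and an absent KEY agree.
func adminTokenPairWarning(server, client string) string {
	switch {
	case server != "" && client == "":
		return "ADMIN_TOKEN set but NEXT_PUBLIC_ADMIN_TOKEN unset: client calls will 401"
	case server == "" && client != "":
		return "NEXT_PUBLIC_ADMIN_TOKEN set but ADMIN_TOKEN unset: server gate is open"
	default:
		return "" // both set or both unset: consistent configuration
	}
}

func main() {
	// Warn-only: print the diagnostic, never exit.
	fmt.Println(adminTokenPairWarning(os.Getenv("ADMIN_TOKEN"), os.Getenv("NEXT_PUBLIC_ADMIN_TOKEN")))
}
```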
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The forks pool's implicit maxWorkers (CPU-count-derived on the 2-CPU
runner) was insufficient to prevent concurrent jsdom worker
cold-starts. Each jsdom worker
allocates ~30-50 MB RSS at boot; multiple workers starting simultaneously
exhaust available memory, causing 5 test files to fail with:
[vitest-pool]: Failed to start forks worker for test files ...
[vitest-pool-runner]: Timeout waiting for worker to respond
Individual jsdom test files take 12-15 s in isolation and pass cleanly.
Failures only occur when 51 files are run together through the pool.
Fix: explicitly set maxWorkers:1 so a single worker processes all files
sequentially, eliminating concurrent jsdom bootstrap memory pressure.
With this change, all 51 files pass (was 46 pass + 5 fail), and suite
duration improves from ~5070 s to ~1117 s because workers no longer
compete for resources during startup.
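A sketch of the corresponding vitest config fragment (assuming a vitest version that supports `test.maxWorkers`; exact file layout may differ):

```ts
// vitest.config.ts — cap the pool at one worker
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    pool: 'forks',
    maxWorkers: 1, // sequential workers: no concurrent jsdom cold-starts
  },
})
```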
Ref: issue #148
Ref: vitest-pool investigation for issue #22 (canvas side)
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Mirrors the canonical refactor: workflow YAML shrinks (env+invocation),
logic moves to .gitea/scripts/sop-tier-check.sh, debug echoes gated on
SOP_DEBUG, checkout@v6 pinned to base.sha.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Fans the security fix from internal#116 (cce89067) to molecule-core. Same
rationale: pull_request loads workflow from PR HEAD, allowing any
write-access contributor to rewrite the workflow file in their PR and
exfiltrate SOP_TIER_CHECK_TOKEN. pull_request_target loads from base
(main), neutralising the attack.
Verified post-merge on internal: synthetic PR rewriting the workflow to
print the token did NOT execute the modified version — main's
pull_request_target version ran instead. ATTACK_PROBE never fired.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase-1 fan-out of §SOP-6 enforcement to molecule-core. No branch
protection change in this PR — workflow runs and reports a status,
doesn't block any merge yet.
Branch protection update is the follow-up PR after the workflow
demonstrates a green run on its own PR, per the Phase 2 plan.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The org.import.started event was firing immediately after request body
bind, before the YAML at body.Dir was loaded. Result: payload.name was
"" whenever the caller passed `dir` (the common path — the canvas and
all live imports use dir, not inline template). Three started rows
already in the local platform's structure_events have empty name.
Fix: move the started emit (and importStart timestamp) to after the
YAML unmarshal / inline-template fallthrough, where tmpl.Name is
guaranteed populated.
Bonus: pre-parse error returns (invalid body, traversal-rejected dir,
file-not-found, YAML expansion fail, YAML unmarshal fail, neither dir
nor template provided) no longer emit an orphan started row — every
started is now guaranteed a paired completed/failed.
Verified live against running platform: re-imported molecule-dev-only,
new started row in structure_events carries
"Molecule AI Dev Team (dev-only)" instead of "".
Tests: full handler suite green (`go test ./internal/handlers/`).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Drops ~150 lines of duplicated cascade logic from the Delete HTTP
handler — workspace_crud.go's CascadeDelete (added in PR #137) and
Delete() were running the same #73 race-guard sequence (status update →
canvas_layouts → tokens → schedules → container stop → broadcast),
just with Delete() inlining it and CascadeDelete owning the OrgImport
reconcile path.
CascadeDelete now returns the descendant id list (was: count) so
Delete() can drive the optional ?purge=true hard-delete against the
same set the cascade just touched.
Net diff: workspace_crud.go shrinks from ~270 lines in Delete() to
~75 lines (parse + 409 confirm gate + CascadeDelete call + stop-error
500 + purge block + 200 response). Behavior identical — same SQL
ordering, same #73 race guard, same response shapes. Three sqlmock
tests for the 0-children case gained one extra ExpectQuery for the
recursive-CTE descendants scan (the old inline code skipped that
query when len(children)==0; CascadeDelete walks unconditionally —
returns 0 rows, same end state, one extra cheap query).
Tests: full handler suite green (`go test ./internal/handlers/`).
Live-tested against the running local platform: DELETE on a fake
workspace returns `{"cascade_deleted":0,"status":"removed"}`,
fleet of 9 workspaces preserved, refactored handler matches the
prior wire-shape exactly.
Tracked as the PR #137 follow-up tech-debt item.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the additive-import zombie bug — re-running /org/import with a
tree shape that reparents same-named roles left the prior workspace
online because lookupExistingChild's dedupe is parent-scoped (different
parent_id → "different" workspace). Caught 2026-05-08 after a dev-tree
re-import left 8 orphans co-existing with the new tree on canvas until
manual cascade-delete.
Three layers in this PR:
- mode="reconcile" on /org/import — after the import loop, online
workspaces whose name matches an imported name but whose id isn't in
the result set are cascade-deleted. Default mode "" / "merge"
preserves existing additive behavior. Empty-set guards prevent
accidental "delete everything" if either array comes up empty.
- WorkspaceHandler.CascadeDelete extracted as a callable helper from
the existing Delete HTTP handler so OrgImport's reconcile path shares
the same teardown sequence (#73 race guard, container stop, volume
removal, token revocation, schedule disable, event broadcast). The
HTTP Delete handler still inlines the same logic; deduplication
tracked as tech-debt follow-up.
- emitOrgEvent(structure_events) records org.import.started +
org.import.completed with mode, created/skipped/reconcile_removed
counts, duration_ms, error. Replaces the lost-on-restart stdout-only
log shape for an audit-trail surface that's queryable by SQL. Closes
the "what happened at 20:13?" debugging gap that motivated this fix.
Verified live against the local platform: cascade-delete on an old
tree's removed root cleared 8 surviving orphans; mode="reconcile" with
a freshly-INSERTed fake orphan removed exactly the fake; idempotent
re-run of reconcile is a no-op (0 removed, no errors); structure_events
captures every started+completed pair with full payload.
7 new unit tests (walkOrgWorkspaceNames flat/nested/spawning:false/
empty-name; emitOrgEvent success + DB-error-swallow; errString). Full
handler suite green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 4 follow-up to template-claude-code PR #9 (2026-05-08 dev-tree wedge).
Pre-fix: applyRuntimeModelEnv unconditionally overwrote envVars["MODEL"]
with the MODEL_PROVIDER slug whenever payload.Model was empty (the restart
path). This silently wiped the operator's explicit per-persona MODEL
secret on every restart.
Symptom: dev-tree workspaces booted correctly on first /org/import (the
envVars map was populated direct from the persona env file with both
MODEL=MiniMax-M2.7-highspeed and MODEL_PROVIDER=minimax), then on the
next Restart the MODEL secret got clobbered to literal "minimax" — a
provider slug, not a valid model id — and the workspace template's
adapter failed to match any registry prefix, fell through to providers[0]
(anthropic-oauth), and wedged at SDK initialize.
Fix: resolution order in applyRuntimeModelEnv is now:
1. payload.Model (caller passed the canvas-picked model id verbatim)
2. envVars["MODEL"] (workspace_secret persisted from persona env)
3. envVars["MODEL_PROVIDER"] (legacy canvas Save+Restart shape)
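A Go sketch of the precedence chain (simplified; the real applyRuntimeModelEnv mutates envVars in place rather than returning a value):

```go
package main

import "fmt"

// resolveModel sketch of the post-fix order: caller-supplied model id,
// then the persisted persona MODEL secret, then the legacy
// MODEL_PROVIDER slug. Empty strings fall through to the next source.
func resolveModel(payloadModel string, envVars map[string]string) string {
	if payloadModel != "" {
		return payloadModel // 1. canvas-picked model id, verbatim
	}
	if m := envVars["MODEL"]; m != "" {
		return m // 2. workspace_secret persisted from persona env
	}
	return envVars["MODEL_PROVIDER"] // 3. legacy Save+Restart shape
}

func main() {
	env := map[string]string{"MODEL": "MiniMax-M2.7-highspeed", "MODEL_PROVIDER": "minimax"}
	// Restart path (empty payload.Model): MODEL secret is preserved.
	fmt.Println(resolveModel("", env)) // → MiniMax-M2.7-highspeed
}
```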
Tests
-----
TestApplyRuntimeModelEnv_PersonaEnvMODELSecretPreserved — locks in
the new resolution order with four cases:
- MODEL secret wins over MODEL_PROVIDER slug (persona-env shape)
- MODEL secret wins even when same as MODEL_PROVIDER
- MODEL absent → fall back to MODEL_PROVIDER (legacy shape)
- Both absent → no MODEL set (no-op)
Existing TestApplyRuntimeModelEnv_SetsUniversalMODELForAllRuntimes
continues to pass — fix is strictly additive on the precedence chain.
Lets a workspace declare that it (and its entire subtree) should be skipped
during /org/import. Pointer-typed `*bool` so we distinguish "explicitly
false" from "unset" (default = spawn).
## Use case
The dev-tree org template ships the full role taxonomy (Dev Lead with
Core Platform / Controlplane / App & Docs / Infra / SDK Leads, each with
their own engineering / QA / security / UI-UX children — 27 personas
total in a single import). Some setups need a smaller set:
- Local dev on a memory-constrained machine
- Demo / smoke runs that don't need the full org breathing
- Customer trials starting with leadership-only before fan-out
Pre-fix the only options were:
- Edit the canonical template (mutates shared state)
- Author a parallel slimmer template (duplicates structure)
- Manual workspace deprovision after full import (wasteful — already paid
the docker pull / build cost)
`spawning: false` is the per-workspace knob that solves this without
touching the canonical template structure.
## Semantics
- Unset: workspace spawns (current behaviour, no migration)
- `spawning: true`: explicitly spawns (same as unset)
- `spawning: false`: workspace is skipped AND every descendant is
skipped. The guard sits BEFORE any side effect in
createWorkspaceTree — no DB row, no docker provision, no children
recursion. A false-spawning subtree is genuinely a no-op except for
the log line. countWorkspaces still counts the subtree (so /org/templates
numbers reflect the full structure).
## Stage A — verified
Local dev-only template that wraps teams/dev.yaml (Dev Lead) with
children:[] cleared on the 5 sub-team yaml files, plus 3 floater
personas (Release Manager / Integration Tester / Fullstack Engineer).
/org/import returned 9 workspaces. Drop-in: same result via
`spawning: false` on each sub-tree root in the future.
## Stage B — N/A
Pure additive feature on the org-template handler. No SaaS deploy chain
implications.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## org_import.go — persona env injection root-cause fix
The Phase-3 fix from earlier today (`feedback/per-agent-gitea-identity-default`)
introduced loadPersonaEnvFile to inject persona-specific creds into
workspace_secrets on /org/import. It passed `ws.Role` as the persona-dir
lookup key, but in our dev-tree org.yaml shape `role:` carries the
multi-line descriptive text the agent reads from its prompt
("Engineering planning and team coordination — leads Core Platform,
Controlplane, ..."), while `files_dir:` holds the short slug
(`core-lead`, `dev-lead`, etc.) matching
`~/.molecule-ai/personas/<files_dir>/env`.
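The key-selection contract can be sketched as follows (the slug regex is an assumption standing in for isSafeRoleName's actual rules):

```go
package main

import (
	"fmt"
	"regexp"
)

// safeSlug is an assumed stand-in for isSafeRoleName: a plain,
// directory-safe name like "core-lead" or "dev-lead".
var safeSlug = regexp.MustCompile(`^[a-z0-9][a-z0-9-]*$`)

// personaEnvKey accepts only a slug-shaped key; multi-line descriptive
// role text is rejected, so files_dir is the correct lookup source.
func personaEnvKey(key string) (string, bool) {
	if safeSlug.MatchString(key) {
		return key, true // e.g. "core-lead" → personas/core-lead/env
	}
	return "", false
}

func main() {
	key, ok := personaEnvKey("core-lead") // files_dir slug
	fmt.Println(key, ok)                  // → core-lead true
}
```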
isSafeRoleName silently rejected the multi-word role text → no persona
env loaded → every imported workspace booted with zero
workspace_secrets rows → no ANTHROPIC / CLAUDE_CODE / MINIMAX auth in
the container env → claude_agent_sdk wedged on `query.initialize()`
with a 60s control-request timeout.
After the fix, /org/import on the dev tree (27 personas) populates
8 workspace_secrets per workspace (Gitea identity + MODEL/MODEL_PROVIDER
+ provider-specific token), 5 of 6 leads boot online, and the
remaining wedges trace to a separate runtime-template-repo bug
(workspace-template-claude-code's claude_sdk_executor.py doesn't
dispatch on MODEL_PROVIDER=minimax — filed separately).
## Dockerfile.dev — docker-cli + docker-cli-buildx
Without these, every claude-code/tier-2 workspace POST fails-fast:
- without docker-cli, provisioning hits `exec: "docker": executable file not found`
- with docker-cli alone (no buildx), `docker build` fails with
  `ERROR: BuildKit is enabled but the buildx component is missing or broken`
Both packages are now installed in the dev image; verified with
`docker exec molecule-core-platform-1 docker buildx version`.
## Stage A verified
Local /org/import dev-only path: 27 workspaces created, all 27 receive
persona env injection (8 secrets each — Gitea identity + provider creds).
Lead workspaces (claude-code-OAuth tier) boot online.
## Stage B — N/A
Local-dev-only path (docker-compose.dev.yml + dev image). Tenant EC2
provisioning uses Dockerfile.tenant (untouched).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Follow-up to the workspace-template visibility flip in 558e4fee. After
flipping the 5 private workspace-templates public (#192 root cause),
the harness-replays clone moved one step deeper to the org-templates
list, where 6 of 7 were also private. Hongming-confirmed flip plan:
- 5 of 6 (molecule-dev, free-beats-all, medo-smoke, molecule-worker-gemini,
ux-ab-lab) — flipped public per `feedback_oss_first_repo_visibility_default`.
These are unambiguously OSS-template-shape: generic README, no
customer-shaped names, no creds in content.
- 1 of 6 (reno-stars) — name itself is customer-shaped (would expose
customer/tenant identity). Kept private; removed from manifest.json
per Hongming. Will be handled at provision-time via the per-tenant
credential resolver designed in internal#102 (Layer-3 RFC).
Documents the OSS-surface contract in two places:
- manifest.json _comment: every entry MUST be public; Layer-3 lives elsewhere
- clone-manifest.sh comment block: rationale + the explicit ci-readonly
team-grant escape hatch (review-gated, not default).
Closes the second clone-fail layer of #192. Combined with 558e4fee +
the workspace-template visibility flips, the Pre-clone manifest deps
step should now succeed anonymously for the full registered set.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
5 of 9 workspace-template repos (openclaw, codex, crewai, deepagents,
gemini-cli) had been marked private with no team grant for AUTO_SYNC_TOKEN
bearer (devops-engineer persona). Pre-clone manifest deps step 404'd on
the first private repo encountered, failing every Harness Replays run.
Resolution path taken:
1. Flipped the 5 to public per `feedback_oss_first_repo_visibility_default`
— runtime/template/plugin repos default public; that's what makes them
OSS surface.
2. Scoped existing `ci-readonly` org team to legitimately-internal repos
only (compliance docs, RFCs-in-flight). Workspace templates removed
from it.
3. Filed internal#102 RFC for Layer-3 (customer-owned + marketplace
third-party private repos) — that's a different shape entirely;
needs per-tenant credential-resolver, not org-team grants.
This commit is a documentation-only touch on the workflow file to (a)
record the root cause inline next to the existing pre-clone-fail
narrative, (b) trigger a fresh Harness Replays run that should now pass
the clone step.
Closes #192.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Investigating molecule-core#129 failure mode #1 (claude-code "Agent
error (Exception)") needs the workspace's docker logs to find the
actual exception. The canary tears down the tenant on every failure,
so the workspace container is destroyed before anyone can SSM in.
Add a workflow_dispatch input `keep_on_failure: bool` (default false).
When true, sets `E2E_KEEP_ORG=1` for the canary script — its existing
debug path skips teardown, leaving the tenant + EC2 + CF tunnel + DNS
alive. Operator can then SSM into the workspace EC2 (via the same
flow as recover-tunnels.py) and capture `docker logs` from the
claude-code container.
Cron-triggered runs never set the input (it only exists on dispatch),
so unattended scheduled canaries always tear down — no risk of
unattended cost leak.
Operator workflow:
1. Dispatch canary-staging.yml with keep_on_failure=true
2. Watch CI; on failure (likely, given the 38h chronic red),
note the SLUG / TENANT_URL printed at step 1/11
3. SSM exec into the workspace EC2 (us-east-2) and run
`docker logs <claude-code-container>` to find the actual
exception traceback
4. Manually delete via DELETE /cp/admin/tenants/<slug> when done
(the script logs this reminder on E2E_KEEP_ORG=1 path)
Refs: molecule-core#129 (canary investigation)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replaces the legacy nohup `go run ./cmd/server` setup with a fully
containerized local stack: postgres + redis + platform + canvas, all
with `restart: unless-stopped` so they survive Mac sleep/wake and
Docker Desktop daemon restarts.
## Changes
- **docker-compose.yml**
- `restart: unless-stopped` on platform/postgres/redis
- `BIND_ADDR=0.0.0.0` for platform — the dev-mode-fail-open default
of 127.0.0.1 (PR #7) made the host unable to reach the container
even with port mapping. Container netns is already isolated, so
binding all interfaces inside is safe.
- Healthchecks switched from `wget --spider` (HEAD → 404 forever
because /health is GET-only) to `wget -qO /dev/null` (GET).
Same regression existed on canvas; fixed both.
- **workspace-server/Dockerfile.dev**
- `CGO_ENABLED=1` → `0` to match prod Dockerfile + Dockerfile.tenant.
Without this, the alpine dev image fails with "gcc: not found"
because workspace-server has no actual cgo deps but the env was
forcing the cgo build path. Closes a divergence introduced in
9d50a6da (today's air hot-reload PR).
- **canvas/Dockerfile**
- `npm install` → `npm ci --include=optional` for lockfile-exact
installs that include platform-specific @tailwindcss/oxide native
binaries. Without these, `next build` fails with "Cannot read
properties of undefined (reading 'All')" on the
`@import "tailwindcss"` directive.
- **canvas/.dockerignore** (new)
- Excludes `node_modules` and `.next` so the Dockerfile's
`COPY . .` step doesn't clobber the freshly-installed container
node_modules with the host's (potentially stale or wrong-arch)
copy. This was the actual root cause of the canvas build break.
- **workspace-server/.gitignore**
- Adds `/tmp/` for air's live-reload build cache.
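A sketch of the corrected healthcheck shape (port/path match the `/health` probe above; interval values illustrative):

```yaml
healthcheck:
  # GET instead of HEAD — /health is GET-only, so `wget --spider` 404'd forever
  test: ["CMD", "wget", "-qO", "/dev/null", "http://127.0.0.1:8080/health"]
  interval: 10s
  retries: 5
```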
## Stage A verified
```
container status restart
postgres-1 Up (healthy) unless-stopped
redis-1 Up (healthy) unless-stopped
platform-1 Up (healthy, air-mode) unless-stopped
canvas-1 Up (healthy) unless-stopped
GET :8080/health → 200
GET :3000/ → 200
DB preserved: 407 workspace rows + 5 named personas
Persona mount: 28 dirs at /etc/molecule-bootstrap/personas
```
## Stage B — N/A
This is local-dev infrastructure only. None of these files ship to
SaaS tenants — production EC2s use `Dockerfile.tenant` + `ec2.go`
user-data, not docker-compose.
## Out of scope
- The decorative-but-broken `wget --spider` healthcheck has presumably
also been silently 404'ing on prod tenants. Ship a follow-up to
audit + fix the prod path; not done here to keep the PR scoped.
- Docker Desktop "Start at login" is a per-machine GUI setting that
must be toggled manually (Settings → General).
- The legacy heartbeat-all.sh that pinged 5 persona workspaces from
the host has been deleted (~/.molecule-ai/heartbeat-all.sh).
Per Hongming: each workspace is responsible for its own heartbeat.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The "Open issue on failure" step was failing on every canary run
because Gitea 1.22.6 doesn't expose /api/v1/actions endpoints
(per memory reference_gitea_actions_log_fetch). The threshold check
called github.rest.actions.listWorkflowRuns() to count consecutive
prior failures and gate issue creation behind 3 reds — that call
ALWAYS 404'd on Gitea, breaking the entire alerting step.
Net effect: the canary's own self-alerting was broken, so the
underlying staging regression went unflagged for 38h+
(2026-05-07 02:30 UTC → 2026-05-08 17:34 UTC, every cron tick red,
zero issues filed).
Fix: drop the consecutive-failures threshold entirely. File a
sticky issue on the FIRST failure; comment-on-existing handles
deduplication for subsequent failures. The auto-close-on-success
step is unchanged.
Why not a Gitea-compatible threshold (e.g., walk recent commit
statuses): comment-on-existing already gives ops a single
accumulating issue per regression streak. The threshold's purpose
was to avoid spamming on transient flakes — but with sticky issue
+ auto-close-on-green, transient flakes get one issue + one quick
close, which is fine signal. Filing on first failure is also
better UX: catches the regression in 30 min instead of 90 min.
Also: rewrote runURL from hardcoded https://github.com/... to
context.serverUrl so the link actually points at Gitea
(https://git.moleculesai.app) — was always broken on Gitea but
nobody noticed because the issue-filing step itself was broken.
Net: 21 insertions, 40 deletions. Removes WORKFLOW_PATH +
CONSECUTIVE_THRESHOLD env vars (no longer needed).
Tracked in: molecule-core#129 (failure mode 3 of 3)
Verification: yaml syntax-valid; no remaining github.rest.actions.*
calls; only github.rest.issues.* (all Gitea-supported per
memory feedback_persona_token_v2_scope).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes core#242 LOCAL surface. The PROD surface (CP user-data fetching
persona env files into tenant EC2's /etc/molecule-bootstrap/personas
via Secrets Manager) is filed as a follow-up.
WHAT THIS ADDS
Bind-mount on the platform service in docker-compose.yml:
${MOLECULE_PERSONA_ROOT_HOST:-${HOME}/.molecule-ai/personas}
→ /etc/molecule-bootstrap/personas (read-only)
Default source = ${HOME}/.molecule-ai/personas (the operator-host-mirrored
local dir populated by today's persona rotation work). Override via
MOLECULE_PERSONA_ROOT_HOST when running on a machine with a different
layout (CI runners, etc.).
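A sketch of the volume entry (service name illustrative):

```yaml
services:
  platform:
    volumes:
      - ${MOLECULE_PERSONA_ROOT_HOST:-${HOME}/.molecule-ai/personas}:/etc/molecule-bootstrap/personas:ro
```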
WHY READ-ONLY
workspace-server only reads persona env files; never writes back. The
read-only mount enforces that contract — a hostile plugin install path
can't tamper with the persona credentials it's about to consume.
WHY THIS PATH MATCHES PROD
/etc/molecule-bootstrap/personas is the same in-container path the
prod tenant EC2 will use. Same code path (org_import.go::loadPersonaEnvFile)
reads the same file regardless of mode — local-dev parity with prod
per feedback_local_must_mimic_production.
STAGE A VERIFICATION
- docker compose config: resolves to /Users/hongming/.molecule-ai/personas
correctly (28 persona dirs visible at source path)
- Persona env file shape verified: dev-lead's env contains GITEA_USER,
GITEA_USER_EMAIL, GITEA_TOKEN_SCOPES, GITEA_SSH_KEY_PATH,
MODEL_PROVIDER=claude-code, MODEL=opus (lead tier matches Hongming's
2026-05-08 mapping)
- Full handler test suite green (TestLoadPersonaEnvFile_HappyPath +
7 sibling tests pass; rejection tests still catch path traversal)
- Build clean
STAGE B SKIPPED (with justification per § Skip conditions)
This change is config-only (docker-compose.yml volume addition). The
prod tenant EC2s do NOT use docker-compose.yml — they use CP user-data
+ ec2.go's docker run script. So this PR has no prod blast radius.
Stage B (staging tenant probe) would be checking 'is the platform
using the new compose mount' on a SaaS tenant — and SaaS tenants
don't run docker compose. The actual prod-surface change is the
follow-up issue.
PROD SURFACE — FOLLOW-UP FILED
Tenant EC2 user-data needs to fetch persona env files from operator
host (or AWS Secrets Manager per the established
feedback_unified_credentials_file pattern) and stage them at
/etc/molecule-bootstrap/personas inside the workspace-server container.
Touches molecule-controlplane/internal/provisioner/ec2.go user-data.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Partially closes core#115. Schema-only change; the apply-endpoint filter
logic that reads this column lands with core#123 (drift detector +
queue + apply endpoint, the deferred follow-up of core#113).
Default 'production' so existing customers (Reno-Stars + any future
tenant) are default-safe. Synthetic dogfooding workspaces opt INTO
'canary' explicitly.
CHECK constraint pins the closed value set ('canary' | 'production') —
the apply endpoint's filter relies on the database to reject anything
else, so a future operator typo in PATCH /workspaces/:id ({update_tier:
'canery'}) returns a constraint violation, not silent fan-out to
nobody.
Partial index on canary rows since the apply-endpoint query path
('apply this update only to canary tier first') hits canary much more
often than production, and the production set is the much larger
default.
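A hedged sketch of the migration (column name follows the PATCH example below;
the table and index names are illustrative, not the actual migration text):

```sql
-- Default 'production' so existing rows are default-safe on apply.
ALTER TABLE workspaces
    ADD COLUMN IF NOT EXISTS update_tier TEXT NOT NULL DEFAULT 'production'
    CHECK (update_tier IN ('canary', 'production'));

-- Partial index: the apply-endpoint query filters on the (small) canary set.
CREATE INDEX IF NOT EXISTS idx_workspaces_update_tier_canary
    ON workspaces (id) WHERE update_tier = 'canary';
```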
WHAT THIS DOES NOT DO (lands with core#123)
- PATCH endpoint to flip a workspace to canary
- The apply endpoint that consults the column
- Tests that exercise canary-vs-production fan-out
Schema-only foundation; same pattern as core#113 (workspace_plugins).
PHASE 4 SELF-REVIEW
Correctness: No finding — IF NOT EXISTS guards, DEFAULT clause means
existing rows get 'production' on migration apply.
Readability: No finding — comment block documents the tier semantics
+ the deferral to core#123.
Architecture: No finding — additive ALTER, partial index for the
expected access pattern.
Security: No finding — no code path; column constraint reduces blast
radius of bad PATCH input.
Performance: No finding — partial index minimizes write amplification
on the production-default rows.
REFS
core#115 — this issue
core#123 — apply endpoint follow-up (will exercise this column)
core#113 — version subscription DB foundation (sibling pattern)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Partially closes core#113. Adds the DB foundation for the
version-subscription model. Drift detection + queue + admin apply
endpoint are follow-up scope (separate PR; filed as a new issue).
WHY THIS PR ONLY GETS US PART-WAY
Plugin install state today is filesystem-only — '/configs/plugins/<name>/'
inside the container. There's no DB record of 'plugin X installed at
workspace W from source S, tracking ref T'. That makes drift detection
impossible: nothing to compare upstream tags against.
This PR adds the table + the install-endpoint hook that writes to it.
With baseline tags now on every plugin (post internal#92), the table
starts collecting tracked-ref values immediately on the next install.
The actual drift-check job + queue + apply endpoint layer on top.
WHAT THIS ADDS
workspace_plugins table:
workspace_id FK → workspaces(id) ON DELETE CASCADE
plugin_name canonical name from plugin.yaml
source_raw full source URL the install used
tracked_ref 'none' | 'tag:vX.Y.Z' | 'tag:latest' | 'sha:<full>'
installed_at, updated_at
installRequest gains optional 'track' field (defaults to 'none').
Install handler upserts the workspace_plugins row after delivery
succeeds. DB write failure is logged but doesn't fail the install
(the plugin IS in the container; surfacing 500 misleads the caller).
validateTrackedRef enforces the closed set of accepted shapes:
'none' | 'tag:<non-empty>' | 'sha:<non-empty>'
Bare values like 'latest' / 'main' / version-strings without
prefix are rejected — the drift detector keys on prefix to know
what kind of resolution to do.
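A minimal sketch of the closed-set validator (function name from this message;
exact error strings are assumptions):

```go
package main

import (
	"fmt"
	"strings"
)

// validateTrackedRef accepts only 'none', 'tag:<non-empty>', or
// 'sha:<non-empty>'. Bare values like 'latest' or 'main' are rejected:
// the drift detector keys on the prefix to know what kind of upstream
// resolution to perform.
func validateTrackedRef(ref string) error {
	if ref == "none" {
		return nil
	}
	for _, prefix := range []string{"tag:", "sha:"} {
		if strings.HasPrefix(ref, prefix) {
			if len(ref) > len(prefix) {
				return nil
			}
			return fmt.Errorf("tracked_ref %q: empty value after %q prefix", ref, prefix)
		}
	}
	return fmt.Errorf("tracked_ref %q: must be 'none', 'tag:<ref>', or 'sha:<sha>'", ref)
}

func main() {
	fmt.Println(validateTrackedRef("tag:v1.2.3")) // accepted
	fmt.Println(validateTrackedRef("latest"))     // rejected: no prefix
}
```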
WHAT THIS DOES NOT ADD (filed separately)
- Drift detector job (cron / on-demand) that scans
'WHERE tracked_ref != none' rows and queues updates on upstream drift
- plugin_update_queue table (separate migration once detector lands)
- GET /admin/plugin-updates-pending and POST .../apply endpoints
- Tier-aware apply (core#115 — composes here)
PHASE 4 SELF-REVIEW (FIVE-AXIS)
Correctness: No finding — install endpoint behavior unchanged for
callers that don't pass 'track'. DB write is best-effort + logged
on failure. validateTrackedRef rejects ambiguous bare strings.
Readability: No finding — separate file plugins_tracking.go isolates
the new concern; install handler delta is a single 4-line block.
Architecture: No finding — additive table; existing schema untouched.
Migration 20260508160000_* uses the timestamp-prefixed convention.
Security: No finding — INSERT params via placeholders (no string
interpolation). validateTrackedRef rejects unexpected shapes before
the column constraint would.
Performance: No finding — one extra ExecContext per install. Install
is already seconds-scale (network fetch + tar + docker exec); rounds
to noise.
TESTS (1 new, all green)
TestValidateTrackedRef — pin closed set + structural validators
REFS
core#113 — this issue (foundation only; drift+queue+apply = follow-up)
internal#92, internal#93 — plugin/template baseline tags (now exists for tracking)
core#114 — atomic install (this PR composes — no atomicity regression)
core#115 — canary tier filter (will key off the same DB foundation)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes molecule-core#112. Composes with #114 (atomic install).
Before issuing restartFunc, classify the diff between staged and live:
- skill-content-only: only **/SKILL.md content changed
→ skip restart (Claude Code re-reads SKILL.md on
each Skill invocation; no in-memory cache)
- cold: anything else
→ restartFunc as before
(hooks/settings load at session start;
plugin.yaml is structural; added/removed files
require a fresh load)
DETECTION
- Hash every regular file in staged tree (host filesystem, sha256)
- Hash every regular file in live tree (in-container via docker exec
sh -c 'cd <livePath> && find . -type f -print0 | xargs -0 sha256sum')
- .complete marker dropped from comparison (mtime varies install-to-
install; including it would force-cold every reinstall)
- File added/removed → cold
- File content differs but isn't SKILL.md → cold
- All differences are SKILL.md basenames → skill-content-only
DEFAULTS COLD
- First install (no live tree) → cold
- Live tree read failure → cold (conservative; never hot-reload speculatively)
- Symlinks skipped during hash (same posture as tar walker)
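The classification over the two hash maps can be sketched like this (helper
names from this message; the map shape path → sha256 hex is an assumption):

```go
package main

import (
	"fmt"
	"path"
)

// isSkillMarkdown reports whether a relative path's basename is SKILL.md
// (case-sensitive, per the classifier contract).
func isSkillMarkdown(rel string) bool {
	return path.Base(rel) == "SKILL.md"
}

// classify compares staged-vs-live file hash maps (path -> sha256 hex).
// Any added/removed file, or any non-SKILL.md content change, is cold.
// Callers drop the .complete marker from both maps before comparison.
func classify(staged, live map[string]string) string {
	if len(live) == 0 {
		return "cold" // first install: no live tree
	}
	for p, h := range staged {
		lh, ok := live[p]
		if !ok {
			return "cold" // file added
		}
		if lh != h && !isSkillMarkdown(p) {
			return "cold" // non-SKILL.md content change
		}
	}
	for p := range live {
		if _, ok := staged[p]; !ok {
			return "cold" // file removed
		}
	}
	// All differences (possibly none, vacuously) are SKILL.md basenames.
	return "skill-content-only"
}

func main() {
	live := map[string]string{"plugin.yaml": "a", "skills/x/SKILL.md": "b"}
	staged := map[string]string{"plugin.yaml": "a", "skills/x/SKILL.md": "c"}
	fmt.Println(classify(staged, live)) // skill-content-only
}
```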
PHASE 4 SELF-REVIEW
Correctness: No finding — all error paths default to cold; never
falsely classify as skill-content-only. The .complete drop is
a deliberate exception (the marker is bookkeeping, not content).
Readability: No finding — single-purpose helpers (hashLocalTree,
hashContainerTree, isSkillMarkdown, shQuote) each do one thing.
The classifier itself reads as 'compare set, then walk diff with
isSkillMarkdown gate.'
Architecture: No finding — composes existing execAsRoot primitive;
new helpers in plugins_classifier.go don't touch any other
handler. Old behavior unchanged when live read fails.
Security: No finding — shQuote single-quotes any non-trivial path,
pluginName comes from validatePluginName-validated source, and
the docker exec command takes the path as a single arg (xargs -0
handles binary-safe path delimiting). Symlinks skipped.
Performance: No finding — adds two tree walks (host + container)
per install. Container walk is one docker exec call returning
sha256 lines; for typical plugins (~10-50 files) round-trip is
~100ms. Versus the saved ~5-10s of restart on a hot-reloadable
update, this is a clear win.
TESTS (4 new, all green; full handler suite green)
TestIsSkillMarkdown — basename match, case-sensitive
TestHashLocalTree_StableHash — re-hash same dir = same map
TestHashLocalTree_SymlinkSkipped — hostile link doesn't poison classifier
TestShQuote — quoting boundary for shell injection safety
REFS
molecule-core#112 — this issue
molecule-core#114 — atomic install (.complete marker added there)
Reno-Stars iteration safety (Hongming 2026-05-08)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes molecule-core#114 for the docker (local-OSS) path.
EIC (SaaS) path tracked as a follow-up — same shape, different
exec primitives (ssh vs docker exec); shipping both in one PR
doubles the test surface.
THE FOUR-STEP DANCE
1. STAGE — docker.CopyToContainer extracts tar into
/configs/plugins/.staging/<name>.<ts>/
2. SNAPSHOT — if /configs/plugins/<name>/ exists, mv to
/configs/plugins/.previous/<name>.<ts>/
3. SWAP — atomic mv staging → live (single rename(2))
4. MARKER — touch /configs/plugins/<name>/.complete
Workspace-side plugin loaders should refuse to load any plugin dir
without .complete (separate small change, not in this PR — the marker
write is the necessary precursor; consumer side is a follow-up so
existing-content plugins don't break before they're re-installed).
ROLLBACK
- Stage failure: rm -rf staging dir; live untouched
- Snapshot failure: rm -rf staging dir; live untouched (no rename happened)
- Swap failure with snapshot present: mv previous back to live
- Swap failure (no snapshot): rm -rf staging; live (which never
existed) stays absent
- Marker failure: content already in place, log loudly with manual
recovery hint (touch <plugin>/.complete) — don't roll back since
the new content is what we wanted, just unmarked
GC
Best-effort delete of previous-version snapshot after successful
marker write. Failures non-fatal — next install or a separate
sweeper reclaims. Sweeper for stale .previous/* across reboots is
follow-up scope.
CONCURRENCY
Each install gets a unique stamp (UTC second precision), so two
concurrent reinstalls land in distinct staging dirs and the second
swap simply overwrites the first's live result. The atomicity is
per-install, not cross-install — by design (the platform serializes
POST /workspaces/:id/plugins via Go-side semaphore upstream of
this code, so cross-install collisions don't reach here).
CHANGES
+ plugins_atomic.go — installVersion + atomicCopyToContainer
+ plugins_atomic_tar.go — tarWalk/tarHostDirWithPrefix helpers
+ plugins_atomic_test.go — 5 unit tests (paths, stamp shape,
tar happy path, symlink-skip, prefix
normalization). All green.
~ plugins_install_pipeline.go::deliverToContainer — swap
copyPluginToContainer call to atomicCopyToContainer
Old copyPluginToContainer is retained (still called by Download()) so
this PR is purely additive on the install path; no public API change.
PHASE 4 SELF-REVIEW (FIVE-AXIS)
Correctness: Required (addressed) — swap-failure rollback writes mv
of previous back to live before returning the error; if rollback
itself fails, we wrap both errors and surface the combined fault.
Marker-write failure is treated as content-landed-but-unmarked
(LOG, don't roll back the new content).
Readability: No finding — installVersion path methods make the
/staging/.previous/live/marker layout obvious from one struct.
tarWalk extracted from the inline filepath.Walk in
plugins_install_pipeline.go for testability.
Architecture: No finding — atomicCopyToContainer composes existing
execAsRoot / docker.CopyToContainer primitives; no new dependencies.
Old copyPluginToContainer kept for Download() — single responsibility
per function.
Security: No finding — symlinks still skipped during tar walk
(defense vs hostile plugin escaping its own dir). Marker writes
use composeable path.Join, no user input touches the path.
Performance: No finding — adds ~4 docker exec calls per install
(mkdir, mv-snapshot, mv-swap, touch) on top of the one
CopyToContainer. Each exec ~50-100ms in practice; install
end-to-end was already seconds-scale, this rounds to noise.
REFS
molecule-core#114 — this issue
Companion: molecule-core#112 (hot-reload classifier — depends on .complete marker)
Companion: molecule-core#113 (version subscription — uses install machinery)
EIC follow-up: separate issue to be filed for SaaS path parity
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes core#116. Brings local-dev iteration parity with the canvas's
Turbopack HMR — edit a Go file, see the platform restart in <5s
instead of running 'docker compose up --build' (~30s) per change.
USAGE
make dev # docker compose with air-driven live reload
make up # production-shape stack (no air, normal Dockerfile)
WHAT THIS ADDS
workspace-server/.air.toml — air watch config
workspace-server/Dockerfile.dev — air-on-golang:1.25-alpine, dev-only
docker-compose.dev.yml — overlay swapping platform service
to Dockerfile.dev + bind-mounting
workspace-server/ source
Makefile — make {dev,up,down,logs,build,test}
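An illustrative .air.toml consistent with the settings described in this
message (debounce, test-edit exclusion); key names follow air's documented
config, exact values here are assumptions:

```toml
[build]
  cmd = "go build -o ./tmp/workspace-server ."
  bin = "./tmp/workspace-server"
  include_ext = ["go", "yaml"]
  exclude_regex = ["_test\\.go"]
  exclude_dir = ["tmp", "tests"]
  delay = 500              # debounce, ms
  exclude_unchanged = true # content-identical save is a no-op
```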
WHAT THIS DOES NOT TOUCH
workspace-server/Dockerfile (production multi-stage build)
docker-compose.yml (prod-shape stack)
CI workflows (build prod image directly)
Tenant deployment / SaaS (image swap stays the model)
Pure additive. Existing 'docker compose up' path unchanged; production
stays on the static binary. Air install pinned via go install at image
build time so the dev image is reproducible-enough for local use (we
don't pin air to a SHA — the dev image is rebuilt locally and updates
opportunistically).
PHASE 4 SELF-REVIEW (FIVE-AXIS)
Correctness: No finding — additive change, no existing path modified.
.air.toml watches .go + .yaml under workspace-server/, excludes
_test.go and tests dir so test edits don't trigger rebuild.
Dockerfile.dev mirrors prod's 'go mod download' so first rebuild
is fast.
Readability: No finding — three small files plus a Makefile, each
with header comments explaining the WHY, not just the WHAT. The
Makefile uses the standard ## help-target pattern.
Architecture: No finding — overlay pattern (docker-compose.dev.yml
on top of docker-compose.yml) is the standard compose convention
for env-specific overrides. Doesn't fork the prod path.
Security: No finding because no production code path; dev-only image
isn't built in CI and isn't published to ECR.
Performance: No finding — air debounce=500ms, exclude_unchanged=true
so a save that doesn't change content is a no-op rebuild.
REFS
core#116 — this issue
Companion: core#117 (workspace-side config-watcher for hot-reload of
config.yaml) — different scope; this issue is platform-only.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
TestStartSweeper_TransientErrorDoesNotCrashLoop leaks an in-flight
metric write across the test boundary: cycleDone fires inside the
fake's Sweep defer (before Sweep returns), waitForCycle returns
immediately after, cancel() lands, but the goroutine still has
metrics.PendingUploadsSweepError() to execute. Whether that write
happens before or after the next test's metricDelta() baseline read
is a coin-flip on slow CI hosts.
Outcome: TestStartSweeper_RecordsMetricsOnSuccess fails with
"error counter delta = 1, want 0" — looks like a real bug, isn't.
Instrumented analysis (per the file's existing waitForMetricDelta
docstring covering the same shape) confirms the metric IS getting
recorded, just AFTER the next test reads its baseline.
The Records* tests already use waitForMetricDelta to close this race
on their own assertions. This change extends the same shape to
TransientErrorDoesNotCrashLoop so it doesn't poison subsequent tests'
baselines.
Verified by running `go test -race -count=20 ./internal/pendinguploads/...`
locally — passes deterministically.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Trunk-based migration final cleanup for molecule-core. The 6 workflows
deleted here all existed to manage the staging↔main branch dance that
trunk-based makes obsolete:
- auto-promote-staging.yml fast-forward staging→main on green
- auto-promote-on-e2e.yml alt promote path on E2E green
- auto-promote-stale-alarm.yml alarm if staging promotion stalls
- auto-sync-main-to-staging.yml sync main→staging after UI merges
- auto-sync-canary.yml dry-run probe of the auto-sync
token+push path
- retarget-main-to-staging.yml rebase open PRs onto staging
After Phase 3A (PR #108 promoted 5 staging-only feature PRs to main)
and Phase 3B (PR #109 dropped staging-branch triggers from the 4 e2e
workflows), main is the only branch the CI cares about. None of the
above workflows have anything left to do; they're 1977 lines of
dead code (needed for the GitHub-era branch dance, obsolete on
Gitea trunk-based).
Rollback: `git revert` this commit to restore the workflows. They still
work mechanically; trunk-based just doesn't need them.
The `staging` branch on the remote is deleted in a follow-up step
(`git push origin --delete staging`) after this PR merges, so reviewers
can confirm CI runs cleanly on the new shape before the ref disappears.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wires the 28 dev-tree persona credentials minted 2026-05-08 into the
workspace-secrets path used by org_import. When a workspace.yaml carries
`role: <name>`, the importer now reads
$MOLECULE_PERSONA_ROOT/<role>/env (default
/etc/molecule-bootstrap/personas/<role>/env, populated by the bootstrap
kit on the tenant host) and merges the role's GITEA_USER /
GITEA_TOKEN / GITEA_TOKEN_SCOPES / GITEA_USER_EMAIL /
GITEA_SSH_KEY_PATH into the same envVars map that already feeds
workspace_secrets via parseEnvFile + crypto.Encrypt + INSERT.
PRECEDENCE
Persona env is the LOWEST layer:
0. Persona env (per-role)
1. Org root .env (shared)
2. Workspace .env (per-workspace)
Each later layer overrides the previous, so a workspace .env can
pin a different GITEA_TOKEN if it ever needs to (testing, override).
WHY THIS LAYERING
Workspaces should boot with the role's identity by default. .env
files stay the explicit-override mechanism for the (rare) case where
a workspace needs to deviate. No new behavior for workspaces with no
role: persona load is silent no-op when ws.Role is empty or unsafe.
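The layering reduces to a last-writer-wins merge; a minimal sketch (helper
name illustrative; the real importer merges into an existing envVars map):

```go
package main

import "fmt"

// mergeEnvLayers applies the precedence above: persona env is the lowest
// layer; org root .env then workspace .env each override earlier keys.
func mergeEnvLayers(personaEnv, orgEnv, workspaceEnv map[string]string) map[string]string {
	merged := map[string]string{}
	for _, layer := range []map[string]string{personaEnv, orgEnv, workspaceEnv} {
		for k, v := range layer {
			merged[k] = v // later layer wins
		}
	}
	return merged
}

func main() {
	out := mergeEnvLayers(
		map[string]string{"GITEA_USER": "dev-lead", "GITEA_TOKEN": "persona-token"},
		map[string]string{"SHARED_KEY": "org"},
		map[string]string{"GITEA_TOKEN": "override-token"}, // workspace pin wins
	)
	fmt.Println(out["GITEA_USER"], out["GITEA_TOKEN"], out["SHARED_KEY"])
}
```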
SECURITY
isSafeRoleName accepts only [A-Za-z0-9_-]+ (no '..', '/', or
separators) — admin-only construct, but defense-in-depth keeps the
persona dir shape invariant. Test
TestLoadPersonaEnvFile_RejectsTraversal pins the rejection set against
a planted target file.
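The guard itself is one anchored regexp; a sketch matching the character
class stated above:

```go
package main

import (
	"fmt"
	"regexp"
)

// isSafeRoleName accepts only [A-Za-z0-9_-]+, so '..', '/', and other
// path separators can never reach the persona-dir path join
// (defense-in-depth; role names are admin-authored).
var safeRoleName = regexp.MustCompile(`^[A-Za-z0-9_-]+$`)

func isSafeRoleName(role string) bool {
	return role != "" && safeRoleName.MatchString(role)
}

func main() {
	fmt.Println(isSafeRoleName("dev-lead"))         // true
	fmt.Println(isSafeRoleName("../../etc/passwd")) // false
}
```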
OPERATOR-HOST CONTRACT
The 28 persona env files live at /etc/molecule-bootstrap/personas/<role>/env
(mode 600, owner root:root) with the per-role token-scope tailoring
Hongming approved 2026-05-08 (D5). Synced via task #241. Override via
MOLECULE_PERSONA_ROOT for tests + non-prod hosts.
TESTS (7 new, all green)
TestLoadPersonaEnvFile_HappyPath — typical persona-env shape
TestLoadPersonaEnvFile_MissingDir — silent no-op when file absent
TestLoadPersonaEnvFile_EmptyRole — silent no-op when role empty
TestLoadPersonaEnvFile_RejectsTraversal — planted file unreachable
via '../../etc/passwd' etc.
TestLoadPersonaEnvFile_DefaultRoot — falls back to /etc/...
TestLoadPersonaEnvFile_OverwritesEmptyMap
TestIsSafeRoleName_Acceptance — positive + negative role names
PHASE 4 SELF-REVIEW (FIVE-AXIS)
Correctness: No finding — additive change, silent no-op on the ws.Role==''
path covers every existing workspace; tests cover happy path + each
rejection mode + missing-dir.
Readability: No finding — helper sits next to parseEnvFile in
org_helpers.go with a comment block explaining WHY persona is
lowest precedence.
Architecture: No finding — fits the existing 'merge .env into envVars
then INSERT INTO workspace_secrets' pattern that's been in place
since the .env-driven workspace secrets feature; no new dependencies,
no new tables.
Security: Required (addressed) — path traversal blocked by
isSafeRoleName. No finding beyond that since persona files are
admin-managed and the helper does not log token values.
Performance: No finding — one extra os.ReadFile per workspace at
import time; amortized over workspace lifetime, cost is negligible.
REFS
internal#85 — RFC for SOP Phase 4 + structured Five-Axis (parent context)
Saved memories: feedback_per_agent_gitea_identity_default,
feedback_unified_credentials_file
Task #241 — operator-host sync (already DONE; populated 28 dirs)
Task #242 — this PR
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Harness Replays job failed at "dependency failed to start: container
harness-tenant-alpha-1 is unhealthy" — that is not caused by this
merge (which adds workspace-server/internal/handlers code, not
container infra). Retry to confirm it was a transient environmental
issue (likely operator-host load/disk per internal#78).
Trunk-based migration: main is the only branch. Update 4 workflows
that fired on staging-branch pushes to fire on main instead.
- e2e-staging-canvas.yml: drop staging from push + pull_request
- e2e-staging-external.yml: drop staging from push + pull_request
- e2e-staging-saas.yml: drop staging from push + pull_request,
update header comment that references the (now-obsolete)
staging→main auto-promote flow
- redeploy-tenants-on-staging.yml: workflow_run.branches changes
from [staging] to [main] so the tenant redeploy fires when
publish-workspace-server-image runs on main
Workflows that target the staging tenant FLEET (canary-staging.yml,
e2e-staging-sanity.yml) are not changed — they fire on cron, the word
"staging" in their filenames refers to the deployment target environ-
ment, not the git branch.
Lands as Phase 3b after #108 promotes the 5 staging-only feature PRs
(Phase 3a). Phase 3c deletes the obsolete promote/sync workflows
(auto-promote-staging, auto-sync-main-to-staging, etc.) plus the
staging branch itself, after we no-op-verify both Phase 3a and 3b
green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This was supposed to fast-forward when each PR merged on staging,
but auto-promote-staging.yml has not been firing reliably on Gitea
since the GitHub suspension. Result: main is missing 5 substantive
feature PRs that landed on staging between 2026-04-29 and 2026-05-07:
- #102: test(org-include) symlink-based subtree composition contract
- #103: test(local-e2e) dev-department extraction end-to-end
- #104: fix(provisioner)+test EvalSymlinks templatePath; stage-2 e2e
- #105: feat(org-import) !external cross-repo subtree resolver (#222)
- #106: test(org-external) integration + e2e for !external resolver
Each PR was independently reviewed and CI-green at staging-merge time;
this commit promotes the merged state atomically. Use git log on main
after the merge to see the original PR-merge commits preserved.
Sister work: Phase 3 of internal#81 (trunk-based migration). Workflow
trigger updates land in a follow-up PR; staging-branch deletion happens
after a no-op verification deploy.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Five-Axis self-review pass on the !external resolver work (PRs #105+#106) caught three real issues that the unstructured 3-weakest review missed:
1. Cache validity gap — partial cache writes looked complete
2. Token persistence — token in URL userinfo got persisted to .git/config
3. Misleading function name post-refactor
This PR fixes all three:
- .complete marker file written atomically; wipe-and-refetch on partial cache
- Token via -c http.extraHeader, never embedded in URL
- Defense-in-depth ref .. deny (was already validated by repoSafeRefRegex but explicit + tested)
- Renamed buildCloneURL -> buildExternalCloneURL (collision with artifacts.go), rewriteFilesDirAndIncludes -> rewriteFilesDir
- Removed unused redactToken/shortHash helpers and crypto/sha1, encoding/hex, fmt imports
Approved by platform-engineer 2026-05-08T12:55Z.
Self-review of molecule-core PR #105 + #106 (the !external resolver
chain) surfaced 3 real correctness/security gaps and 2 readability
nits. Fixes all five in one PR since they're the same file's hardening.
(1) TOKEN LEAKAGE — fixed
Before: gitFetcher built clone URLs with auth in userinfo
(https://oauth2:TOKEN@host/repo.git). Two leak paths:
a. Token persisted in cloned repo's .git/config
b. Token could appear in clone error output captured via
cmd.CombinedOutput()
After: clone URL has no userinfo (https://host/repo.git). Auth is
layered on via -c http.extraHeader=Authorization: token ...
which sends the header per-request without persisting. Plus a
redactToken() pass over any error string before it surfaces in
fmt.Errorf, as belt-and-braces.
Tradeoff: token now visible in 'ps aux' for the duration of the
git child process (same as before via env var), but no longer
in any persistent state.
(2) CACHE-VALIDITY FOOTGUN — fixed
Before: cache-hit was 'cacheDir/.git exists'. A clone interrupted
after .git was created but before content finished writing would
leave a partially-written cache that subsequent imports treated
as hit, returning stale/incomplete content forever (no self-heal).
After: cache-hit also requires a .complete marker file written
only AFTER successful clone+rename. Partially-written cache is
treated as cache-miss and re-fetched cleanly (after RemoveAll
on the partial dir to avoid blocking the new clone's mkdir).
(3) REF '..' DENY — fixed
Before: safeRefPattern '^[a-zA-Z0-9_./-]+$' allowed '..' as a
substring. Git itself rejects most refs containing '..', but
defense-in-depth says don't depend on the downstream tool's
validation when sanitizing input at the boundary.
After: explicit strings.Contains(ref.Ref, '..') check.
(4) NAMING CLEANUP — fixed
Before: rewriteFilesDirAndIncludes() — name claims to rewrite
!include scalars but doesn't (we removed that during PR-A
development; double-prefix bug). Misleading for readers.
After: rewriteFilesDir(). Docstring updated to explicitly explain
why !include paths are NOT rewritten (relative to subDir, naturally
inside cache).
Also: removed unused buildAuthedURL() (replaced by
buildExternalCloneURL + authConfigArgs split), removed unused
shortHash() helper (replaced by os.MkdirTemp), removed unused
crypto/sha1 + encoding/hex + fmt imports, removed stray
'_ = fmt.Sprint' line in integration test.
NEW TESTS
- TestGitFetcher_RejectsRefWithDoubleDot (defense-in-depth on ref input)
- TestGitFetcher_CacheValidatedByCompleteMarker (partial cache → re-fetch)
VERIFIED LOCALLY 2026-05-08
Full ./internal/handlers/ suite: ok (7.8s, 14 external-resolver tests
+ all existing tests). Two new tests cover the two new behaviors.
Refs:
internal#77 — extraction RFC
molecule-core#105 (resolver), #106 (tests) — original implementation
Hongming code-review-and-quality skill invocation 2026-05-08 + 'fix all'
PR-B (local bare-git integration, task #233):
workspace-server/internal/handlers/org_external_integration_test.go
Three tests using git's GIT_CONFIG_COUNT/KEY/VALUE env-var-injected
insteadOf URL rewrite — process-scoped, no ~/.gitconfig pollution:
- TestGitFetcher_RealClone_LocalRedirect: full resolver chain end-to-
end with REAL git clone against a local bare-repo, asserts cache
population + content materialization + path rewrite + cache-hit on
second invocation.
- TestGitFetcher_RealClone_BadRefFails: nonexistent ref surfaces
git's error cleanly through the ls-remote step.
- TestGitFetcher_DirectFetch_CacheHit: gitFetcher.Fetch direct
invocation (no resolver wrapping); verifies cache-hit returns
same dir + same SHA, no clobber.
Production code untouched — insteadOf rewrite makes the production
gitFetcher think it's cloning from Gitea, but git rewrites at clone
time to file://<barePath>. Tests the real shell-out + parsing.
PR-C (live Gitea e2e, task #234):
workspace-server/internal/handlers/local_e2e_dev_dept_test.go
TestLocalE2E_ExternalDevDepartment — minimal parent template that
uses !external against the LIVE molecule-ai/molecule-dev-department
repo. No symlink, no /tmp/local-e2e-deploy fixture. Composition
resolves over network at import time.
Asserts:
- 28+ dev-tree workspaces resolve through the fetched cache
(matches the count from TestLocalE2E_DevDepartmentExtraction)
- Q1 placement: 'Documentation Specialist' present (under app-lead)
- Q2 placement: 'Triage Operator' present (under dev-lead)
- Every workspace's files_dir is cache-prefixed (proves rewrite ran)
- Every workspace's resolveInsideRoot+Stat succeeds
(would fail provisioning if not)
Skipped if Gitea unreachable (TCP probe to git.moleculesai.app:443)
or git binary absent — won't false-fail offline runners.
VERIFIED LOCALLY 2026-05-08:
--- PASS: TestGitFetcher_RealClone_LocalRedirect (0.26s)
--- PASS: TestGitFetcher_RealClone_BadRefFails (0.15s)
--- PASS: TestGitFetcher_DirectFetch_CacheHit (0.23s)
--- PASS: TestLocalE2E_ExternalDevDepartment (0.55s)
workspaces resolved through !external: 28
Full ./internal/handlers/ test suite: ok (no regressions)
Together with PR-A's unit tests (#105), the !external resolver is now
covered at three layers:
- unit (fakeFetcher injection): allowlist, validation, path rewrite
- integration (real git, local bare-repo): clone, cache, ls-remote
- e2e (real git, live Gitea, live dev-department): full chain
Refs:
internal#77 — extraction RFC (Phase 3a phasing in comment 1995)
task #233 (PR-B), task #234 (PR-C)
Hongming GO 2026-05-08 ('do PR-B/C/D')
Adds gitops-style cross-repo subtree composition to the platform's
org-template importer. Replaces (eventually) the operator-side
filesystem symlink approach shipped in PR #5.
DESIGN
See internal#77 comment 1995 for the full design doc + decision points
agreed with Hongming 2026-05-08.
Schema: a `!external`-tagged mapping anywhere a workspace entry is
allowed (workspaces:, roots:, children:):
- !external
repo: molecule-ai/molecule-dev-department
ref: main
path: dev-lead/workspace.yaml
url: git.moleculesai.app # optional; default = MOLECULE_EXTERNAL_GITEA_URL or git.moleculesai.app
At resolve time the platform fetches the repo at ref into a content-
addressable cache under <orgBaseDir>/.external-cache/<repo>/<sha>/,
loads <cacheDir>/<path>, recursively resolves nested !include /
!external in the loaded subtree, then rewrites every files_dir scalar
in the fully-resolved subtree to be cache-prefixed. Downstream
pipeline (resolveInsideRoot, plugin merge, CopyTemplateToContainer)
sees ordinary in-tree paths.
IMPLEMENTATION
- org_external.go: ExternalRef type, fetcher interface (gitFetcher
production + injectable for tests), resolveExternalMapping resolver,
rewriteFilesDirAndIncludes path-rewrite walker, allowlistedHostPath
+ safeRefPattern + safeRepoCacheDir validation helpers.
- org_include.go: 4-line hook in expandNode dispatching MappingNode
with Tag=="!external" to resolveExternalMapping.
- org_external_test.go: 8 unit tests with fakeFetcher injection
(no network):
* happy path (top + nested workspace files_dir cache-prefixed)
* allowlist rejection (github.com/foo/bar)
* path-traversal rejection (../../etc/passwd)
* malformed ref rejection ("main; rm -rf /")
* missing required fields (repo / ref / path)
* rewriteFilesDirAndIncludes basic + idempotent
* allowlistedHostPath env-override + glob
Path rewrite ONLY rewrites files_dir scalars. !include scalars are
NOT rewritten — they resolve relative to their containing file's
directory, which post-fetch is naturally inside the cache, so
relative !includes Just Work without modification.
ALLOWLIST + AUTH
- Default allowlist: git.moleculesai.app/molecule-ai/.
- Override: MOLECULE_EXTERNAL_REPO_ALLOWLIST (comma-separated
prefixes; trailing /* or / supported).
- Auth: MOLECULE_GITEA_TOKEN env var injected into clone URL.
Optional — falls back to unauthenticated for public repos.
- Reject: malformed refs, path-traversal, non-allowlisted hosts.
CACHE
- Location: <orgBaseDir>/.external-cache/<safe-repo>/<sha>/.
Operators add to .gitignore.
- Content-addressable: same (repo, sha) reuses cache, no overwrite.
- Atomic clone via tmp-then-rename.
- Concurrency: race-tolerant — last-writer-wins on same SHA.
GC out of scope for v1 (filed as parked follow-up).
SECURITY (per SOP Phase 2)
Untrusted yaml input — all validated:
repo: allowlist (default molecule-ai/* on Gitea host)
ref: ^[a-zA-Z0-9_./-]+$ regex (rejects shell injection)
path: relative-and-down-only (rejects ../escape)
Auth: read-only token scoped to allowed orgs.
Recursion: maxExternalDepth=4 (vs maxIncludeDepth=16) to limit
network fan-out cost.
Cache poisoning: per-(repo, sha) content-addressable; can't poison
across SHAs.
Trust boundary: cloned content treated identically to a sibling-
cloned subtree (same model as current symlink approach).
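The three input checks can be sketched in shell (a simplified analogue of the
Go helpers — function names are illustrative, the path check is deliberately
conservative, and the allowlist match is a single-prefix simplification of
the comma-separated form):

```shell
#!/usr/bin/env bash
# Shell analogue of the Go-side validation helpers; names are hypothetical.
validate_ref() {
  # the resolver's regex: rejects shell metacharacters outright
  [[ $1 =~ ^[a-zA-Z0-9_./-]+$ ]]
}
validate_path() {
  # relative-and-down-only: no absolute paths, no '..' traversal
  case $1 in
    /* | ../* | */../* | */.. | ..) return 1 ;;
  esac
  return 0
}
validate_repo() {
  # single-prefix simplification of the comma-separated allowlist
  local allow=${MOLECULE_EXTERNAL_REPO_ALLOWLIST:-git.moleculesai.app/molecule-ai/}
  [[ $1 == "$allow"* ]]
}
```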
VERSIONING / BACKWARDS COMPAT
Pure additive. Existing !include and inline workspaces unchanged.
Existing dev-lead symlink (parent template PR #5) keeps working.
Migration of parent template to !external is a separate PR-D.
No DB schema change. No public API change.
VERIFIED LOCALLY
go test ./internal/handlers/ → ok (5.2s, all 8 new tests + existing)
Stub fetcher injection lets unit tests cover the resolver +
path-rewrite logic without network. PR-B (follow-up) adds an
integration test against a local bare-git repo. PR-C adds the
real-Gitea e2e test against the live dev-department repo.
Refs:
internal#77 — extraction RFC (comment 1995 = Phase 1+2 design)
task #222 — this PR is Phase 3a (PR-A in the design's phasing)
Hongming GO 2026-05-08 ('go' on 4 decision points + design)
Two changes that fall out of one root cause discovered while preparing
the local platform spin-up for the dev-department extraction (internal#77):
PROBLEM
CopyTemplateToContainer's filepath.Walk is called with templatePath
set to the workspace's resolved files_dir. With the cross-repo
symlink composition shipped in PR #5 (parent template's
dev-lead → ../molecule-dev-department/dev-lead/), the Dev Lead
workspace's files_dir is literally 'dev-lead' — i.e. the symlink
itself, not a path THROUGH the symlink.
filepath.Walk does not descend into a symlink leaf — it Lstats the
root, sees a symlink (mode bit set, not a directory), emits exactly
one entry, and returns. Result: the workspace's /configs/ tar would
ship empty. The other 38 workspaces are fine because their files_dir
paths just TRAVERSE the symlink (path resolution handles intermediate
symlinks via Lstat traversal); only the leaf-is-symlink case breaks.
FIX
workspace-server/internal/provisioner/provisioner.go:
Call filepath.EvalSymlinks on templatePath before filepath.Walk.
Resolves the leaf-symlink case for ALL templates, not just dev-dept.
Security: templatePath has already passed resolveInsideRoot's
path-string check at the call site; the trust boundary is the
operator-side /org-templates/ filesystem layout, not this
resolution step.
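The leaf-symlink behaviour reproduces outside Go — a shell analogue of the
walk (with `find` standing in for filepath.Walk, and `readlink -f` for the
EvalSymlinks step; directory names are throwaway fixtures):

```shell
#!/usr/bin/env bash
# Reproduces the leaf-symlink walk behaviour; fixtures mimic the PR #5
# sibling-repo layout but are throwaway paths, not the real templates.
set -euo pipefail
tmp=$(mktemp -d)
mkdir -p "$tmp/molecule-dev-department/dev-lead" "$tmp/molecule-dev"
touch "$tmp/molecule-dev-department/dev-lead/workspace.yaml"
ln -s ../molecule-dev-department/dev-lead "$tmp/molecule-dev/dev-lead"
# Walking the symlink leaf directly emits exactly one entry — the link itself:
unresolved=$(find "$tmp/molecule-dev/dev-lead" | wc -l | tr -d ' ')
# Resolving first (the filepath.EvalSymlinks analogue) walks the real tree:
resolved=$(find "$(readlink -f "$tmp/molecule-dev/dev-lead")" | wc -l | tr -d ' ')
echo "unresolved=$unresolved resolved=$resolved"   # → unresolved=1 resolved=2
```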
TEST
workspace-server/internal/handlers/local_e2e_dev_dept_test.go:
New TestLocalE2E_FilesDirConsumption — stage-2 of the local e2e.
For every workspace in the resolved OrgTemplate, asserts:
1. resolveInsideRoot(orgBaseDir, ws.FilesDir) succeeds.
2. os.Stat on the result returns a directory.
3. filepath.Walk after EvalSymlinks (mirroring the platform fix)
emits at least one file.
4. At least one workspace marker exists (workspace.yaml,
system-prompt.md, or initial-prompt.md).
Exercises the SECOND half of POST /org/import that
TestLocalE2E_DevDepartmentExtraction (PR #103) didn't cover.
VERIFIED LOCALLY (2026-05-08, against post-extraction Gitea state):
--- PASS: TestLocalE2E_FilesDirConsumption (0.05s)
checked 39 workspaces with files_dir
All 39 walk paths emit non-empty file sets with valid workspace markers.
REGRESSION GUARD
Without the EvalSymlinks fix, this test fails on Dev Lead with:
files_dir 'dev-lead' at '/.../molecule-dev/dev-lead' is empty —
CopyTemplateToContainer would produce empty /configs/
Refs:
internal#77 — extraction RFC
molecule-core#102 (resolver symlink contract test)
molecule-core#103 (stage-1 e2e: include resolution)
Hongming GO 2026-05-08 ('go' on the 3 pre-spin-up optimizations)
Phase 4 (local-only) of internal#77 (dev-department extraction).
Adds TestLocalE2E_DevDepartmentExtraction that exercises the FULL platform
import path against the real molecule-ai-org-template-molecule-dev (post-slim)
and molecule-ai/molecule-dev-department (post-atomize) repos cloned as siblings
under /tmp/local-e2e-deploy/.
What it proves end-to-end:
- The dev-lead symlink at parent's template root is followed by
resolveYAMLIncludes (filepath.Abs/Rel-style security check passes,
os.ReadFile follows the link).
- Recursive !include chain through the symlinked subtree resolves:
parent's org.yaml → !include dev-lead/workspace.yaml (symlinked)
→ !include ./core-lead/workspace.yaml → !include ./core-be/workspace.yaml
(the atomized children use relative paths, no '..').
- 39 workspaces enumerate after resolution: 5 PM-tree + 6 Marketing-tree
+ 28 dev-tree (Dev Lead + 5 sub-team leads + 18 leaf workspaces +
3 floaters + 1 triage-operator).
- Q1+Q2 placements verified by sentinel name check: 'Documentation
Specialist' is reachable (under app-lead via app-docs sub-team),
'Triage Operator' is reachable (direct child of Dev Lead).
Test skips with t.Skipf if the local-e2e fixture isn't present on the
host — won't block CI on hosts that haven't set it up. To set up locally:
TESTROOT=/tmp/local-e2e-deploy
mkdir -p $TESTROOT && cd $TESTROOT
git clone https://git.moleculesai.app/molecule-ai/molecule-ai-org-template-molecule-dev.git molecule-dev
git clone https://git.moleculesai.app/molecule-ai/molecule-dev-department.git
cd /Users/<you>/molecule-core/workspace-server
go test -v -run TestLocalE2E_DevDepartmentExtraction ./internal/handlers/
Verified locally 2026-05-08:
--- PASS: TestLocalE2E_DevDepartmentExtraction (0.01s)
total workspaces (recursive): 39
Refs:
internal#77 — extraction RFC
molecule-core PR #102 — symlink-resolution contract test
molecule-ai/molecule-dev-department PRs #1, #2, #3 (scaffold + extract + atomize)
molecule-ai/molecule-ai-org-template-molecule-dev PR #5 (parent slim + symlink wire)
Hongming GO 2026-05-08 ('lets not go for staging right now, we do local test first')
SOP Phase 4 (local) — task #226
Two new tests in workspace-server/internal/handlers/org_include_test.go:
- TestResolveYAMLIncludes_FollowsDirectorySymlink: parent template's
org.yaml `!include`s into a sibling-repo subtree via a relative
directory symlink. The resolver's filepath.Abs/Rel security check
operates on path strings (passes), and os.ReadFile follows the
symlink at OS layer (file content delivered). Recursive nested
`!include`s within the symlinked subtree resolve correctly because
filepath.Dir(absTarget) keeps the literal symlink path as currentDir.
- TestResolveYAMLIncludes_RejectsSymlinkEscapingRoot: companion test
pinning current behavior where a symlink target outside the parent
root is followed (resolveInsideRoot doesn't EvalSymlinks). Asserted
as 'should resolve' so future hardening (if filepath.EvalSymlinks
is added) flips the test red and forces a coordinated update to the
dev-department subtree-composition pattern.
Why now: internal#77 RFC (dev-department extraction) selects symlink-
based composition over a future platform-level external: ref. These
tests pin the contract before the operator-side symlink convention
gets shipped, so a refactor or hardening of the resolver can't
silently break the production org-import path.
No production code changes. Pure additive test coverage.
Refs: internal#77 (Phase 3b verification — task #223)
Class B Hongming-owned CICD red sweep, e2e-api leg. Same substrate
hazard as PR #98 (handlers-postgres-integration) — Gitea act_runner
configures `container.network: host` operator-wide, so:
* Two concurrent e2e-api runs both attempted to bind `-p 15432:5432`
and `-p 16379:6379` on the operator host. Verified in run a7/2727
on 2026-05-07: `docker: Error response from daemon: Conflict. The
container name "/molecule-ci-redis" is already in use by container
af10f438...` — exit 125, job fails before any test runs.
* Hardcoded container names `molecule-ci-postgres` / `-redis` plus
the leading `docker rm -f` step meant a second job's startup also
KILLED the first job's still-running services.
Fix shape (mirrors PR #98 bridge-net pattern, adapted because the
platform-server is a Go binary on the host, not a containerised step):
1. Per-run unique container names: `pg-e2e-api-${RUN_ID}-${RUN_ATTEMPT}`,
`redis-e2e-api-${RUN_ID}-${RUN_ATTEMPT}`. Unique even across reruns
of the same run_id.
2. Ephemeral host port per run via `-p 0:5432` / `-p 0:6379` and
`docker port` lookup, exported as `DATABASE_URL` / `REDIS_URL` to
`$GITHUB_ENV`. No fixed host-port → no collision.
3. `127.0.0.1` (NOT `localhost`) in URLs — IPv6 first-resolve flake
fixed in #92 stays fixed.
4. `if: always()` cleanup so containers don't leak when test steps
fail.
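Items 1-2 reduce to the following shape (image tag, credentials, and function
name are illustrative, not the exact workflow step):

```shell
#!/usr/bin/env bash
# Sketch of the unique-name + ephemeral-host-port pattern; names and image
# are hypothetical stand-ins for the workflow's actual step.
start_pg() {
  local name="pg-e2e-api-${RUN_ID}-${RUN_ATTEMPT}"   # unique across reruns
  docker run -d --name "$name" -p 0:5432 postgres:16 >/dev/null
  # ask docker which host port it picked, then export it for later steps
  local hostport
  hostport=$(docker port "$name" 5432/tcp | head -n1 | cut -d: -f2)
  echo "DATABASE_URL=postgres://postgres@127.0.0.1:${hostport}/postgres" \
    >> "$GITHUB_ENV"
}
```

The redis container follows the same shape with `-p 0:6379`.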
Issue #94 items #2 + #3 also addressed:
* Pre-pull `alpine:latest` (provisioner uses it for ephemeral
token-write containers in `internal/handlers/container_files.go`).
* Idempotent `docker network create molecule-monorepo-net` (the
provisioner attaches workspace containers via that bridge —
`internal/provisioner/provisioner.go::DefaultNetwork`).
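The idempotent create is the standard inspect-or-create idiom (function name
illustrative):

```shell
#!/usr/bin/env bash
# Create the bridge only if it doesn't already exist — safe to re-run on
# every job startup.
ensure_net() {
  docker network inspect "$1" >/dev/null 2>&1 || docker network create "$1"
}
```

Called as `ensure_net molecule-monorepo-net` in the setup step.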
Issue #94 item #1 (timeouts) NOT bumped — recent log evidence shows
postgres ready in 3s, redis in 1s, platform in 1s when they DO come
up. Timeouts are not the bottleneck on the current substrate.
NOT addressed here (out of scope, separate change required):
* `Run E2E API tests` step has been failing on `Status back online`
because the platform's langgraph workspace template image
(`ghcr.io/molecule-ai/workspace-template-langgraph:latest`)
returns 403 Forbidden post-2026-05-06 GitHub org suspension. That
is a template-registry resolution issue (ADR-002 / local-build
mode) and belongs in a workspace-server change, not this workflow
file. This PR fixes the parallel-collision class and the workflow
setup hygiene; the langgraph-403 failure will still surface on
runs after this lands until template resolution is fixed
separately.
Verified manually on operator host 2026-05-08: docker now hands out
ephemeral ports on `-p 0:5432`, two parallel runs land on different
ports, both reach pg_isready GREEN.
Closes #94 (items #2 and #3; item #1 documented as not-bottleneck;
langgraph-template-403 referenced for follow-up).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Switches from services: block to --network molecule-monorepo-net with unique per-run container names. Avoids port-5432 collision when parallel Handlers-Postgres jobs run on host-network act_runner. Approved by security-auditor.
Class B verification — second consecutive green run to demonstrate the
fix isn't flaky.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Class B Hongming-owned CICD red sweep. The Handlers Postgres Integration
workflow has been silently failing on staging push and PRs ever since
#92 fixed the IPv6 flake — the IPv6 fix correctly pinned 127.0.0.1, but
unmasked a deeper issue: with our act_runner global container.network=host
config, multiple concurrent runs of this workflow each tried to bind
0.0.0.0:5432 on the operator host. The first wins; subsequent postgres
service containers exit with `FATAL: could not create any TCP/IP sockets`
+ `Address in use`. Docker auto-removes them (act_runner sets
AutoRemove:true), so by the time `Apply migrations` runs `psql`, the
container is gone — Connection refused, then `failed to remove container:
No such container` at cleanup time.
Per-job container.network override is silently ignored by act_runner
(`--network and --net in the options will be ignored.`), so we sidestep
`services:` entirely. The job container still uses host-net (required
for cache server discovery on the operator's bridge IP). We launch a
sibling postgres on the existing molecule-monorepo-net bridge with a
unique name per run (run_id+run_attempt) and connect via the bridge IP
read from `docker inspect`.
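A sketch of the sibling-container shape (image, password, and function name
are illustrative):

```shell
#!/usr/bin/env bash
# Launch a uniquely-named postgres sibling on the existing bridge and read
# its bridge IP back — no host-port binding, so parallel runs can't collide.
start_sibling_pg() {
  local name="pg-int-${RUN_ID}-${RUN_ATTEMPT}"
  docker run -d --name "$name" --network molecule-monorepo-net \
    -e POSTGRES_PASSWORD=ci postgres:16 >/dev/null
  docker inspect -f \
    '{{(index .NetworkSettings.Networks "molecule-monorepo-net").IPAddress}}' \
    "$name"
}
```

The host-net job container then reaches it via
`DATABASE_URL="postgres://postgres:ci@$(start_sibling_pg):5432/postgres"`.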
Verified manually on operator host 2026-05-08: 2× postgres on host-net
collides, but on the bridge with unique names + different IPs, both
succeed and each is reachable from a host-net job container.
Adds:
- always()-cleanup step so containers don't leak on test failure
- Diagnostic dump now includes the postgres container's docker logs
- Runbook at docs/runbooks/ documenting the substrate behavior + the
pattern future workflows should adopt for any `services:`-shaped need.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Class A red sweep — 3 first-tests timing out at the 5000ms default on the
self-hosted Gitea Actions Docker runner across 4 unrelated PRs (#82, #81,
#54, #53). The PRs share zero canvas/ surface — same 3 tests, same
cold-start signature, same shape on every run.
Root cause: `npx vitest run --coverage` cold-start cost (v8 coverage
instrumentation init + JSDOM bootstrap + heavy @/components/* and @/lib/*
import + first React render) consumes 5-7 seconds for the first
synchronous test in a heavyweight test file. Empirically:
ActivityTab "renders all 7 filter options" 5230ms (FAIL)
CreateWorkspaceDialog "opens the dialog ..." 6453ms (FAIL)
ConfigTab.provider "PUTs the new provider on Save" 5605ms (FAIL)
vs subsequent tests in the same files at 100-1500ms each. The component
code is correct (e.g. ActivityTab.FILTERS has 7 entries matching the
test). 1407 tests pass locally with --coverage in 9-15s; CI runs at 200s
under the same flag — the gap is import/transform/environment overhead,
not test logic.
Fix: CI-conditional `testTimeout: process.env.CI ? 30000 : 5000` in
canvas/vitest.config.ts. Local-dev sensitivity to genuine waitFor races
preserved; CI gets ~5x headroom over the worst observed first-test
(6453ms). Same shape Vitest documents at
<https://vitest.dev/config/testtimeout> and
<https://vitest.dev/guide/coverage#profiling-test-performance>.
Verification:
- Local: 5x runs of the 3 failing test files, all 74 tests green
(process.env.CI unset → 5000ms applies).
- Local: 7s sleep probe FAILS at 5000ms default and PASSES under
CI=true → ternary takes effect as written.
- Local: full canvas suite under CI=true with --coverage:
"Test Files 98 passed (98) | Tests 1407 passed (1407)".
Closes #96.
Refs: #82, #81, #54, #53.
Hostile self-review (3 weakest spots):
1. 30000ms is a guess, not a measurement. Mitigation: vitest still
emits per-test duration; a real 25s+ test will surface as a
duration regression and we dial down.
2. Doesn't fix the deeper Docker-runner-overhead root cause. True. That
is a multi-week perf project. The right trade today is unblocking 4
PRs from this single class.
3. Local-default of 5000ms means a real 8s race that flies on CI's
30000ms could pass without local sensitivity. Mitigation: dev-time
waitFor races are caught at the per-test level; suite-level cold-
start is the only legitimate >5s case here.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Conflicted files in .github/workflows/ taken from main:
.github/workflows/ci.yml
.github/workflows/e2e-staging-canvas.yml
.github/workflows/retarget-main-to-staging.yml
Conflicts arose from main advancing through PR #66/#79/#89 (CI workflow rewrites)
while staging hadn't picked up the changes yet. Main is the source of truth for
CI workflows; staging is downstream.
Co-authored-by: Claude (orchestrator)
Closes #88. Bundles localhost→127.0.0.1 + 2 other Gitea-act_runner flakes per feedback_gitea_actions_migration_audit_pattern. Approved by security-auditor.
The previous configs:-based fix (87b971a2) didn't actually fix the DinD
issue — Compose v2 falls back to bind mounts for `configs:` when swarm
mode is not active, so the resulting runc invocation still tries to
mount /workspace/.../cf-proxy/nginx.conf from the OUTER host filesystem
that the act_runner-vs-host-docker socket-mount can't see. Same
"not a directory" error returned.
Switch to a thin Dockerfile (cf-proxy/Dockerfile) that COPYs nginx.conf
into nginx:1.27-alpine. The build context is uploaded to the daemon as
a tarball, not bind-mounted from the host filesystem, so the path
translation gap doesn't apply. Verified locally: `docker build` +
`docker run cf-proxy nginx -T` reproduces the baked config end-to-end.
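The thin-Dockerfile shape is two lines (the config destination below is the
stock nginx path — an assumption here; see cf-proxy/Dockerfile for the real
file):

```shell
#!/usr/bin/env bash
# Writes the thin-Dockerfile shape into a scratch context dir. Build/verify
# commands are shown commented because they need a live daemon.
set -euo pipefail
ctx=$(mktemp -d)
cat > "$ctx/Dockerfile" <<'EOF'
FROM nginx:1.27-alpine
COPY nginx.conf /etc/nginx/nginx.conf
EOF
# cp cf-proxy/nginx.conf "$ctx/"     # context is tarballed to the daemon
# docker build -t cf-proxy "$ctx"
# docker run --rm cf-proxy nginx -T  # dump the baked config end-to-end
```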
Trade-off: ~2-3s build cost on every harness up. Acceptable for the
Gitea CI gate; local-dev re-builds the image only when nginx.conf
changes (Docker layer cache).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three workflows have been failing on every push to this Gitea repo for
GitHub-shaped reasons that don't translate to act_runner. Surfaced
while landing #84; bundled per `feedback_gitea_actions_migration_audit_pattern`
("bundle per-repo, not per-finding") instead of three separate PRs.
1) handlers-postgres-integration: localhost → 127.0.0.1
- lib/pq tries to dial localhost → ::1 first; the postgres service
container only listens on IPv4 → ECONNREFUSED → all
TestIntegration_* fail. Pin IPv4 to make the job deterministic.
2) pr-guards / disable-auto-merge-on-push: Gitea no-op
- The previous reusable-workflow caller invoked `gh pr merge
--disable-auto`, which calls GitHub's GraphQL API. Gitea returns
HTTP 405 on /api/graphql → step always fails. Inline the step so
it can detect Gitea (GITEA_ACTIONS=true OR repo url under
moleculesai.app) and no-op with a notice. Auto-merge gating is
moot on Gitea anyway: there's no `--auto` primitive being
touched. Job stays ALWAYS-RUN so branch protection's required
check still lands SUCCESS (avoids the SKIPPED-in-set trap from
`feedback_branch_protection_check_name_parity`).
3) Harness Replays: cf-proxy nginx.conf via docker `configs:` (not bind)
- act_runner runs the workflow inside a runner container; runc in
the docker daemon below resolves bind-mount source paths on the
OUTER host, not inside the runner. The path
`/workspace/.../cf-proxy/nginx.conf` is invisible there → "not a
directory" runc error. Switching to compose `configs:` packages
the file as content rather than a host bind, sidestepping the
DinD path-translation gap.
Local validation:
- YAML parsed clean for all 3 files.
- cf-proxy nginx.conf: standalone `docker compose run cf-proxy
nginx -T` reproduced the configs: mount end-to-end and dumped the
config correctly. The full harness compose still renders via
`docker compose config`.
Real-CI verification will land on this branch's first push.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The `HongmingWang-Rabbit/hermes-agent` fork is no longer reachable on
github.com (account suspended 2026-05-06). The patched fork now lives
at https://git.moleculesai.app/molecule-ai/hermes-agent. Same SHAs,
same branches — pure URL flip.
See molecule-ai/internal#72 for the github.com fork shell decision.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
actions/upload-artifact@v4+ and download-artifact@v4+ use the GHES 3.10+
artifact protocol that Gitea Actions (act_runner v0.6 / Gitea 1.22.x)
does NOT implement. Failure evidence from PR #54 run 1325, job 2:
::error::@actions/artifact v2.0.0+, upload-artifact@v4+ and
download-artifact@v4+ are not currently supported on GHES.
Pinned all 3 references to v3.2.2 (latest v3) in SHA-pinned form for
supply-chain hygiene, matching the existing `uses:` style in this repo.
Affected workflows:
- ci.yml (Canvas Next.js coverage upload, blocks `CI / Canvas (Next.js)`
required check on every PR — was the merge-queue blocker for #53,
#54, #69, #71, #76, #81)
- e2e-staging-canvas.yml (Playwright report + screenshots on failure)
No download-artifact callers exist in the repo, so the v3 pin can't break an
upload/download pairing anywhere. Drop these pins post-Gitea-1.23+ when the
v4 artifact protocol
ships, or migrate to a Gitea-native action.
Closes #210.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Class F of #75 sweep. /commits/{sha}/statuses replaces unavailable workflow-runs API. 4 mapping buckets verified against synthetic+real Gitea data. Approved by security-auditor.
Closes the chronic -race flake on TestPooledWithEICTunnel_PanicPoisonsEntry
and the handlers package as a whole (CI / Platform (Go) was intermittent
on staging, ~50% red on workspace-server-touching commits since 2026-04).
The race: tests swap the package-level poolJanitorInterval via t.Cleanup
(eic_tunnel_pool_test.go:61) AFTER an earlier test caused the global pool's
janitor goroutine to start. The janitor re-reads poolJanitorInterval on
every tick of its time.NewTicker loop — so the cleanup write races the
goroutine's read for the rest
of the process. Caught locally + on PR #84's CI run on Gitea.
Fix: capture the interval as a field on eicTunnelPool at newEICTunnelPool().
The janitor now reads p.janitorInterval, which never changes after construction.
Tests that override poolJanitorInterval before freshPool() still get the new
value (they set the package var before construction). The global pool's
janitor — created lazily once via sync.Once on first getEICTunnelPool() —
is now immune to t.Cleanup-driven swaps from later tests.
Surfaced while verifying #84 (SaaS plugin install via EIC SSH); folded
into this PR per the "fix root not symptom" rule rather than merging
around a chronic-red CI signal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two artifacts that unblock the parked follow-ups from #59:
1. scripts/edge-429-probe.sh (closes the "operator-blocked" status of
#62). An operator without CF/Vercel dashboard access can reproduce
a canvas-sized burst against a tenant subdomain and read each 429's
response shape — workspace-server bucket overflow (JSON body +
X-RateLimit-* headers) is distinguishable from CF (cf-ray) and
Vercel (x-vercel-id) by inspection of the report. Read-only,
parallel via background subshells (no GNU parallel dependency),
no credential use. Smoke-tested against example.com end-to-end.
2. docs/engineering/ratelimit-observability.md (closes the
"metric-blocked" status of #64). The existing
molecule_http_requests_total{path,status} counter + X-RateLimit-*
response headers already cover #64's acceptance criterion ("watch
metrics for two weeks"). The runbook collects the PromQL queries,
a decision tree for the re-tune (keep / per-tenant override /
change default), an alert rule template, and a hard "do not roll
ad-hoc per-bucket-key exposure" note (in-memory map includes
SHA-256 of bearer tokens — exposing it is a security review
surface, file a follow-up if needed).
Neither artifact changes runtime behaviour. Pure operational tooling.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the 🔴 docker-only row in docs/architecture/backends.md. Plugin
install on every SaaS tenant currently 503s with "workspace container
not running" because the handler is hardcoded to Docker exec but SaaS
workspaces live on per-workspace EC2s. Caught on hongming.moleculesai.app
when canvas POST /workspaces/<id>/plugins surfaced the error.
Mirrors the Files API PR #1702 pattern: dispatch on workspaces.instance_id
in deliverToContainer (and Uninstall). When set, push the staged plugin
tarball to the EC2 over the existing withEICTunnel primitive
(template_files_eic.go) and unpack into the runtime's bind-mounted config
dir (/configs for claude-code, /home/ubuntu/.hermes for hermes — see
workspaceFilePathPrefix). chown 1000:1000 to match the docker path's
agent-uid contract; restart via the existing dispatcher.
Direct host write rather than docker-cp via SSH because the runtime's
config dir is already bind-mounted into the workspace container — the
runtime sees the files on next start with no additional plumbing.
Adds InstanceIDLookup (parallel to RuntimeLookup) so unit tests don't
need a DB; production wires it in router.go like templates.go does.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Empty commit to trigger CI a second consecutive time per the SOP
'verify ≥1 representative workflow per class via workflow_dispatch
or push event ... ≥2 consecutive successful runs per class'.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Part of the post-#66 sweep to remove `gh` CLI dependencies that fail
silently against Gitea. Class F covers `gh run list --workflow=X
--commit=SHA` shapes — querying whether a specific workflow ran (and
how it finished) for a specific SHA.
Why this is the only call site in class F:
`gh run list` hits GitHub's `/repos/.../actions/runs` REST endpoint.
Gitea exposes ZERO endpoints under `/repos/.../actions/runs` —
verified 2026-05-07 via swagger inspection: only secrets, variables,
and runner-registration tokens live under /actions/. There's no way
to query workflow run state via the Gitea v1 API directly.
However, every Gitea Actions job DOES emit a commit status with
`context = "<Workflow Name> / <Job Name> (<event>)"` (verified
2026-05-07 by reading /repos/.../commits/{sha}/statuses on a recent
main SHA). That surface is exactly what we need: each workflow run
leg is one status row, the aggregate state encodes the run outcome,
and Gitea exposes it under `/api/v1/repos/.../commits/{sha}/statuses`
which IS available.
Affected:
`auto-promote-on-e2e.yml` (lines 172-180):
Old: `gh run list --workflow e2e-staging-saas.yml --commit $SHA
--json status,conclusion --jq ...` returning a 5-bucket string
like `completed/success` | `in_progress/none` | `none/none` |
`completed/failure` | `completed/cancelled`.
New: `curl /api/v1/repos/.../commits/$SHA/statuses` + jq filter on
contexts whose name starts with
`"E2E Staging SaaS (full lifecycle) /"`. Mapping:
  0 matched contexts          → "none/none"        (E2E paths-filtered out — same as before)
  any context = pending       → "in_progress/none" (defer)
  any context = error|failure → "completed/failure" (abort)
  all contexts = success      → "completed/success" (proceed)
The `completed/cancelled` arm of the case statement becomes
unreachable: Gitea status API doesn't expose a `cancelled` state
(it has success/failure/error/pending/warning), so per-SHA
concurrency cancellations now surface as `failure` and are handled
by the failure branch. Documented in-place; the cancelled arm is
kept as defense-in-depth for any future dual-host operation.
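Stripped of the curl/jq plumbing, the bucket logic reduces to (the real
implementation is a jq filter; function name here is illustrative, and
pending takes precedence over failure per the order above):

```shell
#!/usr/bin/env bash
# Pure-shell reduction of the mapping buckets: takes the matched contexts'
# states as arguments and prints the legacy bucket string.
map_statuses() {
  if [ $# -eq 0 ]; then echo "none/none"; return 0; fi
  local s pending=0 failed=0
  for s in "$@"; do
    case "$s" in
      pending)       pending=1 ;;
      failure|error) failed=1 ;;
    esac
  done
  if [ "$pending" -eq 1 ]; then echo "in_progress/none"
  elif [ "$failed" -eq 1 ]; then echo "completed/failure"
  else echo "completed/success"
  fi
}
```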
Verification:
- Live curl against the current main SHA returns `none/none` (E2E
was paths-filtered for that change set — expected).
- Synthetic-input jq tests verify all four mapping buckets:
no contexts → "none/none"
one context = pending → "in_progress/none"
success + success → "completed/success"
success + failure → "completed/failure"
- YAML syntax validates.
Token: continues to use act_runner's GITHUB_TOKEN (per-run, repo
read scope). The `/commits/{sha}/statuses` endpoint is repo-scoped,
no extra perms needed.
Closes part of #75 (the master tracking issue); companion PRs:
#80 (class A — `gh pr ...`), #81 (class D — `gh api ...`).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Hostile-self-review weakest-spot #2: if the devops-engineer persona
is ever renamed, the canary will go red even if everything else is
fine. Add an inline comment pointing the next editor at both files
that must update together (auto-sync-main-to-staging.yml's git
config + this canary's EXPECTED_PERSONA + the staging branch
protection's push_whitelist_usernames).
No behaviour change — comment-only.
While verifying Phase 4, found a real flaw in Probe 3 (`git ls-remote
refs/heads/staging`). On a public repo (which molecule-core is), Gitea
falls back to anonymous read on bad auth, so `ls-remote` succeeds even
with a junk token. The probe was therefore green-lighting rotated
tokens — false-green, the worst possible canary failure mode.
Rewritten to use `git push --dry-run` of the current staging SHA back
to `refs/heads/staging`:
- Push always authenticates (auth-gated on smart-protocol handshake,
before the dry-run can compute the empty-diff).
- NOP by construction: pushing the current tip back to itself is
"Everything up-to-date" with exit 0.
- Bad token → "Authentication failed", exit 128.
- Doesn't reach pre-receive (where branch-protection authz runs), so
scope is "auth only" — matches the design intent (failure mode B);
authz already covered daily by branch-protection-drift.yml.
Implementation note: `git push` requires a local repo. Spinning up a
fresh `git init` in a tempdir (~1KB, ~50ms) instead of pulling the
full repo via actions/checkout — actions/checkout would clone
~hundreds of MB for what amounts to "a place to run git from."
Local mutation tests pass:
- Real token: "Everything up-to-date" exit 0
- Junk token: "Authentication failed" exit 128 with actionable
::error:: messages pointing at the runbook
Header comment + runbook step-mapping updated to reflect new probe
shape. Refs: #72
Part of the post-#66 sweep to remove `gh` CLI dependencies that fail
silently against Gitea (which exposes /api/v1 only — no GraphQL → 405,
no /api/v3 → 404). Class A covers `gh pr list / view / diff / comment`
shapes.
Affected:
- `.github/workflows/auto-tag-runtime.yml`
Replaced `gh pr list --search SHA --json number,labels` with a curl
to `/api/v1/repos/.../pulls?state=closed&sort=newest&limit=50` +
jq filter on `merge_commit_sha == github.sha`. Same end-to-end
behaviour: locate the merged PR for this push, read its labels,
pick the bump kind. Defensive `?.name // empty` jq guard handles
unlabelled PRs without erroring. The 50-PR window is comfortably
larger than the volume of staging→main promotes that close in any
reasonable detection window.
- `scripts/check-stale-promote-pr.sh`
Rewrote `fetch_prs` and `post_comment` to call Gitea's REST API
directly. Gitea doesn't expose GitHub's compound `mergeStateStatus`
/ `reviewDecision` fields, so the new fetcher pulls
`/pulls?state=open&base=main` then for each PR pulls
`/pulls/{n}/reviews` and synthesizes the GitHub-shape JSON the rest
of the script (and the existing fixture-based unit tests) consume:
BLOCKED + REVIEW_REQUIRED ↔ mergeable=true AND 0 APPROVED reviews
DIRTY ↔ mergeable=false (alarm doesn't fire)
CLEAN + APPROVED ↔ mergeable=true AND ≥1 APPROVED review
Comment-posting moves to `POST /repos/.../issues/{n}/comments`
(Gitea treats PRs as issues for the comment surface, same as
GitHub's REST). All 23 fixture-driven unit tests still pass —
fixtures pass GitHub-shape JSON via PR_FIXTURE which short-circuits
the live fetch path.
- `scripts/ops/check_migration_collisions.py`
Replaced `gh pr list` + `gh pr diff` calls with stdlib `urllib`
against /api/v1. Helper `_gitea_get` centralizes auth + error
handling; uses GITEA_TOKEN env, falling back to GITHUB_TOKEN
(act_runner) and GH_TOKEN. Return shape from
`open_prs_with_migration_prefix` mimics the historical
`--json number,headRefName` so the call sites are unchanged. All 9
regex-classifier unit tests still pass; live integration test
against the production Gitea API returns 0 collisions for prefix=999
as expected.
curl invocation pattern is `curl --fail-with-body -sS` (NOT `-fsS` —
`-f` and `--fail-with-body` are mutually exclusive in modern curl;
caught by `curl: You must select either --fail or --fail-with-body,
not both` during local verification).
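The shared invocation shape (endpoint and token variable are illustrative):

```shell
#!/usr/bin/env bash
# --fail-with-body alone: a non-2xx still yields the error body (unlike -f),
# and combining it with -f is rejected by modern curl.
gitea_get() {
  curl --fail-with-body -sS \
    -H "Authorization: token ${GITEA_TOKEN:-}" \
    "$1"
}
```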
Token model: workflows pass act_runner's GITHUB_TOKEN (per-run, repo
read scope) — same surface used by the auto-sync fix in PR #66 plus
the surrounding workflows. No new repo secrets required.
Verification: bash unit tests (23/23 pass), python unittest (9/9 pass),
live curl call against production Gitea returns 200 with the expected
shape, YAML / shell / Python syntax all validate.
Closes part of #75. Other classes (D — `gh api`; F — `gh run list`)
land in follow-up PRs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause: same as #65/#73 — gh CLI calls Gitea GraphQL
(/api/graphql) which returns HTTP 405. Specifically:
- gh api -X PATCH /pulls/{N} sometimes works but is flaky on
Gitea (depends on gh's host-resolution layer)
- gh pr close / gh pr comment route through GraphQL → 405
Fix: replace all gh calls with direct curl REST calls to Gitea:
- PATCH /api/v1/repos/{owner}/{repo}/pulls/{index} body
{"base": "staging"} — retarget the PR base
- POST /api/v1/repos/{owner}/{repo}/issues/{index}/comments —
post the explainer comment (PRs are issues in Gitea, comments
share the issue endpoint)
- PATCH /api/v1/repos/{owner}/{repo}/pulls/{index} body
{"state": "closed"} — close redundant PR for #1884 case
Identity: switch from secrets.GITHUB_TOKEN (per-job ephemeral,
narrow scope on Gitea) to secrets.AUTO_SYNC_TOKEN (devops-engineer
persona). Same persona used by auto-sync (#66) and auto-promote
(#78). Per feedback_per_agent_gitea_identity_default. PR-edit and
comment do not need branch-protection bypass.
Curl-status-capture pattern hardened per
feedback_curl_status_capture_pollution: http_code via -w to its
own scalar, body to a tempfile, set +e/-e bracket so curl's
non-zero-on-4xx doesn't pollute the script's exit chain.
Header comment block fully rewritten with 4 failure-mode runbooks
(A: 422 dup-base, B: token rotated, C: PR deleted, D: filter
mis-fire) per PR #66/#78's pattern.
Refs: #65, #74, #196, PR #66 + #78 (canonical reference)
Closes #74
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The two API probes used the unsafe shape rejected by
lint-curl-status-capture.yml (per feedback_curl_status_capture_pollution):
status=$(curl ... -w '%{http_code}' ... || echo "000")
When curl exits non-zero (transport error, or --fail-with-body on a
4xx/5xx), the `-w` has already written a code; the `|| echo "000"` then
APPENDS another "000", yielding "000000" or "409000" — values that look
plausible at a glance but break every downstream status comparison.
Switch to the canonical safe shape (set +e + tempfile + cat):
set +e
curl ... -w '%{http_code}' >code_file 2>/dev/null
set -e
status=$(cat code_file 2>/dev/null || true)
[ -z "$status" ] && status="000"
Inline comment in both probe steps explains the lint constraint so
the next editor doesn't re-introduce the bad pattern.
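Fleshed out, the safe shape looks roughly like this (function name and placeholder URL are illustrative; note the separate `-o` body tempfile, so only the `-w` code lands in `code_file`):

```shell
# Canonical safe capture: body goes to its own tempfile via -o, so the -w
# code is the ONLY thing written to code_file; the set +e/-e bracket keeps
# curl's non-zero exit on --fail-with-body 4xx/5xx (or transport errors)
# out of the script's exit chain.
probe_status() {
  url="$1"
  body_file=$(mktemp)
  code_file=$(mktemp)
  set +e
  curl --fail-with-body -sS -o "$body_file" -w '%{http_code}' "$url" \
    >"$code_file" 2>/dev/null
  set -e
  status=$(cat "$code_file" 2>/dev/null || true)
  [ -z "$status" ] && status="000"   # transport error: -w wrote nothing
  rm -f "$body_file" "$code_file"
  printf '%s\n' "$status"
}
```

Because the fallback ASSIGNS rather than appends, a 409 stays "409" and a transport failure stays "000", never "409000".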
Refs: #72, lint failure on PR #77 (1/22 red → 22/22 expected green)
Root cause: same as #65/PR-#66 — gh CLI calls Gitea GraphQL
(/api/graphql) which returns HTTP 405. Additionally, gh workflow
run calls /actions/workflows/{id}/dispatches which does not
exist on Gitea 1.22.6 (verified via swagger.v1.json).
Fix:
- Replace gh run list with Gitea REST combined-status endpoint
(GET /repos/{owner}/{repo}/commits/{ref}/status). Combined state
encodes the AND across every check context — simpler than the
per-workflow loop and immune to workflow-name collisions.
- Replace gh pr create / merge --auto with direct curl calls to
POST /pulls and POST /pulls/{N}/merge with merge_when_checks_succeed.
- Remove the post-merge polling tail entirely. The GitHub-era
GITHUB_TOKEN no-recursion rule does not apply on Gitea Actions
(verified empirically: PR #66 merge fired downstream pushes
naturally). Even if we wanted to dispatch, Gitea has no
workflow_dispatch REST endpoint.
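The gate on the combined state can be sketched as follows (the extraction helper and usage lines are illustrative; jq would do the same job where available):

```shell
# Illustrative gate on the combined-status endpoint: the top-level "state"
# field already ANDs every check context. Crude grep/sed extraction here;
# jq -r .state is the cleaner equivalent.
combined_state_green() {
  state=$(printf '%s' "$1" \
    | grep -o '"state": *"[a-z]*"' | head -n1 \
    | sed 's/.*"\([a-z]*\)"$/\1/')
  [ "$state" = "success" ]
}

# Usage sketch (hypothetical variables):
#   body=$(curl ... "$GITEA_URL/api/v1/repos/$OWNER/$REPO/commits/$SHA/status")
#   combined_state_green "$body" || { echo "gates not green" >&2; exit 1; }
```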
Critical constraint: main has enable_push: false with no whitelist;
direct push is impossible for any persona. PR-mediated merge is the
only path. main has required_approvals: 1 — auto-merge waits for
Hongming's approval before landing, preserving the
feedback_prod_apply_needs_hongming_chat_go contract.
Identity: AUTO_SYNC_TOKEN (devops-engineer persona). Not founder PAT.
Per feedback_per_agent_gitea_identity_default. Same persona used by
auto-sync (PR #66) — keeps identity model coherent.
Header comment block fully rewritten with 4 failure-mode runbooks
(A: gates not green, B: PR-create non-201, C: merge schedule fails,
D: token rotated/scope wrong) per PR #66's pattern.
Refs: #65, #73, #195, PR #66 (canonical reference)
Closes #73
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a 6h-cron synthetic check that fires the auth surface used by
auto-sync-main-to-staging.yml (PR #66) and emits a red workflow
status when AUTO_SYNC_TOKEN has drifted out of validity. Closes
hostile-self-review weakest-spot #3 from PR #66 (token-rotation
detection latency).
Read-only verification — no writes, no synthetic merge commits, no
canary branch noise. Three probes:
1. GET /api/v1/user → token authenticates as devops-engineer
2. GET /api/v1/repos/molecule-ai/molecule-core → read:repository scope
3. git ls-remote refs/heads/staging → exact HTTPS auth path used by
actions/checkout in the real auto-sync workflow
Hard-fail on missing AUTO_SYNC_TOKEN secret on both schedule and
workflow_dispatch — per feedback_schedule_vs_dispatch_secrets_hardening,
a silent soft-skip would make the canary itself drift-invisible (the
sweep-cf-orphans #2088 lesson). Operator runbook in workflow header.
Token reuse: same AUTO_SYNC_TOKEN as the workflow under monitor; no
new credential introduced. Read-only paths only.
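A sketch of the three probes (the wrapper function and the `oauth2:` basic-auth username in the ls-remote URL are assumptions; the endpoints are as listed above):

```shell
# Read-only canary: authenticates, checks repo read scope, and exercises the
# HTTPS auth path actions/checkout uses. The hard-fail on a missing secret
# is the load-bearing part: no silent soft-skip.
canary_probe() {
  if [ -z "${AUTO_SYNC_TOKEN:-}" ]; then
    echo "FATAL: AUTO_SYNC_TOKEN secret missing" >&2
    return 1
  fi
  curl --fail-with-body -sS -H "Authorization: token ${AUTO_SYNC_TOKEN}" \
    "${GITEA_URL}/api/v1/user" >/dev/null                             # probe 1
  curl --fail-with-body -sS -H "Authorization: token ${AUTO_SYNC_TOKEN}" \
    "${GITEA_URL}/api/v1/repos/molecule-ai/molecule-core" >/dev/null  # probe 2
  git ls-remote \
    "https://oauth2:${AUTO_SYNC_TOKEN}@${GITEA_HOST}/molecule-ai/molecule-core.git" \
    refs/heads/staging >/dev/null                                     # probe 3
}
```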
Refs: #72, hostile-self-review #66
Stage 3 of #61 (final stage). Replaces the 5s setInterval poll with:
1. Initial bootstrap on mount + on filter-change + on workspaceId-
change (preserved from existing useEffect on loadActivities).
2. Manual Refresh button (preserved — still triggers loadActivities).
3. useSocketEvent subscription to ACTIVITY_LOGGED — every event
for THIS workspace prepends to the list, gated on the user's
autoRefresh toggle and current filter selection.
No interval poll. Steady-state HTTP traffic from this tab drops from
12 req/min (5s × 1 active workspace) to 0 outside of bootstraps and
manual refreshes. Live update latency drops from up to 5s to ~10ms.
The autoRefresh ("Live" / "Paused") toggle now gates LIVE updates
instead of polling cadence — semantically the same (paused = the list
stays frozen), but simpler to implement.
The filter selection is honoured by the WS handler so a user
filtering to "Tasks" doesn't see live a2a_send rows trickle in. Same
shape the server-side `?type=<filter>` enforces on the bootstrap.
Test changes:
- 27 existing tests pass unchanged (filter / autoRefresh /
Refresh / loading / error / empty / count / row-content all
preserved)
- 7 new WS-subscription tests:
- WS push for matching workspace prepends with NO HTTP call
- WS push for different workspace ignored
- WS push respects active filter (non-matching ignored)
- WS push respects active filter (matching renders)
- WS push while autoRefresh paused ignored
- WS push for already-in-list row deduped (no double-render)
- NO 5s interval polling after mount
Mutation-tested:
- drop workspace_id filter → "different workspace" test fails
- drop autoRefresh gate → "paused" test fails
- drop filter gate → "non-matching activity_type" test fails
- drop dedup-by-id → "already in list deduped" test fails
Full canvas suite: 1396 passing, 0 failing. tsc clean.
No API or schema change. /workspaces/:id/activity HTTP endpoint
stays — used for bootstrap + manual refresh + filter-change reload.
ACTIVITY_LOGGED event shape unchanged.
Hostile self-review (three weakest spots):
1. Server-side activity_logs row UPDATES (status flips, etc.) are
not reflected post-#61 — the dedup-by-id check skips a re-fired
ACTIVITY_LOGGED for an existing row. Acceptable: activity_logs
is append-only by design (audit trail); status updates surface
as new task_update rows, not as in-place mutations. If a future
server change adds in-place updates, fire ACTIVITY_UPDATED as a
distinct event so this dedup logic stays simple.
2. WS handler is recreated on every render (filter / autoRefresh /
workspaceId state changes). useSocketEvent's ref-based pattern
keeps the bus subscription stable, but the handler closure
re-captures each render. Side effect: fine — handler call cost
is negligible.
3. The "error" filter matches activity_type === "error" (mirrors
server semantics). It does NOT match status === "error" rows
of other activity types — same as the polling version. Worth
re-evaluating in a separate PR if users expect the broader
semantic.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stage 2 of #61. Replaces the 60s setInterval poll that fanned out
across every visible workspace fetching `?type=delegation&limit=500`
with:
1. One bootstrap fan-out on mount (or on visible-ID-set change),
same shape as before — preserves the 60-min look-back history.
2. useSocketEvent subscription to ACTIVITY_LOGGED — every event
with activity_type=delegation + method=delegate from a visible
workspace appends to a local rolling buffer, edges are re-derived
via the existing buildA2AEdges helper.
3. showA2AEdges toggle off: clears edges + buffer.
No interval poll. The visibleIdsKey selector gate that fixed the
2026-05-04 render-loop incident is preserved — peer-discovery /
status-flip writes still don't trigger a wasteful re-bootstrap.
Steady-state HTTP traffic from this overlay drops from N req/min
(N visible workspaces × 1 cycle/min) to 0 outside of mount + visible-
ID-set-change bootstraps. Live update latency drops from up to 60s
to ~10ms.
Bootstrap race-aware: any WS arrivals that landed in the buffer
during the fetch await are preserved by id-dedup-with-fetched-first
ordering. No row is double-counted; no row is lost during in-flight
updates.
Test changes:
- 27 existing tests pass unchanged (buildA2AEdges purity preserved,
component visibility/visibleIdsKey/error-swallow behaviour
preserved).
- 6 new WS-subscription tests:
- NO 60s polling after bootstrap (clock advance fires nothing)
- WS push for delegation updates edges with NO HTTP call
- WS push for non-delegation activity_type ignored
- WS push for delegate_result ignored (mirrors buildA2AEdges
method filter)
- WS push from hidden workspace ignored
- WS push while showA2AEdges=false ignored
Mutation-tested:
- drop activity_type filter → "non-delegation" test fails
- drop method===delegate filter → "delegate_result" test fails
- drop visible-ws membership filter → "hidden workspace" test fails
Full canvas suite: 1395 passing, 0 failing. tsc clean.
No API or schema change. ACTIVITY_LOGGED event shape unchanged.
The /workspaces/:id/activity HTTP endpoint stays — used for bootstrap.
Hostile self-review (three weakest spots):
1. Bootstrap fetches up to 500 rows × N workspaces. Worst-case
buffer ~3000 entries before window-prune. Acceptable: window-
prune runs on every recomputeAndPush, buildA2AEdges aggregates
to at most N² edges. Real-world usage stays well under both.
2. WS handler re-arms on every bootstrap dependency change
(visibleIds change). useSocketEvent's ref-based pattern means
the bus subscription stays stable across renders, but the
handler closure re-captures bootstrap each time. Side effect:
fine — handler invocation just calls recomputeAndPush which is
idempotent.
3. delegate_result rows arriving over WS are silently dropped.
Acceptable: the existing buildA2AEdges already filters them out
at aggregation time (avoids double-counting); pre-filtering at
the WS handler is the correct mirror — keeps the bus path and
the bootstrap path consistent.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
OSS contributors who clone molecule-core and `go run ./workspace-server/cmd/server`
now get a working end-to-end provision without authenticating to GHCR or AWS ECR.
Pre-fix: with MOLECULE_IMAGE_REGISTRY unset, the provisioner attempted to pull
ghcr.io/molecule-ai/workspace-template-<runtime>:latest, which has been
returning 403 since the 2026-05-06 GitHub-org suspension.
Post-fix: when MOLECULE_IMAGE_REGISTRY is unset, the provisioner switches to
local-build mode — looks up the workspace-template-<runtime> repo's HEAD sha
on Gitea via a single API call, shallow-clones into ~/.cache/molecule/, and
runs `docker build --platform=linux/amd64`. SHA-pinned cache key skips the
clone+build entirely on subsequent provisions.
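The actual pipeline is Go (localbuild.go); as a shell-level sketch of the SHA-pinned skip logic (the tag scheme and function names are illustrative):

```shell
# Illustrative mirror of the local-build cache decision: tag the image with
# the template repo's HEAD sha; if that tag already exists locally, skip the
# clone + build entirely.
local_image_tag() {
  printf 'workspace-template-%s:%.12s' "$1" "$2"   # runtime, HEAD sha
}

ensure_local_image() {
  runtime="$1"; head_sha="$2"
  tag=$(local_image_tag "$runtime" "$head_sha")
  if docker image inspect "$tag" >/dev/null 2>&1; then
    return 0                                # cache hit: skip clone + build
  fi
  git clone --depth 1 "$TEMPLATE_REPO_URL" "$HOME/.cache/molecule/$runtime"
  docker build --platform=linux/amd64 -t "$tag" "$HOME/.cache/molecule/$runtime"
}
```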
Production tenants are unaffected: every prod tenant sets the var to its
private ECR mirror, so the SaaS pull path is byte-for-byte identical.
SSOT for mode detection lives in Resolve() (registry_mode.go), returning
a discriminated RegistrySource{Mode, Prefix} so call sites that branch on
mode get compile-time checking instead of a string-equality footgun.
Coverage:
* registry_mode.go — new SSOT (Resolve, RegistryMode, IsKnownRuntime)
* registry_mode_test.go — 8 tests pinning mode-decision contract
* localbuild.go — clone+build pipeline (570 LOC, fully unit-tested)
* localbuild_test.go — 22 tests covering happy/sad paths, fail-closed
* provisioner.go — Start() inserts ensureLocalImageHook in local mode
* docs/adr/ADR-002 — design rationale + alternatives + security review
* docs/development/local-development.md — local-build flow + env overrides
Security:
* Allowlist-only runtime names (knownRuntimes) gate the clone path.
* Repo prefix hardcoded to git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-;
forks via opt-in MOLECULE_LOCAL_TEMPLATE_REPO_PREFIX.
* MOLECULE_GITEA_TOKEN masked in every log line via maskTokenInURL/maskTokenInString.
* Fail-closed: Gitea unreachable / runtime not mirrored → clear error, never
silently fall back to GHCR/ECR.
* docker build invocation passes no --build-arg from external input.
* HTTP body cap 64KB on Gitea API responses (defence vs malicious upstream).
Closes #63 / Task #194.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stage 1 of #61. Replaces the 30s setInterval poll with:
1. One bootstrap fan-out on mount (cap of 3 retained from the
2026-05-04 fix), gives the initial recent-comms window without
waiting for live events.
2. useSocketEvent subscription to ACTIVITY_LOGGED — every event
with a comm-overlay-relevant activity_type from a visible online
workspace prepends to the rendered list.
3. Re-bootstrap on visibility-toggle re-open so the snapshot is
fresh after a long collapsed period.
No interval poll. Inherits the singleton ReconnectingSocket's
reconnect / backoff / health-check guarantees via useSocketEvent.
Steady-state HTTP traffic from this overlay drops from ~6 req/min
(3 ws × 2 cycles/min) to 0 outside of mount/visibility-toggle
bootstraps. Live updates arrive within ~10ms of the server insert
instead of after up to 30s.
Test changes:
- Bootstrap fan-out cap of 3 — kept (was the cadence test's role
pre-#61)
- 30s cadence test — replaced with "no interval polling" test
that pins the absence of any cadence-driven HTTP after bootstrap
- Visibility gate test — extended to verify both: no fetches while
closed, AND re-bootstrap on re-open
- WS subscription tests (new):
- WS push extends rendered list with NO HTTP call
- WS push for offline workspace ignored
- WS push for non-comm activity_type ignored
- WS push while collapsed ignored
- non-ACTIVITY_LOGGED events ignored
Mutation-tested:
- drop visibility gate → visibility test fails
- drop activity_type filter → "non-comm activity_type" test fails
- drop workspace online-set filter → "offline workspace" test fails
Full canvas suite: 1393 passing, 0 failing. tsc clean.
No API or schema change. ACTIVITY_LOGGED event shape pinned by
existing socket-events tests.
Hostile self-review (three weakest spots):
1. Sustained WS outage shows stale comms until visibility-toggle
re-bootstrap. Acceptable: the singleton socket already auto-
reconnects and the comm overlay isn't a critical-path surface.
2. Bootstrap on visibility-toggle costs another 3 HTTP calls each
re-open. Acceptable: visibility-toggle is a deliberate user
action, not a tight loop.
3. The WS handler reads the latest `nodes` via nodesRef rather
than re-subscribing on node changes. By design — the bus
listener stays bound for the component lifetime to avoid the
"tear-down storm" pattern A2ATopologyOverlay's comment warns
about (ref-based current-state lookup, stable subscription).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause of `Auto-sync main → staging / sync-staging (push)`
failing every push to main since the GitHub→Gitea migration:
The workflow assumed a GitHub `merge_queue` ruleset on staging
(blocking direct push) and used `gh pr create` + `gh pr merge
--auto` to land sync via the queue. On Gitea this fails at the
`gh pr create` step with `HTTP 405 Method Not Allowed
(https://git.moleculesai.app/api/graphql)` — Gitea exposes no
GraphQL endpoint, and the GitHub-CLI cannot ship PRs against
Gitea.
Verified failure mode in run 1117/job 0 (token logs at
/tmp/log2.txt, run target /molecule-ai/molecule-core/actions/
runs/1117/jobs/0). The merge step succeeded and pushed
auto-sync/main-1e1f4d63; the PR step failed with the 405. So
every main push left an orphan auto-sync/* branch and a red CI
status, with no PR to land it.
Fix: the staging branch protection on Gitea
(`enable_push: true`, `push_whitelist_usernames:
[devops-engineer]`) already permits direct push from the
devops-engineer persona. Drop the entire merge-queue PR
architecture and replace with:
1. Checkout staging with secrets.AUTO_SYNC_TOKEN
(devops-engineer persona token, NOT founder PAT —
`feedback_per_agent_gitea_identity_default`).
2. `git fetch origin main` + ff-merge or no-ff merge.
3. `git push origin staging` directly.
The AUTO_SYNC_TOKEN repo secret already exists (created
2026-05-07 14:00 alongside the staging push_whitelist update).
Workflow name + job name unchanged → required-check name
`Auto-sync main → staging / sync-staging (push)` keeps the
same context, no branch-protection edits needed.
Rejected alternatives (documented in workflow header):
- Reuse PR architecture via Gitea REST: ~80 LOC of API
plumbing for no benefit; direct push works.
- GH_HOST=git.moleculesai.app: still calls /api/graphql,
same 405; doesn't fix the root issue.
- Custom JS action: external dep for a 5-line `git push`.
Header comment in the workflow now documents:
- What this workflow does (SSOT for staging advancing).
- Why direct push (GitHub merge_queue → Gitea push_whitelist).
- Identity and token (anti-bot-ring per saved memory).
- Failure modes A–D with operator runbook for each.
- Loop safety (push to staging doesn't fire push:main → no
recursion).
Verification plan: this fix-PR's merge to main is itself the
trigger; watch the workflow run on the merge commit and on
one follow-up trigger commit, expect both green.
Refs: failing run https://git.moleculesai.app/molecule-ai/
molecule-core/actions/runs/1117/jobs/0
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous comment said "all share one IP bucket" — accurate before
the keyFor refactor, slightly stale after it. The dev-mode rationale
(bucket fills fast, blanks the page on a single-user dev box) is
unchanged; only the bucket-key flavour text needed updating.
Doc-only follow-up from #60's hostile self-review #3. No behavior change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #59.
Symptom: /workspaces/:id/activity returns 429 with rate-limit-exceeded
on hongming.moleculesai.app whenever multiple workspaces are visible
in the canvas. Single-tab, single-user, well within the documented
600 req/min budget — but every request collapsed into one bucket.
Root cause: workspace-server's RateLimiter keyed buckets on
c.ClientIP(). After issue #179 turned off proxy-header trust
(SetTrustedProxies(nil), correctly closing the XFF spoofing hole),
c.ClientIP() returns the TCP RemoteAddr — which in production is the
upstream proxy (Caddy on per-tenant EC2; CP/Vercel on the SaaS plane).
Every browser tab + every canvas consumer + every poll loop for every
tenant collapsed into one bucket.
Fix: bucket key derivation moves into a single keyFor helper that
mirrors the SSOT pattern of:
- molecule-controlplane/internal/middleware/ratelimit.go (org > user > IP)
- this package's own MCPRateLimiter (token-hash via tokenKey)
Priority: X-Molecule-Org-Id header → SHA-256(Authorization Bearer)
→ ClientIP. Token values are kept hashed in the bucket map so the
in-memory state can't become a token dump.
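The real helper is Go; as a language-neutral sketch of the priority order and the hashed-token guarantee (purely illustrative; assumes GNU coreutils `sha256sum`):

```shell
# Illustrative mirror of keyFor's contract: org id wins, then a SHA-256 of
# the bearer token, then client IP; the raw token never appears in the key.
bucket_key() {
  org_id="$1"; bearer="$2"; client_ip="$3"
  if [ -n "$org_id" ]; then
    printf 'org:%s' "$org_id"
  elif [ -n "$bearer" ]; then
    printf 'tok:%.16s' "$(printf '%s' "$bearer" | sha256sum | cut -d' ' -f1)"
  else
    printf 'ip:%s' "$client_ip"
  fi
}
```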
Tests:
- TestKeyFor_OrgIdHeaderTrumpsBearerAndIP — priority order
- TestKeyFor_BearerTokenWhenNoOrgId — middle tier + raw-token leak pin
- TestKeyFor_IPFallbackWhenNoOrgIdNoBearer — anon probe path
- TestRateLimit_TwoOrgsSameIP_IndependentBuckets — load-bearing
regression (issue #59) — two tenants behind same upstream proxy
must not share a bucket
- TestRateLimit_TwoTokensSameIP_IndependentBuckets — same shape
for the per-tenant Caddy box
- TestRateLimit_SameOrgDifferentTokens_SharedBucket — counter-pin:
rotating tokens within one org must NOT bypass the org's quota
- TestRateLimit_Middleware_RoutesThroughKeyFor — AST gate, mirrors
the SSOT gates established in #36/#10/#12
Mutation-tested:
- strip org-id branch in keyFor → 3 tests fail
- strip bearer-token branch → 2 tests fail
- reintroduce direct c.ClientIP() in Middleware → 3 tests fail
(including the AST gate)
Existing tests pass unchanged: dev-mode fail-open, X-RateLimit-*
headers (#105), Retry-After on 429 (#105), XFF anti-spoofing (#179).
No schema/API change. 429 response body and X-RateLimit-* headers
unchanged. RATE_LIMIT env var semantics unchanged.
Hostile self-review (three weakest spots) is in the issue body:
1. one-shot Docker-inspect cost is now bucket-key derivation cost
(string compare + SHA-256 of bearer); single-digit microseconds.
2. X-Molecule-Org-Id is unvalidated at the rate-limiter layer —
spoofing is closed by tenant SG + CP front; documented in
keyFor's docstring with the conditions under which to revisit.
3. cpProv-style SaaS surface is out of scope; CP's own limiter
handles that hop.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Audit finding: every workflow that emits a required-status-check name
on molecule-core's branch protection (apply.sh's STAGING_CHECKS +
MAIN_CHECKS) ALREADY uses the safe always-runs-with-conditional-steps
shape — Platform/Canvas/Python/Shellcheck in ci.yml, Canvas tabs E2E
in e2e-staging-canvas.yml, E2E API Smoke in e2e-api.yml, PR-built
wheel in runtime-prbuild-compat.yml, the codeql Analyze matrix, and
the always-on Secret scan + Detect changes. No production drift to
fix today.
Adds a regression-guard so the next path-filter / matrix refactor /
workflow rename can't silently re-introduce the bug shape called out
in saved memory feedback_branch_protection_check_name_parity:
"Path filters … silently break branch protection because no job
emits the protected sentinel status when path-filter returns false."
New tools:
- tools/branch-protection/check_name_parity.sh — extracts every
required check name from apply.sh's heredocs, then for each name
classifies the owning workflow as safe (no top-level paths:) /
safe (per-step if-gates without top-level paths:) / unsafe
(top-level paths: without per-step if-gates) / unsafe-mix
(top-level paths: WITH per-step if-gates — the workflow may still
skip entirely on path exclusion, leaving the gates dormant) /
missing (no emitter at all). Special-cases codeql.yml's matrix-
expanded `Analyze (${{ matrix.language }})`.
- tools/branch-protection/test_check_name_parity.sh — 6 unit tests
covering each classification: safe, unsafe-path-filter, missing,
safe-with-per-step-gates, unsafe-mix, matrix-expansion. Each test
builds a synthetic apply.sh + workflow file in a tmpdir, invokes
the script, and asserts on exit code + stderr substring. Per
feedback_assert_exact_not_substring the assertions pin specific
classifications, not just non-zero exit.
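The four-way classification (minus the `missing` case, which needs no flags) reduces to a small decision table; this is a sketch only, and the real script derives the two flags from apply.sh plus the workflow YAML:

```shell
# Decision table of the classifier described above.
# args: 1 if the workflow has a top-level paths: filter, 1 if the required
# job's steps carry if: gates.
classify_emitter() {
  has_top_level_paths="$1"; has_per_step_gates="$2"
  if [ "$has_top_level_paths" = "0" ]; then
    echo "safe"         # workflow always runs, gated steps or not
  elif [ "$has_per_step_gates" = "0" ]; then
    echo "unsafe"       # a path filter can skip the sentinel entirely
  else
    echo "unsafe-mix"   # gates exist but go dormant when paths: skips
  fi
}
```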
Wired into branch-protection-drift.yml so every PR touching
.github/workflows/** runs the parity check; the existing daily
schedule covers between-PR drift. The check is cheap (~1s) and runs
without the admin token — only reads files in the checkout. Self-
test step runs the unit tests on every invocation, so a regression
in the script can't false-pass on production.
Per BSD-vs-GNU portability hygiene: heredoc-marker extraction stays in
plain awk + sed (no gawk-only `match()` array form), and the grep regex
avoids a `^`-anchored `if:` pattern because real workflows write per-step
gates as ` - if:` — leading spaces and the `-` step marker sit before
`if:`, so the original anchor missed every workflow's per-step gates.
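A synthetic demonstration of the anchoring pitfall (the fixture line is made up; the two patterns are the point):

```shell
# Per-step gates are indented and may carry the "- " list marker, so a bare
# ^if: anchor matches nothing.
wf=$(mktemp)
printf "      - if: steps.changed.outputs.hit == 'true'\n" > "$wf"
grep -q '^if:' "$wf" || echo "bare anchor: no match"
grep -qE '^[[:space:]]*(- )?if:' "$wf" && echo "indent-tolerant anchor: match"
rm -f "$wf"
```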
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the post-Task-#176 self-review gap: the bearer-token + tenant-
slug header construction was duplicated across 7 raw-fetch callsites
in the canvas (lib/api.ts request(), uploads.ts × 2, and 5 Attachment*
components). Each callsite read NEXT_PUBLIC_ADMIN_TOKEN, attached
Authorization: Bearer manually, computed getTenantSlug locally
(three of them inline-redefined it from /lib/tenant!), and attached
X-Molecule-Org-Slug. A new poller / raw-fetch added without going
through this exact recipe silently 401s against workspace-server when
ADMIN_TOKEN is set on the server side — the bug shape called out in
the original task.
Adds platformAuthHeaders() to lib/api.ts as the single source of truth
and routes all 7 raw-fetch callsites through it. Removes 4 duplicate
local getTenantSlug() copies (Image, Video, Audio, PDF, TextPreview)
that were inline-redefining what /lib/tenant.ts already exports.
Also preserves the AttachmentTextPreview off-platform branch — when
isPlatformAttachment() is false, headers is {} (no bearer leakage to
third-party URLs).
Tests:
- 6 unit tests in platform-auth-headers.test.ts covering: empty,
bearer-only, slug-only, both, empty-string-as-unset, fresh-object-
per-call. Mutation-tested: removing the bearer attach inside the
helper fails 2 of 6 tests immediately.
- All 1389 existing canvas vitest tests pass unchanged.
- npx tsc --noEmit clean.
- npm run build succeeds (canvas Next.js build).
Per feedback_assert_exact_not_substring: tests use exact toEqual()
equality, not substring/contains, so an extra-header bug also fails
the assertion. Per feedback_oss_design_philosophy: this is the
"plugin/abstract/modular/SSOT" move applied to the auth-header
construction surface — one helper, six call sites, no duplication.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Why
---
PR #35 marked `continue-on-error: true` at the JOB level (correct YAML),
but Gitea Actions 1.22.6 does NOT propagate job-level continue-on-error
to the commit-status API — every matrix leg still posts `failure`. That
keeps OVERALL=failure on every push to main + staging and blocks the
auto-promote signal even when every other gate is green.
Worse: the underlying CodeQL run never actually worked on Gitea. The
github/codeql-action/init@v4 step calls api.github.com bundle endpoints
(CLI download + query packs + telemetry) that Gitea does NOT proxy.
Confirmed via live-tested run 1d/3101 on operator host:
2026-05-07T20:55:17 ::group::Run Initialize CodeQL
with: languages: ${{ matrix.language }}
queries: security-extended
2026-05-07T20:55:36 ::error::404 page not found
2026-05-07T20:55:50 Failure - Main Initialize CodeQL
2026-05-07T20:55:51 skipping Perform CodeQL Analysis (main skipped)
2026-05-07T20:55:51 :⚠️:No files were found at sarif-results/go/
The SARIF artifact upload was already a no-op (warning above) — the
analyze step never wrote anything because init failed. So nothing of
value is being lost by stubbing this out.
What
----
- Convert the workflow to a single-step stub that emits success per
matrix language (go, javascript-typescript, python).
- Keep workflow `name: CodeQL` exactly (auto-promote-staging.yml
line 67 keys on it as a workflow_run gate).
- Keep job name template `Analyze (${{ matrix.language }})` and the
3-leg matrix exactly (commit-status context names + branch
protection + #144 required-check-name parity).
- Keep all four triggers (push / pull_request / merge_group /
schedule) so merge_group required-checks parity holds.
- Drop the codeql-action steps, the Autobuild step, the SARIF parse
step, and the upload-artifact step — all four of those are now
dead code (init can never succeed against Gitea's API surface).
Policy
------
Per Hongming decision 2026-05-07 (#156): CodeQL is ADVISORY, not
blocking, until a Gitea-compatible SAST pipeline lands. The header
of the new workflow file documents this decision + lists the three
re-enable options (self-hosted Semgrep, Sonatype, GitHub mirror)
plus the compensating controls in place (secret-scan, block-internal-
paths, lint-curl-status-capture, branch-protection-drift).
Closes #156. Touches #142 (no capital-M Molecule-AI refs in this
file — already lowercase per e01077be).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
harness-replays.yml builds tenant-alpha + tenant-beta via tests/harness/
compose.yml using workspace-server/Dockerfile.tenant. Post-#173, that
Dockerfile expects .tenant-bundle-deps/{workspace-configs-templates,
org-templates,plugins} pre-cloned at the build context root. Sister
PR #38 added the pre-clone step to publish-workspace-server-image.yml
but missed harness-replays.yml.
Symptoms:
- main run #892 (2026-05-07T20:28:53Z): COPY
.tenant-bundle-deps/plugins -> failed to calculate checksum ...
not found.
- staging run #964 (2026-05-07T20:41:52Z): hits the OLD in-image
clone path (staging hasn't picked up the Dockerfile.tenant
refactor yet via auto-sync) and fails on
'fatal: could not read Username for https://git.moleculesai.app'
when cloning the first private workspace-template-* repo.
Fix: add the same Pre-clone step to harness-replays.yml,
mirroring publish-workspace-server-image.yml. Uses AUTO_SYNC_TOKEN
(devops-engineer persona PAT) per
feedback_per_agent_gitea_identity_default.
Once auto-sync main->staging unblocks (sister agent fixing the
7-file conflict in flight), staging will inherit both this workflow
fix AND the Dockerfile.tenant refactor atomically.
Refs: #168, #173
Refs Task #165 (Class D AUTO_SYNC_TOKEN plumbing).
main and staging diverged after the 2026-05-06 GitHub-org suspension
because Class D / Class G / feature work landed on staging while
unrelated CI fixes (#34-47, ECR auth-inline, buildx→docker, pre-clone
manifest deps) landed straight on main. Both branches edited the
same workflow files, so every push to main triggered an Auto-sync
run that aborted at `git merge --no-ff origin/main` with 7 content
conflicts:
- .github/workflows/canary-verify.yml (URL: github.com → Gitea)
- .github/workflows/ci.yml (3 URL refs)
- .github/workflows/publish-runtime.yml (cascade: HTTP repo-dispatch
→ Gitea push)
- .github/workflows/publish-workspace-server-image.yml
(drop AWS-action steps;
ECR auth is inline)
- .github/workflows/retarget-main-to-staging.yml (URL)
- manifest.json (lowercase org slug + add
mock-bigorg from main)
- scripts/clone-manifest.sh (keep main's MOLECULE_GITEA_TOKEN
auth path + drop awk-tolower
since manifest is now lowercase)
Resolution: union — staging's post-suspension Gitea/ECR migrations win
on URL/policy edits; main's additive work (mock-bigorg manifest entry,
inline ECR auth, MOLECULE_GITEA_TOKEN basic-auth) is preserved on top.
After this lands, staging is a strict superset of main, so the next
auto-sync run on a push to main will be a clean fast-forward / no-op.
The auto-sync workflow on main also picks up staging's AUTO_SYNC_TOKEN
swap (Class D #26) for free, fixing the latent layer-2 push-auth issue.
Verified locally:
- bash -n scripts/clone-manifest.sh
- python -c 'yaml.safe_load(...)' on each touched workflow
- python -c 'json.load(open(manifest.json))' (21 plugins, 9 templates,
7 org_templates)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Run #1010 (post-#46) succeeded all the way to push but failed with
"repository molecule-ai/platform does not exist" — the platform image
ECR repo had never been created (only platform-tenant existed).
Created the repo via:
aws ecr create-repository --region us-east-2 \
--repository-name molecule-ai/platform \
--image-scanning-configuration scanOnPush=true
This is a one-line workflow comment to satisfy the path-filter and
re-run the publish workflow against the now-existing repo. Closes #173
properly this time — pre-clone + inline ECR auth + ECR repo all in
place.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI run #987 (post-#45) showed `docker push` from shell still hits
"no basic auth credentials" — `aws-actions/amazon-ecr-login@v2`
writes auth to a step-scoped DOCKER_CONFIG that doesn't carry across
to the next shell step on Gitea Actions.
Fix: drop both `aws-actions/configure-aws-credentials@v4` and
`aws-actions/amazon-ecr-login@v2`. Run `aws ecr get-login-password |
docker login` inline in the same shell step as `docker build` +
`docker push`. AWS creds come from secrets via env vars, ECR token
is fresh per-step (12h validity is plenty), config.json lives in the
same shell process — auth state is guaranteed.
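The single-shell-step shape can be sketched as follows; the function name, registry host, and image tag are placeholders for illustration, not the workflow's actual values:

```shell
# Hypothetical sketch of the inline-auth step: login, build, and push run in
# ONE shell process, so the config.json written by `docker login` is
# guaranteed visible to the subsequent push.
ecr_build_push() {
  local registry="$1" image="$2" region="${3:-us-east-2}"
  aws ecr get-login-password --region "$region" |
    docker login --username AWS --password-stdin "$registry" || return 1
  docker build -t "${registry}/${image}" . || return 1
  docker push "${registry}/${image}"
}
```

In the workflow this would be the body of a single `run:` block, with AWS credentials supplied as step-level env vars from secrets.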
This is the operator-host manual approach mapped 1:1 into CI.
runner-base image already has aws-cli + docker (verified locally).
Closes #173 (fifth piece — and final, this matches the manual flow
exactly).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI run #946 (post-#43) confirmed `driver: docker` doesn't fix the ECR
push 401 either: buildx CLI inside the runner container talks to the
operator-host docker daemon (mounted socket), but the daemon doesn't
see the runner's ECR auth state, and the runner's buildx CLI doesn't
attach the auth header in a way the daemon accepts.
Drop buildx + build-push-action entirely. Plain `docker build` +
`docker push` from the runner container works because both use the
SAME docker socket + the SAME runner-container config.json (populated
by `aws ecr get-login-password | docker login` from amazon-ecr-login).
Trade-off: lose multi-arch support. We only ship linux/amd64 tenant
images today, so this is fine. If multi-arch becomes a requirement
later, we can revisit (likely with `docker buildx create
--driver=remote` pointing at an external buildkit, but that's
substantial infra work; not worth it for a single-arch shop).
Closes #173 (fourth piece — and hopefully last; this matches the
operator-host manual approach exactly).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Empty-shape commit on a tests/harness/** path to trigger the harness-replays
workflow's path-filter on staging, verifying that:
- PR #40 (Class G #168) migrated all explicit github.com/Molecule-AI URL refs
- PR #42 (Class G #168 followup) migrated the indirect clone-manifest.sh + manifest.json forms
After this run, harness-replays should get past the previously-failing
'fatal: could not read Username for https://github.com' clone-manifest step.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #38 + #41 fixed the Dockerfile-side clone issue. CI run #893 then
revealed two Gitea-Actions-specific issues with the unchanged buildx
config:
1. `failed to push: 401 Unauthorized` to ECR. Root cause: default
buildx driver `docker-container` spawns a buildkit container that
doesn't share the host's `~/.docker/config.json`, so the ECR auth
set up by amazon-ecr-login doesn't reach the push. Fix: pin
`driver: docker` so buildx delegates to the host daemon, which
already has the ECR creds.
2. `dial tcp ...:41939: i/o timeout` on `_apis/artifactcache/cache`.
Root cause: `cache-from/cache-to: type=gha` is GitHub-specific;
Gitea Actions has no compatible artifact-cache backend, so every
cache lookup fails after a 30s timeout. Fix: remove the cache-*
options. Cold-build cost is <10min for 37-repo clone + Go/Node
compile, acceptable. Could revisit with type=registry inline cache
later if rebuilds get painful.
With this + #38/#41, the workflow should run end-to-end on Gitea
Actions: pre-clone -> docker build (host daemon) -> ECR push.
Closes #173 (third and final piece).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Class G #168 PR (#40) caught explicit `github.com/Molecule-AI/<repo>`
URL literals in 23 files but missed two indirect forms:
- `scripts/clone-manifest.sh` lines 50,52 had
`https://github.com/${repo}.git` (the org/repo path is a variable, so the
Class-G regex `github\.com/Molecule-AI/` didn't match).
- `manifest.json` had `"Molecule-AI/<repo>"` (no `github.com` prefix; the
prefix gets prepended by the script).
Together these are what `Dockerfile.tenant`'s stage-3 templates RUN
actually fetches. After PR #40 the harness-replays workflow against
staging still fails with `fatal: could not read Username for
'https://github.com'` because the in-image build is the unfixed shell
loop.
This PR:
- scripts/clone-manifest.sh: replaces both clone URLs with
`https://git.moleculesai.app/${repo}.git`. Anonymous public clones
work for these repos (verified manually).
- manifest.json: lowercases `Molecule-AI/` to `molecule-ai/` to match
Gitea's canonical org slug. Gitea is case-insensitive so both work,
but the lowercase form matches every other URL in the org and is
what main's clone-manifest.sh (PR #38) already standardises on.
This is the minimum-diff staging fix. Sister #173 already shipped a
more sophisticated version on main (with optional MOLECULE_GITEA_TOKEN
auth + per-build pre-clone). When auto-sync resolves the staging-vs-main
conflict, this minimal version gets superseded by the main version
naturally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The first PR (#38) only patched Dockerfile.tenant — but the workflow
also builds the platform image from workspace-server/Dockerfile, which
had the SAME in-image `git clone` stage. Build run #794 caught this:
"process clone-manifest.sh ... exit code 128" on the platform image.
Apply the same pre-clone shape to the platform Dockerfile: drop the
`templates` stage, COPY from .tenant-bundle-deps/ instead. The
workflow's existing "Pre-clone manifest deps" step (added in #38)
already populates .tenant-bundle-deps/ before either build runs, so no
workflow change needed.
Self-review note: the missed-platform-Dockerfile is a Phase 1 quality
miss — I read both files but only registered the tenant one as
in-scope. Saved memory `feedback_orchestrator_must_verify_before_declaring_fixed`
applies: should have grepped the whole workspace-server/ for "templates"
stages before claiming Task #173 done. CI run #794 caught it within
~6 minutes; net cost: one followup commit.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
TestPooledWithEICTunnel_PreservesFnErr (and any sqlmock-using neighbour
test) was at risk of inheriting stale INSERT calls from a previous
test's coalesceRestart goroutine that survived its t.Cleanup boundary.
The production callsite shape is `go h.RestartByID(...)` from
a2a_proxy.go, a2a_proxy_helpers.go and main.go. When that goroutine's
runRestartCycle panics, coalesceRestart's deferred recover swallows it
to keep the platform process alive — but in tests, nothing waits for
the goroutine to fully exit. If it's still draining LogActivity-shaped
work after the test returns, those INSERTs land in the next test's
sqlmock connection as kind=DELEGATION_FAILED /
kind=WORKSPACE_PROVISION_FAILED, surfacing as "INSERT-not-expected".
Fix: introduce drainCoalesceGoroutine(t, wsID, cycle) test helper that
spawns coalesceRestart on a goroutine (matching production) and
registers a t.Cleanup with sync.WaitGroup.Wait so the test can't
declare itself done while a goroutine is still alive.
Convert TestCoalesceRestart_PanicInCycleClearsState to use the helper
(previously it called coalesceRestart synchronously, which never
exercised the production goroutine-survival contract).
Add TestCoalesceRestart_DrainHelperWaitsForGoroutineExit as the
regression guard: cycle blocks 150ms then panics; the test asserts
t.Run elapsed >= 150ms (proving the Wait barrier engaged) AND the
deferred close ran (proving the panic-recovery defer chain executed)
AND state.running was cleared. Verified the assertion is real by
mutation-testing: removing t.Cleanup(wg.Wait) makes this test FAIL
deterministically with elapsed <300µs.
Per saved memory feedback_assert_exact_not_substring: the regression
test asserts an exact-shape contract (elapsed >= blockFor) rather than
a substring-in-output, so it discriminates between "drain works" and
"drain skipped".
Per Phase 3: 10/10 race-detector runs pass for all TestCoalesceRestart_*
tests. Full ./internal/handlers/... suite green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
publish-workspace-server-image.yml could not run on Gitea Actions because
Dockerfile.tenant's stage 3 ran `git clone` against private Gitea repos
from inside the Docker build context, where no auth path exists. Every
workspace-server rebuild required a manual operator-host push.
Move cloning to the trusted CI context (where AUTO_SYNC_TOKEN — the
devops-engineer persona PAT — is naturally available). Dockerfile.tenant
now COPYs from .tenant-bundle-deps/, populated by the workflow's new
"Pre-clone manifest deps" step. The Gitea token never enters the image.
- scripts/clone-manifest.sh: optional MOLECULE_GITEA_TOKEN env embeds
basic-auth in the clone URL; redacted in log output. Anonymous fallback
preserved for future public-repo path.
- .github/workflows/publish-workspace-server-image.yml: new pre-clone
step before docker build; injects AUTO_SYNC_TOKEN. Fail-fast if the
secret is empty.
- workspace-server/Dockerfile.tenant: drop stage 3 (templates), COPY
from .tenant-bundle-deps/ instead. Header documents the prereq.
- .gitignore: ignore /.tenant-bundle-deps/ so a local build can't
accidentally commit cloned repos.
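The optional-token clone URL described above can be sketched like this; the `oauth2` basic-auth username and the variable names are assumptions for illustration, not necessarily the script's actual ones:

```shell
# Sketch of the optional-token clone URL. When MOLECULE_GITEA_TOKEN is set,
# basic-auth is embedded in the URL; otherwise the anonymous form is used
# (the future public-repo fallback).
clone_url() {
  local repo="$1"
  if [ -n "${MOLECULE_GITEA_TOKEN:-}" ]; then
    echo "https://oauth2:${MOLECULE_GITEA_TOKEN}@git.moleculesai.app/${repo}.git"
  else
    echo "https://git.moleculesai.app/${repo}.git"
  fi
}
```

When logging, only the anonymous form should be printed so the token stays redacted.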
Verified locally: clone-manifest.sh with the devops-engineer persona
token cloned all 37 repos (9 ws + 7 org + 21 plugins, 4.9MB after
.git strip).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Same SSOT-divergence shape as #10 / fixed in #12, but on the a2a-proxy
code path. The plugin handler was routed through `provisioner.RunningContainerName`;
a2a-proxy was forwarding optimistically and only catching missing containers
REACTIVELY via `maybeMarkContainerDead` after the network call timed out.
Result on tenants whose agent containers had been recycled (e.g. post-EC2
replace from molecule-controlplane#20): canvas waits 2-30s for the network
forward to fail before getting a 503, and the workspace-server logs only
"ProxyA2A forward error" without the "container is dead" signal.
This PR adds a proactive `Provisioner.IsRunning` check in `proxyA2ARequest`
between `resolveAgentURL` and `dispatchA2A`, gated on the conditions where
we know we're talking to a sibling Docker container we own (`h.provisioner
!= nil` AND `platformInDocker` AND the URL was rewritten to Docker-DNS form).
Three outcomes via the SSOT helper:
(true, nil) → forward as today
(false, nil) → fast-503 with `error="workspace container not running —
restart triggered"`, `restarting=true`, `preflight=true`,
plus the same offline-flip + WORKSPACE_OFFLINE broadcast +
async restart that `maybeMarkContainerDead` produces
(true, err) → fall through to optimistic forward (matches IsRunning's
"fail-soft as alive" contract — flaky daemon must not
trigger a restart cascade)
The `preflight=true` flag in the response distinguishes the proactive
short-circuit from the reactive `maybeMarkContainerDead` path so canvas
or downstream callers can render distinct messages later.
* `internal/handlers/a2a_proxy.go` — preflight call site between
resolveAgentURL and dispatchA2A; gated on `h.provisioner != nil &&
platformInDocker && url == http://<ContainerName(id)>:port`.
* `internal/handlers/a2a_proxy_helpers.go` — `preflightContainerHealth`
helper. Routes through `h.provisioner.IsRunning` (which itself wraps
`RunningContainerName`). Identical offline-flip side-effects as
`maybeMarkContainerDead` for the dead-container case.
* `internal/handlers/a2a_proxy_preflight_test.go` — 4 tests: running →
nil; not-running → structured 503 + sqlmock expectations on the
offline-flip + structure_events insert; transient error → nil
(fail-soft); AST gate pinning the SSOT routing (mirror of #12's gate).
Mutation-tested: removing the `if running { return nil }` guard makes
the production code fail to compile (unused var). A subtler mutation
(replacing the !running branch with `return nil`) would make
TestPreflight_ContainerNotRunning_StructuredFastFail fail at runtime
with sqlmock's "expected DB call did not occur."
Refs: molecule-core#36. Companion to #12 (issue #10).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a 'mock' runtime: virtual workspaces with no container, no EC2,
no LLM. Every A2A reply is synthesised from a small canned-variant
pool ('On it!', 'Got it, on it now.', etc.) deterministically seeded
by (workspace_id, request_id).
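The deterministic pick can be sketched in shell terms; the production code is Go in mock_runtime.go and its exact hash differs, so `cksum` here is purely illustrative:

```shell
# Deterministic variant index: the same (workspace_id, request_id) pair
# always maps to the same slot in the canned-reply pool.
pick_variant() {
  local workspace_id="$1" request_id="$2" pool_size="$3"
  local sum
  sum=$(printf '%s:%s' "$workspace_id" "$request_id" | cksum | cut -d' ' -f1)
  echo $(( sum % pool_size ))
}
```

Stability across replays is the point: a demo re-run against the same workspace and request IDs renders identical canvas bubbles.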
Built for funding-demo "200-workspace mock org" — renders an
enterprise-scale org chart on the canvas (CEO/VPs/Managers/ICs)
without burning real LLM credits or provisioning 200 EC2 instances.
Surfaces:
- workspace-server/internal/handlers/mock_runtime.go: A2A proxy
short-circuit, canned-reply pool, deterministic variant pick.
- workspace-server/internal/handlers/a2a_proxy.go: gate the
short-circuit before resolveAgentURL (mock has no URL).
- workspace-server/internal/handlers/org_import.go: skip Docker
provisioning for mock workspaces, set status='online' directly,
drop the per-sibling 2s pacing for mock children (collapses
a 200-workspace import from ~7min → ~1s).
- workspace-server/internal/handlers/runtime_registry.go: register
'mock' in the runtime allowlist (manifest + fallback set).
- workspace-server/internal/registry/healthsweep.go +
orphan_sweeper.go: skip mock workspaces in container-health and
stale-token sweeps (no container by design).
- workspace-server/internal/handlers/workspace_restart.go: mirror
the 'external' Restart no-op for mock.
- manifest.json: register the new
Molecule-AI/molecule-ai-org-template-mock-bigorg repo.
Tests: 5 new in mock_runtime_test.go covering happy-path, non-mock
regression guard, determinism, IsMockRuntime trim/case, JSON-RPC
id echo. All existing handler + registry tests still pass.
Local-verified: imported the 200-workspace template against a fresh
postgres+redis, confirmed all 200 land in 'online' and stay there
through the 30s health-sweep window, exercised A2A on CEO + VPs +
Managers + ICs and saw the variant pool rotate.
Org template lives at
Molecule-AI/molecule-ai-org-template-mock-bigorg (created today)
and is imported via the existing /org/import flow on the canvas
Template Palette.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Funding-demo Mock #1: when the canvas loads with `?purchase_success=1`,
show a centred success modal in the warm-paper theme. Auto-dismisses
after 5s; Close button + Esc + backdrop click also dismiss; URL params
are stripped on first paint so a refresh after dismiss does not
re-trigger.
Mounted in `app/layout.tsx` (not `app/page.tsx`) so the modal persists
across the canvas page-state transitions (loading → hydrated → error)
without unmounting and losing its open-state.
No real billing logic — the marketplace "Purchase" button on the
landing page redirects here with the flag; this modal is the only
thing the user sees of the "transaction".
Local-verified end-to-end via playwright (5/5 tests pass): redirect
URL shape, modal visibility, URL cleanup, close button, refresh-after-
dismiss behaviour, 5s auto-dismiss.
Pairs with the Purchase button added to landingpage Marketplace
section.
scripts/clone-manifest.sh runs inside the platform Dockerfile build,
so a change to that script needs to retrigger publish. Without it,
the prior fix (clone via Gitea + lowercase org) didn't trigger this
workflow because scripts/ wasn't in the path filter.
Also serves as the file change to satisfy the path filter for THIS
push, retriggering publish-workspace-server-image now.
Post-2026-05-06 GitHub-org suspension: scripts/clone-manifest.sh
was still pointing at https://github.com/${repo}.git, so the
Docker build for workspace-server's platform image fails at:
fatal: could not read Username for 'https://github.com':
No such device or address
with no credentials available in the build container.
Fix: clone from https://git.moleculesai.app/${repo}.git instead.
manifest.json's repo paths still read 'Molecule-AI/...' (the
historic GitHub slug, mixed-case); Gitea lowercases the org
component to 'molecule-ai/...'. Lowercase the org segment on
the fly with awk so we don't need to rewrite every manifest
entry.
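The on-the-fly transform described above has this one-liner shape; the function name is hypothetical:

```shell
# Lowercase only the org segment (before the first '/'), leaving the
# repo segment untouched.
lower_org() {
  printf '%s\n' "$1" | awk -F/ '{ OFS = "/"; $1 = tolower($1); print }'
}
```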
Local verify: bash -n passes, lowercase transform produces correct
Gitea paths, anonymous git clone of one of the manifest plugins
over HTTPS to git.moleculesai.app succeeds.
Class G in the prod-ship CI sweep — same shape as the github.com
ref that Harness Replays hits; this is the second instance found.
Two coupled cleanups for the post-2026-05-06 stack:
============================================
The plugin injected GITHUB_TOKEN/GH_TOKEN via the App's
installation-access flow (~hourly rotation). Per-agent Gitea
identities replaced this approach after the 2026-05-06 suspension —
workspaces now provision with a per-persona Gitea PAT from .env
instead of an App-rotated token. The plugin code itself lived on
github.com/Molecule-AI/molecule-ai-plugin-github-app-auth which is
also unreachable post-suspension; checking it out at CI build time
was already failing.
Removed:
- workspace-server/cmd/server/main.go: githubappauth import + the
`if os.Getenv("GITHUB_APP_ID") != ""` block that called
BuildRegistry. gh-identity remains as the active mutator.
- workspace-server/Dockerfile + Dockerfile.tenant: COPY of the
sibling repo + the `replace github.com/Molecule-AI/molecule-ai-
plugin-github-app-auth => /plugin` directive injection.
- workspace-server/go.mod + go.sum: github-app-auth dep entry
(cleaned up by `go mod tidy`).
- 3 workflows: actions/checkout steps for the sibling plugin repo:
- .github/workflows/codeql.yml (Go matrix path)
- .github/workflows/harness-replays.yml
- .github/workflows/publish-workspace-server-image.yml
Verified `go build ./cmd/server` + `go vet ./...` pass post-removal.
=======================================================
Same workflow used to push to ghcr.io/molecule-ai/platform +
platform-tenant. ghcr.io/molecule-ai is gone post-suspension. The
operator's ECR org (153263036946.dkr.ecr.us-east-2.amazonaws.com/
molecule-ai/) already hosts platform-tenant + workspace-template-*
+ runner-base images and is the post-suspension SSOT for container
images. This PR aligns publish-workspace-server-image with that
stack.
- env.IMAGE_NAME + env.TENANT_IMAGE_NAME repointed to ECR URL.
- docker/login-action swapped for aws-actions/configure-aws-
credentials@v4 + aws-actions/amazon-ecr-login@v2 chain (the
standard ECR auth pattern; uses AWS_ACCESS_KEY_ID/SECRET secrets
bound to the molecule-cp IAM user).
The :staging-<sha> + :staging-latest tag policy is unchanged —
staging-CP's TENANT_IMAGE pin still points at :staging-latest, just
with the new registry prefix.
Refs molecule-core#157, #161; parallel to org-wide CI-green sweep.
Gitea is case-sensitive on owner slugs; canonical is lowercase
`molecule-ai/...`. Mixed-case `Molecule-AI/...` refs fail-at-0s
when the runner tries to resolve the cross-repo workflow / checkout.
Same fix as molecule-controlplane#12. Mechanical case-correction;
no behavior change beyond making CI resolve again.
Refs: internal#46
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two coupled fixes for molecule-core#10 (plugin install 503 vs
status=online split-state):
1. SSOT for "is this workspace's container running" — `findRunningContainer`
in plugins.go used to carry its own copy of `cli.ContainerInspect`, which
collapsed transient daemon errors into the same `""` return as a
genuinely-stopped container. Healthsweep's `Provisioner.IsRunning`
handled the same input correctly (defensive). Promote the inspect logic
to `provisioner.RunningContainerName`, route both consumers through it.
Transient errors get a distinct log line on the plugins side so triage
doesn't confuse a flaky daemon with a stopped container.
2. Runtime-aware Install/Uninstall — `runtime='external'` workspaces have
no local container; push-install via docker exec is meaningless. They
pull plugins via the download endpoint instead (Phase 30.3). Without a
guard they fell through to `findRunningContainer` and 503'd with a
misleading "container not running." Add an early 422 with a hint
pointing at the download endpoint.
The two fixes are independent: (1) preserves correctness when the SSOT
helper is later modified; (2) eliminates the persistent split-state on
the 5 external persona-agent workspaces in this DB (and on tenant
deployments hitting the same shape).
* `internal/provisioner/provisioner.go` — new `RunningContainerName(ctx,
cli, id) (string, error)` with three documented outcomes (running /
stopped / transient). `Provisioner.IsRunning` now wraps it; behavior
preserved.
* `internal/handlers/plugins.go` — `findRunningContainer` shimmed onto
`RunningContainerName`; new `isExternalRuntime(id)` predicate.
* `internal/handlers/plugins_install.go` — Install + Uninstall reject
external runtimes with 422 + hint, before the source-fetch step.
* `internal/handlers/plugins_install_external_test.go` — 5 cases:
external→422, uninstall-external→422, container-backed-falls-through,
no-runtime-lookup-fails-open, lookup-error-fails-open.
* `internal/handlers/plugins_findrunning_ssot_test.go` — two AST gates
pin the SSOT routing so future PRs can't silently re-introduce the
parallel impl. Mutation-tested: reverting either consumer to a direct
`ContainerInspect` makes the gate fail.
Refs: molecule-core#10
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
In dev mode (`MOLECULE_ENV=dev|development`, `ADMIN_TOKEN` unset) the
AdminAuth chain fails open by design so canvas at :3000 can call
workspace-server at :8080 without a bearer token. Combined with the
existing wildcard bind on `:8080`, that exposed unauthenticated
`POST /workspaces` to any same-LAN peer (S-8 in the audit RFC v1).
Couple the bind narrowness to the same signal that drives the auth
fail-open: when `middleware.IsDevModeFailOpen()` returns true, default
the listener to `127.0.0.1`. Production (`ADMIN_TOKEN` set) keeps
binding to all interfaces — its auth chain is doing the work. Operators
who need LAN exposure set `BIND_ADDR=<host>` explicitly.
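The precedence chain can be sketched in shell terms (the actual implementation is Go's `resolveBindHost()`; names here are illustrative):

```shell
# Precedence: explicit BIND_ADDR > dev-mode fail-open loopback > "" (all
# interfaces). An empty result means bind everywhere, as in production.
resolve_bind_host() {
  if [ -n "${BIND_ADDR:-}" ]; then
    echo "$BIND_ADDR"
  elif [ "${DEV_MODE_FAIL_OPEN:-}" = "true" ]; then
    echo "127.0.0.1"
  else
    echo ""
  fi
}
```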
* `cmd/server/main.go` — `resolveBindHost()` precedence: BIND_ADDR
explicit > IsDevModeFailOpen() loopback > "" (all interfaces).
Startup log line now includes the resolved bind + dev-mode-fail-open
state for post-deploy auditing.
* `cmd/server/bind_test.go` — 8 t.Setenv table cases covering
precedence, explicit overrides, dev/prod env words. Mutation-tested:
removing the `IsDevModeFailOpen()` branch makes the dev-mode cases
fail with "" vs "127.0.0.1".
Refs: molecule-core#7
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-06 22:29:24 -07:00
149 changed files with 13197 additions and 2173 deletions
echo " [debug] teams-list body (first 300 chars):" >&2
head -c 300 "$ORG_TEAMS_FILE" >&2; echo >&2
fi
if [ "$HTTP_CODE" != "200" ]; then
echo "::error::GET /orgs/${OWNER}/teams returned HTTP $HTTP_CODE — token likely lacks read:org scope. Add a SOP_TIER_CHECK_TOKEN secret with read:organization scope at the org level."
exit 1
fi
declare -A TEAM_ID
for T in $ELIGIBLE; do
ID=$(jq -r --arg t "$T" '.[] | select(.name==$t) | .id' < "$ORG_TEAMS_FILE" | head -1)
debug "probe: $U in team $T (id=$ID) → HTTP $CODE"
if [ "$CODE" = "200" ] || [ "$CODE" = "204" ]; then
echo "::notice::approver $U is in team $T (eligible for $TIER)"
OK="yes"
break
fi
done
[ -n "$OK" ] && break
done
if [ -z "$OK" ]; then
echo "::error::Tier $TIER requires approval from a non-author member of {$ELIGIBLE}. Got approvers: $APPROVERS — none of them satisfied team membership. Set SOP_DEBUG=1 to see per-probe HTTP codes."
exit 1
fi
echo "::notice::sop-tier-check passed: $TIER, approver in {$ELIGIBLE}"
Automated promotion of \`staging\` (\`${TARGET_SHA:0:8}\`) to \`main\`. All required staging gates green at this SHA: CI, E2E Staging Canvas, E2E API Smoke, CodeQL.
This PR is auto-generated by \`.github/workflows/auto-promote-staging.yml\` whenever every required gate completes green on the same staging SHA. It exists because main's branch protection requires status checks "set by the expected GitHub apps" — direct \`git push\` from a workflow can't satisfy that, only PR merges through the queue can.
Merge queue lands this; no human action needed unless gates fail. Reverse-direction sync (the merge commit on main → staging) is handled by \`auto-sync-main-to-staging.yml\`.
if [ -n "$MERGED" ] && [ "$MERGED" != "null" ]; then
echo "::notice::Promote PR#${PR_NUM} merged at ${MERGED}"
break
fi
if [ "$STATE" = "CLOSED" ]; then
echo "::warning::Promote PR#${PR_NUM} was closed without merging — skipping deploy dispatch."
exit 0
fi
sleep 30
done
if [ -z "$MERGED" ] || [ "$MERGED" = "null" ]; then
echo "::warning::Promote PR#${PR_NUM} didn't merge within 30min — skipping deploy dispatch (manually run \`gh workflow run publish-workspace-server-image.yml --ref main\` once it lands)."
exit 0
fi
# Dispatch publish on main using the App token. App-initiated
# workflow_dispatch DOES propagate the workflow_run cascade,
# unlike GITHUB_TOKEN-initiated dispatch.
# publish completes → canary-verify chains via workflow_run →
# redeploy-tenants-on-main chains via workflow_run + branches:[main].
if gh workflow run publish-workspace-server-image.yml \
--repo "$REPO" --ref main 2>&1; then
echo "::notice::Dispatched publish-workspace-server-image on ref=main as molecule-ai App — canary-verify and redeploy-tenants-on-main will chain via workflow_run."
{
echo "## 🚀 Tenant redeploy chain dispatched"
echo
echo "- publish-workspace-server-image (workflow_dispatch on \`main\`, actor: \`molecule-ai[bot]\`)"
echo "- canary-verify will chain on completion"
echo "- redeploy-tenants-on-main will chain on canary green"
} >> "$GITHUB_STEP_SUMMARY"
else
echo "::error::Failed to dispatch publish-workspace-server-image. Run manually: gh workflow run publish-workspace-server-image.yml --ref main"
fi
# ALSO dispatch auto-sync-main-to-staging.yml. Same root cause as
# publish above (issue #2357): the merge-queue-initiated push to
# main is by GITHUB_TOKEN → no `on: push` triggers fire downstream.
# Without this dispatch, every staging→main promote leaves staging
# one merge commit BEHIND main, which silently dead-locks the NEXT
# promote PR as `mergeStateStatus: BEHIND` because main's
# branch-protection has `strict: true`. Verified empirically on
# 2026-05-02 against PR #2442 (Phase 2 promote): only the explicit
# publish-workspace-server-image dispatch fired on the previous
# promote SHA 76c604fb, while auto-sync silently no-op'd, leaving
# staging behind for ~24h until manually bridged.
if gh workflow run auto-sync-main-to-staging.yml \
--repo "$REPO" --ref main 2>&1; then
echo "::notice::Dispatched auto-sync-main-to-staging on ref=main as molecule-ai App — staging will absorb the new main merge commit via PR + merge queue."
else
echo "::error::Failed to dispatch auto-sync-main-to-staging. Run manually: gh workflow run auto-sync-main-to-staging.yml --ref main"
# Find existing PR for this branch (idempotent on workflow
# restart) before creating a new one.
PR_NUM=$(gh pr list --head "$BRANCH" --base staging --state open --json number --jq '.[0].number // ""')
if [ -z "$PR_NUM" ]; then
# Body lives in a temp file to keep the multi-line content
# out of the YAML block scalar (un-indented newlines inside
# an inline shell string break YAML parsing).
BODY_FILE=$(mktemp)
if [ "$DID_FF" = "true" ]; then
TITLE="chore: sync main → staging (auto, ff to ${MAIN_SHORT})"
cat > "$BODY_FILE" <<EOFBODY
Automated fast-forward of \`staging\` to \`origin/main\` (\`${MAIN_SHORT}\`). Staging has no in-flight commits that diverge from main. Merge queue lands this; no human action needed.
This PR is auto-generated by \`.github/workflows/auto-sync-main-to-staging.yml\` on every push to \`main\`. It exists because this repo's \`staging\` branch has a \`merge_queue\` ruleset that blocks direct pushes — even from the GitHub Actions integration.
EOFBODY
else
TITLE="chore: sync main → staging (auto, merge ${MAIN_SHORT})"
cat > "$BODY_FILE" <<EOFBODY
Automated merge of \`origin/main\` (\`${MAIN_SHORT}\`) into \`staging\`. Staging has commits main doesn't, so this is a non-ff merge that absorbs main's tip. Merge queue lands this.
This PR is auto-generated by \`.github/workflows/auto-sync-main-to-staging.yml\` on every push to \`main\`.
EOFBODY
fi
# gh pr create prints the URL on stdout; extract the PR number.
gh pr comment "$PR" -R "$REPO" --body "🔒 Auto-merge disabled — new commit (\`${NEW_SHA:0:7}\`) pushed after auto-merge was enabled. The merge queue locks SHAs at entry, so subsequent pushes can race. Verify the new commit and re-enable with \`gh pr merge --auto\`."
- name: Gitea no-op
if: steps.host.outputs.is_gitea == 'true'
run: echo "Gitea Actions — auto-merge gating not applicable; no-op (job intentionally green so branch protection's required-check name lands SUCCESS)."
# Specifically match the 422 duplicate-base/head error so
# any OTHER PATCH failure (auth, deleted PR, etc.) still
# surfaces as a real workflow failure.
if echo "$PATCH_OUTPUT" | grep -q "pull request already exists for base branch 'staging'"; then
  echo "::notice::PR#${PR_NUMBER}: duplicate target-staging PR exists on same head — closing this main-PR as redundant."
  gh pr close "$PR_NUMBER" \
    --repo "${{ github.repository }}" \
    --comment "[retarget-bot] Closing — another PR on the same head branch already targets \`staging\`. This PR is redundant. See issue#1884 for the rationale."
else
  echo "::error::Retarget PATCH failed and was NOT a duplicate-base error:"
  echo "$PATCH_OUTPUT" >&2
  exit 1
fi
- name: Post explainer comment
  if: steps.retarget.outputs.outcome == 'retargeted'
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    PR_NUMBER: ${{ github.event.pull_request.number }}
  run: |
    gh pr comment "$PR_NUMBER" \
      --repo "${{ github.repository }}" \
      --body "$(cat <<'BODY'
    [retarget-bot] This PR was opened against `main` and has been retargeted to `staging` automatically.
    **Why:** per [SHARED_RULES rule 8](https://git.moleculesai.app/molecule-ai/molecule-ai-org-template-molecule-dev/blob/main/SHARED_RULES.md), all feature work targets `staging` first; the CEO promotes `staging → main` separately.
    **What changed:** just the base branch — no code change. CI will re-run against `staging`. If you get merge conflicts, rebase on `staging`.
    **If this PR is the CEO's staging→main promotion:** the Action skipped you (only bot-authored PRs are retargeted). If you see this comment on your CEO PR, that's a bug — please tag @HongmingWang-Rabbit.
    BODY
    )"
# ADR-002: Local-build mode signalled by `MOLECULE_IMAGE_REGISTRY` presence
* Status: Accepted (2026-05-07)
* Issue: #63 (closes Task #194)
* Decision: Hongming (CTO) + Claude Opus 4.7 (implementation)
## Context
Pre-2026-05-06, every Molecule deployment — both production tenants and OSS contributor laptops — pulled workspace-template-* container images from `ghcr.io/molecule-ai/`. Production tenants additionally set `MOLECULE_IMAGE_REGISTRY` to an AWS ECR mirror via Railway env / EC2 user-data, but the OSS default was the upstream GHCR org.
On 2026-05-06 the `Molecule-AI` GitHub org was suspended (saved memory: `feedback_github_botring_fingerprint`). GHCR now returns **403 Forbidden** for every `molecule-ai/workspace-template-*` manifest. OSS contributors who clone `molecule-core` and run `go run ./workspace-server/cmd/server` cannot provision a workspace — every first provision fails with:
```
docker image "ghcr.io/molecule-ai/workspace-template-claude-code:latest" not found after pull attempt
```
Production tenants are unaffected (their `MOLECULE_IMAGE_REGISTRY` points at ECR, which we still control), but OSS onboarding is broken. Workspace template repos are intentionally separate from `molecule-core` (each runtime is OSS-shape and forkable), and they are mirrored to Gitea (`https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-<runtime>`) — but the provisioner has no path that consumes Gitea source directly.
## Decision
When `MOLECULE_IMAGE_REGISTRY` is **unset** (or empty), the provisioner switches to a **local-build mode** that:
1. Looks up the workspace-template repo's HEAD sha on Gitea via a single API call.
2. Checks whether a SHA-pinned local image (`molecule-local/workspace-template-<runtime>:<sha12>`) already exists; if so, reuses it.
3. Otherwise shallow-clones the repo into `~/.cache/molecule/workspace-template-build/<runtime>/<sha12>/` and runs `docker build --platform=linux/amd64 -t <tag> .`.
4. Hands the SHA-pinned tag to Docker for ContainerCreate, bypassing the registry-pull path entirely.
When `MOLECULE_IMAGE_REGISTRY` is **set**, behavior is unchanged: pull the image from that registry. Existing prod tenants and self-hosters who mirror to a private registry are not affected.
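The registry-vs-local branch above can be sketched as a tiny selector. This is illustrative only — the real logic lives in the Go provisioner; `resolve_image_ref` is a hypothetical name, and the `:latest` tag in SaaS-pull mode mirrors the GHCR example in the Context section.

```shell
# Hypothetical sketch of the mode selection described in the Decision section.
# Real implementation is Go; names here are illustrative.
resolve_image_ref() {
  local runtime="$1" sha="$2"
  if [ -n "${MOLECULE_IMAGE_REGISTRY:-}" ]; then
    # SaaS-pull mode: unchanged registry path.
    echo "${MOLECULE_IMAGE_REGISTRY}/workspace-template-${runtime}:latest"
  else
    # Local-build mode: SHA-pinned local tag, sha truncated to 12 chars.
    echo "molecule-local/workspace-template-${runtime}:${sha:0:12}"
  fi
}
```

Note the empty-string guard (`:-`): an explicitly empty `MOLECULE_IMAGE_REGISTRY=` behaves the same as unset, matching the "unset (or empty)" wording above.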
## Consequences
### Positive
* **Zero-config OSS onboarding** — `git clone molecule-core && go run ./workspace-server/cmd/server` boots end-to-end without any registry credentials.
* **Production tenants protected** — same env var, same semantics in SaaS-mode. Migration is a no-op.
* **No new env var** — extending an existing var's semantics ("where to pull, OR build locally if absent") rather than introducing `MOLECULE_LOCAL_BUILD=1` keeps the surface small.
* **SHA-pinned cache** — repeat builds are O(API-call); only template-repo HEAD changes invalidate.
* **Production-parity image** — amd64 emulation on Apple Silicon honours `feedback_local_must_mimic_production`. The provisioner's existing `defaultImagePlatform()` already forces amd64 for parity; building amd64 locally lets that decision stay consistent.
### Negative
* **Conflates two concerns** — `MOLECULE_IMAGE_REGISTRY` now signals BOTH "where to pull" AND "build locally if absent." A future operator who unsets it expecting a hard error will instead get a slow first-provision. Documented in the runbook.
* **First-provision is slow on Apple Silicon** — 5–10 min via QEMU emulation on the cold path. Mitigated by the SHA cache (subsequent runs are a <1 s lookup + 0 s build).
* **Coverage gap** — only 4 of 9 runtimes are mirrored to Gitea today (`claude-code`, `hermes`, `langgraph`, `autogen`). The other 5 fail with an actionable "not mirrored" error. Mirroring those repos is a separate task.
* **Implicit trust boundary** — operator running `go run` implicitly trusts `molecule-ai/molecule-ai-workspace-template-*` repos on Gitea. This is the same trust they would extend to the GHCR images today; not a new attack surface.
## Alternatives considered
1. **New env var `MOLECULE_LOCAL_BUILD=1`** — explicit, but requires OSS contributors to know it exists. Violates the zero-config goal.
2. **Push pre-built images to a Gitea container registry, mirror tag from upstream** — operationally cleaner but: (a) Gitea's container-registry add-on isn't deployed on the operator host, (b) defeats the OSS-contributor goal of "hack on the source, see your changes," since they'd still pull a stale image.
3. **Embed Dockerfiles in molecule-core itself, drop the standalone template repos** — would work but breaks the OSS-shape principle; templates are intentionally separable, anyone-can-fork artifacts.
4. **Build native arch on Apple Silicon (arm64) and drop the platform pin in local-mode** — fast, but creates `linux/arm64` images that diverge from the amd64-only prod runtime. Local-vs-prod debug behavior would diverge. Rejected per `feedback_local_must_mimic_production`.
## Security review
* **Gitea repo URL allowlist** — runtime name must be in the `knownRuntimes` allowlist (defence-in-depth against a future code path that lets cfg.Runtime carry untrusted input). Repo prefix is hardcoded to `https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-`; forks can override via `MOLECULE_LOCAL_TEMPLATE_REPO_PREFIX` (opt-in, default off).
* **Token handling** — clones are anonymous over HTTPS by default (templates are public). `MOLECULE_GITEA_TOKEN`, if set, is passed via URL userinfo for the clone and as `Authorization: token` for the API call. The token is **masked in every log line** via `maskTokenInURL` / `maskTokenInString` and never appears in the cache dir path.
* **No silent fallback** — if Gitea is unreachable or the runtime isn't mirrored, we return a clear error mentioning the repo URL and the missing runtime. We **never** fall back to GHCR/ECR (that would be a confusing bug for an OSS contributor who happened to have stale ECR creds in their docker config).
* **Build-arg injection** — `docker build` is invoked with NO `--build-arg` from external input. Dockerfile is consumed as-is.
* **Cache poisoning** — cache key is the Gitea HEAD sha + Dockerfile content; a force-push to the template repo's main branch regenerates the key on next run. Cache dir is per-user (`$HOME/.cache`), so cross-user attacks aren't relevant in single-user dev mode.
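The token-masking rule above can be illustrated with a shell equivalent of the intended behavior. This is a hypothetical port — the real `maskTokenInURL` is a Go helper; only the output shape (userinfo credentials replaced with `***`) is asserted here.

```shell
# Illustrative shell version of the masking behavior described above: strip
# "user:token@" userinfo from an HTTP(S) URL before it reaches a log line.
mask_token_in_url() {
  printf '%s\n' "$1" | sed -E 's#(https?://)[^/@]+@#\1***@#'
}
```

A URL without userinfo passes through unchanged, so the helper is safe to apply unconditionally to every logged clone URL.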
## Versioning + back-compat
* Existing prod tenants set `MOLECULE_IMAGE_REGISTRY=<ECR url>` → unchanged behavior.
* Existing local installs that set the var → unchanged behavior.
* Existing local installs that don't set it → switch to local-build path. Migration: none required (additive); first provision will take 5–10 min instead of failing.
This is acceptable for MVP because:
- All workspaces are provisioned by the same platform on trusted infrastructure
- Docker network isolation (`molecule-core-net`) limits who can reach workspace endpoints
- The tool is self-hosted — the operator controls the network
**Known gap:** Once workspace A caches workspace B's URL, nothing stops A from calling B directly even after the hierarchy changes and A is no longer supposed to reach B. The cached URL remains valid until the container is restarted or the URL changes.
**Status:** living document — update when you ship a feature that touches one backend.
**Owner:** workspace-server + controlplane teams.
**Last audit:** 2026-05-07 (plugin install/uninstall closed for EC2 backend via EIC SSH push to the bind-mounted `/configs/plugins/<name>/`, mirroring the Files API PR #1702 pattern).
## Why this exists
For "do we have any backend?", use `HasProvisioner()`, never bare `h.provisioner`.
| **Files API** | | | | |
| List / Read / Write / Replace / Delete | `container_files.go`, `template_import.go` | `docker exec` + tar `CopyToContainer` | SSH via EIC tunnel (PR #1702) | ✅ parity as of 2026-04-22 (previously docker-only) |
| **Plugins** | | | | |
| Install / uninstall / list | `plugins_install.go` + `plugins_install_eic.go` | `deliverToContainer()` → exec+`CopyToContainer` on local container | `instance_id` set → EIC SSH push of the staged tarball into the EC2's bind-mounted `/configs/plugins/<name>/` (per `workspaceFilePathPrefix`), `chown 1000:1000`, restart | ✅ parity |
## Docker Networking (Tier 1-3, Tier 4 uses host)
All workspace containers join the `molecule-core-net` Docker network. Containers are named `ws-{id[:12]}` (first 12 chars of workspace UUID). Two exported helpers in `provisioner` package provide the canonical naming:
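A minimal sketch of that naming contract (the real helpers are exported from the Go `provisioner` package; these shell ports are illustrative):

```shell
# Illustrative shell ports of the canonical naming helpers described above.
# The Go originals in the provisioner package are the source of truth.
container_name_for() {
  local ws_id="$1"
  echo "ws-${ws_id:0:12}"  # "ws-" + first 12 chars of the workspace UUID
}

internal_url_for() {
  # Docker-internal URL used for workspace-to-workspace discovery.
  echo "http://$(container_name_for "$1"):8000"
}
```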
**Why not use Docker-internal URLs?** In local dev, the platform runs on the host (not in Docker), so it cannot resolve Docker container hostnames. The ephemeral port mapping lets the A2A proxy reach agents via localhost. In production (platform in Docker), the Docker-internal URL (`http://ws-{id}:8000`) would work directly.
**Workspace-to-workspace discovery:** When a workspace discovers another workspace (via `X-Workspace-ID` header on `GET /registry/discover/:id`), the platform returns the Docker-internal URL (`http://ws-{first12chars}:8000`) so containers can reach each other directly on `molecule-core-net`. The internal URL is cached in Redis at provision time and also synthesized as a fallback if the cache misses (only for online/degraded workspaces).
For external HTTPS access (multi-host mode), Nginx on the host handles TLS termination and proxies to the container.
Canvas supports **three themes**: System (follows OS), Light, Dark. Controlled via `ThemeProvider` in `theme-provider.tsx` with preference persisted in `mol_theme` cookie.
**Key principle: Use semantic tokens, NOT raw zinc values for surfaces.**
### 1.1 Theme-Mutable Tokens (use these for surfaces)
Defined in `globals.css` via Tailwind v4 `@theme` block. Automatically flip between light/dark.
These are applied after CORS middleware on every response.
## 14. No Exposed Database Ports
Postgres and Redis must not expose host ports. They communicate exclusively over the internal Docker network (`molecule-core-net`). Use `docker compose exec` for direct access during development.
OSS contributors who run `molecule-core` locally do **not** need to authenticate to GHCR or AWS ECR. When the `MOLECULE_IMAGE_REGISTRY` env var is **unset**, the platform automatically:
1. Looks up the HEAD sha of `https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-<runtime>` (single API call, no clone).
2. If a local image tagged `molecule-local/workspace-template-<runtime>:<sha12>` already exists, reuses it (cache hit).
3. Otherwise, shallow-clones the repo into `~/.cache/molecule/workspace-template-build/<runtime>/<sha12>/` and runs `docker build --platform=linux/amd64 -t <tag> .`.
4. Hands the SHA-pinned tag to Docker for `ContainerCreate`.
**First-provision build time:** 5–10 min on Apple Silicon (amd64 emulation). Subsequent provisions hit the cache and start in seconds. Cache is invalidated automatically when the template repo's HEAD moves.
**Currently mirrored on Gitea:** `claude-code`, `hermes`, `langgraph`, `autogen`. Other runtimes (`crewai`, `deepagents`, `codex`, `gemini-cli`, `openclaw`) fail with an actionable "not mirrored to Gitea" error pointing at the missing repo.
**Production tenants are unaffected** — every prod tenant sets `MOLECULE_IMAGE_REGISTRY` to its private ECR mirror via Railway env / EC2 user-data, so the SaaS pull path stays identical.
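The SHA-pinned cache layout from step 3 can be sketched as follows. The path layout comes from the steps above; the helper name itself is illustrative, not a real CLI.

```shell
# Illustrative helper computing the build-cache directory for a runtime and
# its template-repo HEAD sha (truncated to 12 chars, per the layout above).
build_cache_dir() {
  local runtime="$1" sha="$2"
  echo "${HOME}/.cache/molecule/workspace-template-build/${runtime}/${sha:0:12}"
}
```

Because the sha is part of both the cache directory and the image tag, a moved HEAD naturally produces a fresh clone and a fresh build with no explicit invalidation step.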
### Environment overrides
| Var | Default | Use case |
|-----|---------|----------|
| `MOLECULE_IMAGE_REGISTRY` | (unset) | Set to a real registry URL to switch from local-build to SaaS-pull mode. |
| `MOLECULE_LOCAL_TEMPLATE_REPO_PREFIX` | `https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-` | Point at a fork. |
| `MOLECULE_GITEA_TOKEN` | (unset) | Required only if your fork has private template repos. |
### Verifying a switch from the GHCR-retag stopgap
Pre-fix, OSS contributors worked around the suspended GHCR org by manually retagging an `:latest` image. After this change, that workaround is **redundant**: simply unset `MOLECULE_IMAGE_REGISTRY` (or leave it unset), boot the platform, and provision a workspace. Logs will show:
```
Provisioner: local-build mode → using locally-built image molecule-local/workspace-template-claude-code:<sha12> for runtime claude-code
```
If you still see `ghcr.io/molecule-ai/...` in the boot log, double-check `env | grep MOLECULE_IMAGE_REGISTRY` — a stale shell export from the pre-fix workaround could keep SaaS-mode active.
"_comment":"OSS surface registry — every repo listed here MUST be public on git.moleculesai.app. Layer-3 customer/private templates are NOT registered here; they are handled at provision-time via the per-tenant credential resolver (see internal#102 RFC). 'main' refs are pinned to tags before broad rollout.",
# Safe: workflow always fires. If there are inner per-step if-
# gates (single-job-with-per-step-if pattern), the no-op step
# produces SUCCESS for the protected name — branch-protection-clean.
continue
fi
# Unsafe candidate — has top-level paths: AND we need to verify
# the per-step if-gate pattern is absent. Look for any `if:`
# referencing a paths-filter / detect-changes output inside the
# owning job's body. If at least one is present, classify as the
# single-job-with-per-step-if pattern (safe).
#
# The regex is intentionally anchored loosely — actual workflow
# YAML writes per-step if-gates as ` - if: needs.X.outputs.Y`
# (with the `-` step-marker between the leading spaces and the
# `if`). Anchoring on `^[[:space:]]+if:` would miss those.
if grep -qE "if:[[:space:]]+needs\.[a-zA-Z_-]+\.outputs\." "$f"; then
# Per-step if-gates exist. Combined with top-level paths: this
# would be a buggy mix (the workflow might still skip entirely
# when paths exclude). Flag as unsafe — the safe pattern omits
# the top-level paths: filter altogether and gates per-step.
unsafe_findings+=("UNSAFE-MIX: $rel has top-level paths: AND per-step if-gates — when paths exclude the commit, the workflow doesn't fire and the required check '$name' is silently absent. Drop the top-level paths: filter; keep the per-step if-gates.")
failed=1
else
# Top-level paths: with no per-step if-gates: the canonical
# check-name parity bug.
unsafe_findings+=("UNSAFE-PATH-FILTER: $rel has top-level paths: filter and no per-step if-gates. When paths exclude the commit, no job emits the required check '$name' — branch protection waits forever. Either drop the paths: filter and add per-step if-gates against a detect-changes output, or add an aggregator-with-needs+always() job that emits '$name'.")
failed=1
fi
done
done <<< "$required_names"
if [[ "$failed" -eq 0 ]]; then
  echo "check_name_parity: OK — every required check name maps to a safe workflow shape."
  exit 0
fi
echo "check_name_parity: FOUND ${#unsafe_findings[@]} issue(s):" >&2
t.Errorf("preflightContainerHealth must call provisioner.IsRunning OR provisioner.RunningContainerName for the SSOT health check — see molecule-core#36. Found neither.")
}
if callsContainerInspectRaw {
t.Errorf("preflightContainerHealth carries a direct ContainerInspect call. This is the parallel-impl drift molecule-core#36 fixed. "+
"Either route through provisioner.IsRunning OR — if a new use case truly needs a different inspect — extend the helper's contract first and update this gate to allow the specific delta.")