molecule-core/main branch protection requires the status-check context
'Secret scan / Scan diff for credential-shaped strings (pull_request)'
but the workflow lived only in .github/workflows/, which Gitea Actions
doesn't see — every PR's required-status-checks rollup left the context
in 'expected' / never-fires state, blocking merge.
Port to .gitea/workflows/secret-scan.yml. Drops:
- merge_group event (Gitea has no merge queue)
- workflow_call (no cross-repo reusable invocation on Gitea)
SELF exclude lists both .github/ and .gitea/ paths so a future sync
between them stays clean. Job + step names match the GitHub workflow
so the produced status-check context name matches branch protection
unchanged.
Same regex set as the runtime's pre-commit hook
(molecule-ai-workspace-runtime: molecule_runtime/scripts/pre-commit-checks.sh).
This unblocks PR #150 (audit-force-merge fan-out) and every future
PR on molecule-core/main.
Mirrors the canonical workflow shipped on internal#120 + #122. Same
shape: pull_request_target on closed, base.sha checkout, structured
JSON event to runner stdout that Vector ships to Loki on
molecule-canonical-obs.
REQUIRED_CHECKS env declares both molecule-core/main protected
contexts (sop-tier-check + Secret scan). Mirror against branch
protection if either is added/removed.
Verified end-to-end on internal: synthetic force-merge of internal#123
emitted incident.force_merge with all expected fields, indexable in
Loki via {host="molecule-canonical-1"} |= "incident.force_merge".
Tier: low (CI workflow, no platform code path).
Closes the post-PR-#174 self-review gap: the matched-pair contract
between ADMIN_TOKEN (server-side bearer gate) and NEXT_PUBLIC_ADMIN_TOKEN
(canvas client-side bearer attach) was descriptive only, living in a
.env file comment. Future agents/devs could re-misconfigure with one
of the two unset and silently 401 — every workspace API call refused
with no actionable diagnostic.
Adds checkAdminTokenPair() to canvas/next.config.ts, run after
loadMonorepoEnv() so it sees the post-load state. Two distinct
warnings (server-set/client-unset and the inverse) so an operator can
tell which half is missing without grep'ing. Empty string is treated
as unset so KEY= and unset KEY produce the same verdict.
Warn-only, not exit — production canvas Docker images bake these vars
at image-build time and a hard exit would turn a recoverable auth
issue into a crashloop. The console.error fires in `next dev`, the
standalone server's stdout, and the canvas Docker container logs —
the three places an operator looks when "everything 401s."
Tests pin exact stderr strings (per feedback_assert_exact_not_substring)
across 6 cases: both unset, both set, ADMIN_TOKEN-only, NEXT_PUBLIC-only,
empty-string-as-unset, and the empty-string-asymmetric mismatch.
Mutation-tested: flipping the if-condition from === to !== fails all 6.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mirrors the canonical refactor: workflow YAML shrinks (env+invocation),
logic moves to .gitea/scripts/sop-tier-check.sh, debug echoes gated on
SOP_DEBUG, checkout@v6 pinned to base.sha.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Fans the security fix from internal#116 (cce89067) to molecule-core. Same
rationale: pull_request loads workflow from PR HEAD, allowing any
write-access contributor to rewrite the workflow file in their PR and
exfiltrate SOP_TIER_CHECK_TOKEN. pull_request_target loads from base
(main), neutralising the attack.
Verified post-merge on internal: synthetic PR rewriting the workflow to
print the token did NOT execute the modified version — main's
pull_request_target version ran instead. ATTACK_PROBE never fired.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase-1 fan-out of §SOP-6 enforcement to molecule-core. No branch
protection change in this PR — workflow runs and reports a status,
doesn't block any merge yet.
Branch protection update is the follow-up PR after the workflow
demonstrates a green run on its own PR, per the Phase 2 plan.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The org.import.started event was firing immediately after request body
bind, before the YAML at body.Dir was loaded. Result: payload.name was
"" whenever the caller passed `dir` (the common path — the canvas and
all live imports use dir, not inline template). Three started rows
already in the local platform's structure_events have empty name.
Fix: move the started emit (and importStart timestamp) to after the
YAML unmarshal / inline-template fallthrough, where tmpl.Name is
guaranteed populated.
Bonus: pre-parse error returns (invalid body, traversal-rejected dir,
file-not-found, YAML expansion fail, YAML unmarshal fail, neither dir
nor template provided) no longer emit an orphan started row — every
started is now guaranteed a paired completed/failed.
Verified live against running platform: re-imported molecule-dev-only,
new started row in structure_events carries
"Molecule AI Dev Team (dev-only)" instead of "".
Tests: full handler suite green (`go test ./internal/handlers/`).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Drops ~150 lines of duplicated cascade logic from the Delete HTTP
handler — workspace_crud.go's CascadeDelete (added in PR #137) and
Delete() were running the same #73 race-guard sequence (status update →
canvas_layouts → tokens → schedules → container stop → broadcast),
just with Delete() inlined and CascadeDelete owning the OrgImport
reconcile path.
CascadeDelete now returns the descendant id list (was: count) so
Delete() can drive the optional ?purge=true hard-delete against the
same set the cascade just touched.
Net diff: workspace_crud.go shrinks from ~270 lines in Delete() to
~75 lines (parse + 409 confirm gate + CascadeDelete call + stop-error
500 + purge block + 200 response). Behavior identical — same SQL
ordering, same #73 race guard, same response shapes. Three sqlmock
tests for the 0-children case gained one extra ExpectQuery for the
recursive-CTE descendants scan (the old inline code skipped that
query when len(children)==0; CascadeDelete walks unconditionally —
returns 0 rows, same end state, one extra cheap query).
Tests: full handler suite green (`go test ./internal/handlers/`).
Live-tested against the running local platform: DELETE on a fake
workspace returns `{"cascade_deleted":0,"status":"removed"}`,
fleet of 9 workspaces preserved, refactored handler matches the
prior wire-shape exactly.
Tracked as the PR #137 follow-up tech-debt item.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the additive-import zombie bug — re-running /org/import with a
tree shape that reparents same-named roles left the prior workspace
online because lookupExistingChild's dedupe is parent-scoped (different
parent_id → "different" workspace). Caught 2026-05-08 after a dev-tree
re-import left 8 orphans co-existing with the new tree on canvas until
manual cascade-delete.
Three layers in this PR:
- mode="reconcile" on /org/import — after the import loop, online
workspaces whose name matches an imported name but whose id isn't in
the result set are cascade-deleted. Default mode "" / "merge"
preserves existing additive behavior. Empty-set guards prevent
accidental "delete everything" if either array comes up empty.
- WorkspaceHandler.CascadeDelete extracted as a callable helper from
the existing Delete HTTP handler so OrgImport's reconcile path shares
the same teardown sequence (#73 race guard, container stop, volume
removal, token revocation, schedule disable, event broadcast). The
HTTP Delete handler still inlines the same logic; deduplication
tracked as tech-debt follow-up.
- emitOrgEvent(structure_events) records org.import.started +
org.import.completed with mode, created/skipped/reconcile_removed
counts, duration_ms, error. Replaces the lost-on-restart stdout-only
log shape for an audit-trail surface that's queryable by SQL. Closes
the "what happened at 20:13?" debugging gap that motivated this fix.
Verified live against the local platform: cascade-delete on an old
tree's removed root cleared 8 surviving orphans; mode="reconcile" with
a freshly-INSERTed fake orphan removed exactly the fake; idempotent
re-run of reconcile is a no-op (0 removed, no errors); structure_events
captures every started+completed pair with full payload.
7 new unit tests (walkOrgWorkspaceNames flat/nested/spawning:false/
empty-name; emitOrgEvent success + DB-error-swallow; errString). Full
handler suite green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 4 follow-up to template-claude-code PR #9 (2026-05-08 dev-tree wedge).
Pre-fix: applyRuntimeModelEnv unconditionally overwrote envVars["MODEL"]
with the MODEL_PROVIDER slug whenever payload.Model was empty (the restart
path). This silently wiped the operator'\''s explicit per-persona MODEL
secret on every restart.
Symptom: dev-tree workspaces booted correctly on first /org/import (the
envVars map was populated direct from the persona env file with both
MODEL=MiniMax-M2.7-highspeed and MODEL_PROVIDER=minimax), then on the
next Restart the MODEL secret got clobbered to literal "minimax" — a
provider slug, not a valid model id — and the workspace template'\''s
adapter failed to match any registry prefix, fell through to providers[0]
(anthropic-oauth), and wedged at SDK initialize.
Fix: resolution order in applyRuntimeModelEnv is now:
1. payload.Model (caller passed the canvas-picked model id verbatim)
2. envVars["MODEL"] (workspace_secret persisted from persona env)
3. envVars["MODEL_PROVIDER"] (legacy canvas Save+Restart shape)
Tests
-----
TestApplyRuntimeModelEnv_PersonaEnvMODELSecretPreserved — locks in
the new resolution order with four cases:
- MODEL secret wins over MODEL_PROVIDER slug (persona-env shape)
- MODEL secret wins even when same as MODEL_PROVIDER
- MODEL absent → fall back to MODEL_PROVIDER (legacy shape)
- Both absent → no MODEL set (no-op)
Existing TestApplyRuntimeModelEnv_SetsUniversalMODELForAllRuntimes
continues to pass — fix is strictly additive on the precedence chain.
Lets a workspace declare it (and its entire subtree) should be skipped
during /org/import. Pointer-typed `*bool` so we distinguish "explicitly
false" from "unset" (default = spawn).
## Use case
The dev-tree org template ships the full role taxonomy (Dev Lead with
Core Platform / Controlplane / App & Docs / Infra / SDK Leads, each with
their own engineering / QA / security / UI-UX children — 27 personas
total in a single import). Some setups need a smaller set:
- Local dev on a memory-constrained machine
- Demo / smoke runs that don't need the full org breathing
- Customer trials starting with leadership-only before fan-out
Pre-fix the only options were:
- Edit the canonical template (mutates shared state)
- Author a parallel slimmer template (duplicates structure)
- Manual workspace deprovision after full import (wasteful — already paid
the docker pull / build cost)
`spawning: false` is the per-workspace knob that solves this without
touching the canonical template structure.
## Semantics
- Unset: workspace spawns (current behaviour, no migration)
- `spawning: true`: explicitly spawns (same as unset)
- `spawning: false`: workspace is skipped AND every descendant is
skipped. The guard sits BEFORE any side effect in
createWorkspaceTree — no DB row, no docker provision, no children
recursion. A false-spawning subtree is genuinely a no-op except for
the log line. countWorkspaces still counts the subtree (so /org/templates
numbers reflect the full structure).
## Stage A — verified
Local dev-only template that wraps teams/dev.yaml (Dev Lead) with
children:[] cleared on the 5 sub-team yaml files, plus 3 floater
personas (Release Manager / Integration Tester / Fullstack Engineer).
/org/import returned 9 workspaces. Drop-in: same result via
`spawning: false` on each sub-tree root in the future.
## Stage B — N/A
Pure additive feature on the org-template handler. No SaaS deploy chain
implications.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## org_import.go — persona env injection root-cause fix
The Phase-3 fix from earlier today (`feedback/per-agent-gitea-identity-default`)
introduced loadPersonaEnvFile to inject persona-specific creds into
workspace_secrets on /org/import. It passed `ws.Role` as the persona-dir
lookup key, but in our dev-tree org.yaml shape `role:` carries the
multi-line descriptive text the agent reads from its prompt
("Engineering planning and team coordination — leads Core Platform,
Controlplane, ..."), while `files_dir:` holds the short slug
(`core-lead`, `dev-lead`, etc.) matching
`~/.molecule-ai/personas/<files_dir>/env`.
isSafeRoleName silently rejected the multi-word role text → no persona
env loaded → every imported workspace booted with zero
workspace_secrets rows → no ANTHROPIC / CLAUDE_CODE / MINIMAX auth in
the container env → claude_agent_sdk wedged on `query.initialize()`
with a 60s control-request timeout.
After the fix, /org/import on the dev tree (27 personas) populates
8 workspace_secrets per workspace (Gitea identity + MODEL/MODEL_PROVIDER
+ provider-specific token), 5 of 6 leads boot online, and the
remaining wedges trace to a separate runtime-template-repo bug
(workspace-template-claude-code's claude_sdk_executor.py doesn't
dispatch on MODEL_PROVIDER=minimax — filed separately).
## Dockerfile.dev — docker-cli + docker-cli-buildx
Without these, every claude-code/tier-2 workspace POST fails-fast:
- docker-cli alone produces `exec: "docker": executable file not found`
- docker-cli alone (no buildx) fails on `docker build` with
`ERROR: BuildKit is enabled but the buildx component is missing or broken`
Both packages are now installed in the dev image; verified with
`docker exec molecule-core-platform-1 docker buildx version`.
## Stage A verified
Local /org/import dev-only path: 27 workspaces created, all 27 receive
persona env injection (8 secrets each — Gitea identity + provider creds).
Lead workspaces (claude-code-OAuth tier) boot online.
## Stage B — N/A
Local-dev-only path (docker-compose.dev.yml + dev image). Tenant EC2
provisioning uses Dockerfile.tenant (untouched).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Follow-up to the workspace-template visibility flip in 558e4fee. After
flipping the 5 private workspace-templates public (#192 root cause),
the harness-replays clone moved one step deeper to the org-templates
list, where 6 of 7 were also private. Hongming-confirmed flip plan:
- 5 of 6 (molecule-dev, free-beats-all, medo-smoke, molecule-worker-gemini,
ux-ab-lab) — flipped public per `feedback_oss_first_repo_visibility_default`.
These are unambiguously OSS-template-shape: generic README, no
customer-shaped names, no creds in content.
- 1 of 6 (reno-stars) — name itself is customer-shaped (would expose
customer/tenant identity). Kept private; removed from manifest.json
per Hongming. Will be handled at provision-time via the per-tenant
credential resolver designed in internal#102 (Layer-3 RFC).
Documents the OSS-surface contract in two places:
- manifest.json _comment: every entry MUST be public; Layer-3 lives elsewhere
- clone-manifest.sh comment block: rationale + the explicit ci-readonly
team-grant escape hatch (review-gated, not default).
Closes the second clone-fail layer of #192. Combined with 558e4fee +
the workspace-template visibility flips, the Pre-clone manifest deps
step should now succeed anonymously for the full registered set.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
5 of 9 workspace-template repos (openclaw, codex, crewai, deepagents,
gemini-cli) had been marked private with no team grant for AUTO_SYNC_TOKEN
bearer (devops-engineer persona). Pre-clone manifest deps step 404'd on
the first private repo encountered, failing every Harness Replays run.
Resolution path taken:
1. Flipped the 5 to public per `feedback_oss_first_repo_visibility_default`
— runtime/template/plugin repos default public; that's what makes them
OSS surface.
2. Scoped existing `ci-readonly` org team to legitimately-internal repos
only (compliance docs, RFCs-in-flight). Workspace templates removed
from it.
3. Filed internal#102 RFC for Layer-3 (customer-owned + marketplace
third-party private repos) — that's a different shape entirely;
needs per-tenant credential-resolver, not org-team grants.
This commit is a documentation-only touch on the workflow file to (a)
record the root cause inline next to the existing pre-clone-fail
narrative, (b) trigger a fresh Harness Replays run that should now pass
the clone step.
Closes#192.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Investigating molecule-core#129 failure mode #1 (claude-code "Agent
error (Exception)") needs the workspace's docker logs to find the
actual exception. The canary tears down the tenant on every failure,
so the workspace container is destroyed before anyone can SSM in.
Add a workflow_dispatch input `keep_on_failure: bool` (default false).
When true, sets `E2E_KEEP_ORG=1` for the canary script — its existing
debug path skips teardown, leaving the tenant + EC2 + CF tunnel + DNS
alive. Operator can then SSM into the workspace EC2 (via the same
flow as recover-tunnels.py) and capture `docker logs` from the
claude-code container.
Cron-triggered runs never set the input (it only exists on dispatch),
so unattended scheduled canaries always tear down — no risk of
unattended cost leak.
Operator workflow:
1. Dispatch canary-staging.yml with keep_on_failure=true
2. Watch CI; on failure (likely, given the 38h chronic red),
note the SLUG / TENANT_URL printed at step 1/11
3. SSM exec into the workspace EC2 (us-east-2) and run
`docker logs <claude-code-container>` to find the actual
exception traceback
4. Manually delete via DELETE /cp/admin/tenants/<slug> when done
(the script logs this reminder on E2E_KEEP_ORG=1 path)
Refs: molecule-core#129 (canary investigation)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replaces the legacy nohup `go run ./cmd/server` setup with a fully
containerized local stack: postgres + redis + platform + canvas, all
with `restart: unless-stopped` so they survive Mac sleep/wake and
Docker Desktop daemon restarts.
## Changes
- **docker-compose.yml**
- `restart: unless-stopped` on platform/postgres/redis
- `BIND_ADDR=0.0.0.0` for platform — the dev-mode-fail-open default
of 127.0.0.1 (PR #7) made the host unable to reach the container
even with port mapping. Container netns is already isolated, so
binding all interfaces inside is safe.
- Healthchecks switched from `wget --spider` (HEAD → 404 forever
because /health is GET-only) to `wget -qO /dev/null` (GET).
Same regression existed on canvas; fixed both.
- **workspace-server/Dockerfile.dev**
- `CGO_ENABLED=1` → `0` to match prod Dockerfile + Dockerfile.tenant.
Without this, the alpine dev image fails with "gcc: not found"
because workspace-server has no actual cgo deps but the env was
forcing the cgo build path. Closes a divergence introduced in
9d50a6da (today's air hot-reload PR).
- **canvas/Dockerfile**
- `npm install` → `npm ci --include=optional` for lockfile-exact
installs that include platform-specific @tailwindcss/oxide native
binaries. Without these, `next build` fails with "Cannot read
properties of undefined (reading 'All')" on the
`@import "tailwindcss"` directive.
- **canvas/.dockerignore** (new)
- Excludes `node_modules` and `.next` so the Dockerfile's
`COPY . .` step doesn't clobber the freshly-installed container
node_modules with the host's (potentially stale or wrong-arch)
copy. This was the actual root cause of the canvas build break.
- **workspace-server/.gitignore**
- Adds `/tmp/` for air's live-reload build cache.
## Stage A verified
```
container status restart
postgres-1 Up (healthy) unless-stopped
redis-1 Up (healthy) unless-stopped
platform-1 Up (healthy, air-mode) unless-stopped
canvas-1 Up (healthy) unless-stopped
GET :8080/health → 200
GET :3000/ → 200
DB preserved: 407 workspace rows + 5 named personas
Persona mount: 28 dirs at /etc/molecule-bootstrap/personas
```
## Stage B — N/A
This is local-dev infrastructure only. None of these files ship to
SaaS tenants — production EC2s use `Dockerfile.tenant` + `ec2.go`
user-data, not docker-compose.
## Out of scope
- The decorative-but-broken `wget --spider` healthcheck has presumably
also been silently 404'ing on prod tenants. Ship a follow-up to
audit + fix the prod path; not done here to keep the PR scoped.
- Docker Desktop "Start at login" is a per-machine GUI setting that
must be toggled manually (Settings → General).
- The legacy heartbeat-all.sh that pinged 5 persona workspaces from
the host has been deleted (~/.molecule-ai/heartbeat-all.sh).
Per Hongming: each workspace is responsible for its own heartbeat.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The "Open issue on failure" step was failing on every canary run
because Gitea 1.22.6 doesn't expose /api/v1/actions endpoints
(per memory reference_gitea_actions_log_fetch). The threshold check
called github.rest.actions.listWorkflowRuns() to count consecutive
prior failures and gate issue creation behind 3 reds — that call
ALWAYS 404'd on Gitea, breaking the entire alerting step.
Net effect: the canary's own self-alerting was broken, so the
underlying staging regression went unflagged for 38h+
(2026-05-07 02:30 UTC → 2026-05-08 17:34 UTC, every cron tick red,
zero issues filed).
Fix: drop the consecutive-failures threshold entirely. File a
sticky issue on the FIRST failure; comment-on-existing handles
deduplication for subsequent failures. The auto-close-on-success
step is unchanged.
Why not a Gitea-compatible threshold (e.g., walk recent commit
statuses): comment-on-existing already gives ops a single
accumulating issue per regression streak. The threshold's purpose
was to avoid spamming on transient flakes — but with sticky issue
+ auto-close-on-green, transient flakes get one issue + one quick
close, which is fine signal. Filing on first failure is also
better UX: catches the regression in 30 min instead of 90 min.
Also: rewrote runURL from hardcoded https://github.com/... to
context.serverUrl so the link actually points at Gitea
(https://git.moleculesai.app) — was always broken on Gitea but
nobody noticed because the issue-filing step itself was broken.
Net: 21 insertions, 40 deletions. Removes WORKFLOW_PATH +
CONSECUTIVE_THRESHOLD env vars (no longer needed).
Tracked in: molecule-core#129 (failure mode 3 of 3)
Verification: yaml syntax-valid; no remaining github.rest.actions.*
calls; only github.rest.issues.* (all Gitea-supported per
memory feedback_persona_token_v2_scope).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes core#242 LOCAL surface. The PROD surface (CP user-data fetching
persona env files into tenant EC2's /etc/molecule-bootstrap/personas
via Secrets Manager) is filed as a follow-up.
WHAT THIS ADDS
Bind-mount on the platform service in docker-compose.yml:
${MOLECULE_PERSONA_ROOT_HOST:-${HOME}/.molecule-ai/personas}
→ /etc/molecule-bootstrap/personas (read-only)
Default source = ${HOME}/.molecule-ai/personas (the operator-host-mirrored
local dir populated by today's persona rotation work). Override via
MOLECULE_PERSONA_ROOT_HOST when running on a machine with a different
layout (CI runners, etc.).
WHY READ-ONLY
workspace-server only reads persona env files; never writes back. The
read-only mount enforces that contract — a hostile plugin install path
can't tamper with the persona credentials it's about to consume.
WHY THIS PATH MATCHES PROD
/etc/molecule-bootstrap/personas is the same in-container path the
prod tenant EC2 will use. Same code path (org_import.go::loadPersonaEnvFile)
reads the same file regardless of mode — local-dev parity with prod
per feedback_local_must_mimic_production.
STAGE A VERIFICATION
- docker compose config: resolves to /Users/hongming/.molecule-ai/personas
correctly (28 persona dirs visible at source path)
- Persona env file shape verified: dev-lead's env contains GITEA_USER,
GITEA_USER_EMAIL, GITEA_TOKEN_SCOPES, GITEA_SSH_KEY_PATH,
MODEL_PROVIDER=claude-code, MODEL=opus (lead tier matches Hongming's
2026-05-08 mapping)
- Full handler test suite green (TestLoadPersonaEnvFile_HappyPath +
7 sibling tests pass; rejection tests still catch path traversal)
- Build clean
STAGE B SKIPPED (with justification per § Skip conditions)
This change is config-only (docker-compose.yml volume addition). The
prod tenant EC2s do NOT use docker-compose.yml — they use CP user-data
+ ec2.go's docker run script. So this PR has no prod blast radius.
Stage B (staging tenant probe) would be checking 'is the platform
using the new compose mount' on a SaaS tenant — and SaaS tenants
don't run docker compose. The actual prod-surface change is the
follow-up issue.
PROD SURFACE — FOLLOW-UP FILED
Tenant EC2 user-data needs to fetch persona env files from operator
host (or AWS Secrets Manager per the established
feedback_unified_credentials_file pattern) and stage them at
/etc/molecule-bootstrap/personas inside the workspace-server container.
Touches molecule-controlplane/internal/provisioner/ec2.go user-data.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes core#115 partial. Schema-only change; the apply-endpoint filter
logic that reads this column lands with core#123 (drift detector +
queue + apply endpoint, the deferred follow-up of core#113).
Default 'production' so existing customers (Reno-Stars + any future
tenant) are default-safe. Synthetic dogfooding workspaces opt INTO
'canary' explicitly.
CHECK constraint pins the closed value set ('canary' | 'production') —
the apply endpoint's filter relies on the database to reject anything
else, so a future operator typo in PATCH /workspaces/:id ({update_tier:
'canery'}) returns a constraint violation, not silent fan-out to
nobody.
Partial index on canary rows since the apply-endpoint query path
('apply this update only to canary tier first') hits canary much more
often than production, and the production set is the much larger
default.
WHAT THIS DOES NOT DO (lands with core#123)
- PATCH endpoint to flip a workspace to canary
- The apply endpoint that consults the column
- Tests that exercise canary-vs-production fan-out
Schema-only foundation; same pattern as core#113 (workspace_plugins).
PHASE 4 SELF-REVIEW
Correctness: No finding — IF NOT EXISTS guards, DEFAULT clause means
existing rows get 'production' on migration apply.
Readability: No finding — comment block documents the tier semantics
+ the deferral to core#123.
Architecture: No finding — additive ALTER, partial index for the
expected access pattern.
Security: No finding — no code path; column constraint reduces blast
radius of bad PATCH input.
Performance: No finding — partial index minimizes write amplification
on the production-default rows.
REFS
core#115 — this issue
core#123 — apply endpoint follow-up (will exercise this column)
core#113 — version subscription DB foundation (sibling pattern)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes core#113 partial. Adds the DB foundation for the
version-subscription model. Drift detection + queue + admin apply
endpoint are follow-up scope (separate PR; filed as a new issue).
WHY THIS PR ONLY GETS US PART-WAY
Plugin install state today is filesystem-only — '/configs/plugins/<name>/'
inside the container. There's no DB record of 'plugin X installed at
workspace W from source S, tracking ref T'. That makes drift detection
impossible: nothing to compare upstream tags against.
This PR adds the table + the install-endpoint hook that writes to it.
With baseline tags now on every plugin (post internal#92), the table
starts collecting tracked-ref values immediately on the next install.
The actual drift-check job + queue + apply endpoint layer on top.
WHAT THIS ADDS
workspace_plugins table:
workspace_id FK → workspaces(id) ON DELETE CASCADE
plugin_name canonical name from plugin.yaml
source_raw full source URL the install used
tracked_ref 'none' | 'tag:vX.Y.Z' | 'tag:latest' | 'sha:<full>'
installed_at, updated_at
installRequest gains optional 'track' field (defaults to 'none').
Install handler upserts the workspace_plugins row after delivery
succeeds. DB write failure is logged but doesn't fail the install
(the plugin IS in the container; surfacing 500 misleads the caller).
validateTrackedRef enforces the closed set of accepted shapes:
'none' | 'tag:<non-empty>' | 'sha:<non-empty>'
Bare values like 'latest' / 'main' / version-strings without
prefix are rejected — the drift detector keys on prefix to know
what kind of resolution to do.
WHAT THIS DOES NOT ADD (filed separately)
- Drift detector job (cron / on-demand) that scans
'WHERE tracked_ref != none' rows and queues updates on upstream drift
- plugin_update_queue table (separate migration once detector lands)
- GET /admin/plugin-updates-pending and POST .../apply endpoints
- Tier-aware apply (core#115 — composes here)
PHASE 4 SELF-REVIEW (FIVE-AXIS)
Correctness: No finding — install endpoint behavior unchanged for
callers that don't pass 'track'. DB write is best-effort + logged
on failure. validateTrackedRef rejects ambiguous bare strings.
Readability: No finding — separate file plugins_tracking.go isolates
the new concern; install handler delta is a single 4-line block.
Architecture: No finding — additive table; existing schema untouched.
Migration 20260508160000_* uses the timestamp-prefixed convention.
Security: No finding — INSERT params via placeholders (no string
interpolation). validateTrackedRef rejects unexpected shapes before
the column constraint would.
Performance: No finding — one extra ExecContext per install. Install
is already seconds-scale (network fetch + tar + docker exec); rounds
to noise.
TESTS (1 new, all green)
TestValidateTrackedRef — pin closed set + structural validators
REFS
core#113 — this issue (foundation only; drift+queue+apply = follow-up)
internal#92, internal#93 — plugin/template baseline tags (now exists for tracking)
core#114 — atomic install (this PR composes — no atomicity regression)
core#115 — canary tier filter (will key off the same DB foundation)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes molecule-core#112. Composes with #114 (atomic install).
Before issuing restartFunc, classify the diff between staged and live:
- skill-content-only: only **/SKILL.md content changed
→ skip restart (Claude Code re-reads SKILL.md on
each Skill invocation; no in-memory cache)
- cold: anything else
→ restartFunc as before
(hooks/settings load at session start;
plugin.yaml is structural; added/removed files
require a fresh load)
DETECTION
- Hash every regular file in staged tree (host filesystem, sha256)
- Hash every regular file in live tree (in-container via docker exec
sh -c 'cd <livePath> && find . -type f -print0 | xargs -0 sha256sum')
- .complete marker dropped from comparison (mtime varies install-to-
install; including it would force-cold every reinstall)
- File added/removed → cold
- File content differs but isn't SKILL.md → cold
- All differences are SKILL.md basenames → skill-content-only
DEFAULTS COLD
- First install (no live tree) → cold
- Live tree read failure → cold (conservative; never hot-reload speculatively)
- Symlinks skipped during hash (same posture as tar walker)
PHASE 4 SELF-REVIEW
Correctness: No finding — all error paths default to cold; never
falsely classify as skill-content-only. The .complete drop is
a deliberate exception (the marker is bookkeeping, not content).
Readability: No finding — single-purpose helpers (hashLocalTree,
hashContainerTree, isSkillMarkdown, shQuote) each do one thing.
The classifier itself reads as 'compare set, then walk diff with
isSkillMarkdown gate.'
Architecture: No finding — composes existing execAsRoot primitive;
new helpers in plugins_classifier.go don't touch any other
handler. Old behavior unchanged when live read fails.
Security: No finding — shQuote single-quotes any non-trivial path,
pluginName comes from validatePluginName-validated source, and
the docker exec command takes the path as a single arg (xargs -0
handles binary-safe path delimiting). Symlinks skipped.
Performance: No finding — adds two tree walks (host + container)
per install. Container walk is one docker exec call returning
sha256 lines; for typical plugins (~10-50 files) round-trip is
~100ms. Versus the saved ~5-10s of restart on a hot-reloadable
update, this is a clear win.
TESTS (4 new, all green; full handler suite green)
TestIsSkillMarkdown — basename match, case-sensitive
TestHashLocalTree_StableHash — re-hash same dir = same map
TestHashLocalTree_SymlinkSkipped — hostile link doesn't poison classifier
TestShQuote — quoting boundary for shell injection safety
REFS
molecule-core#112 — this issue
molecule-core#114 — atomic install (.complete marker added there)
Reno-Stars iteration safety (Hongming 2026-05-08)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes molecule-core#114 for the docker (local-OSS) path.
EIC (SaaS) path tracked as a follow-up — same shape, different
exec primitives (ssh vs docker exec); shipping both in one PR
doubles the test surface.
THE FOUR-STEP DANCE
1. STAGE — docker.CopyToContainer extracts tar into
/configs/plugins/.staging/<name>.<ts>/
2. SNAPSHOT — if /configs/plugins/<name>/ exists, mv to
/configs/plugins/.previous/<name>.<ts>/
3. SWAP — atomic mv staging → live (single rename(2))
4. MARKER — touch /configs/plugins/<name>/.complete
Workspace-side plugin loaders should refuse to load any plugin dir
without .complete (separate small change, not in this PR — the marker
write is the necessary precursor; consumer side is a follow-up so
existing-content plugins don't break before they're re-installed).
ROLLBACK
- Stage failure: rm -rf staging dir; live untouched
- Snapshot failure: rm -rf staging dir; live untouched (no rename happened)
- Swap failure with snapshot present: mv previous back to live
- Swap failure (no snapshot): rm -rf staging; live (which never
existed) stays absent
- Marker failure: content already in place, log loudly with manual
recovery hint (touch <plugin>/.complete) — don't roll back since
the new content is what we wanted, just unmarked
GC
Best-effort delete of previous-version snapshot after successful
marker write. Failures non-fatal — next install or a separate
sweeper reclaims. Sweeper for stale .previous/* across reboots is
follow-up scope.
CONCURRENCY
Each install gets a unique stamp (UTC second precision), so two
concurrent reinstalls land in distinct staging dirs and the second
swap simply overwrites the first's live result. The atomicity is
per-install, not cross-install — by design (the platform serializes
POST /workspaces/:id/plugins via Go-side semaphore upstream of
this code, so cross-install collisions don't reach here).
CHANGES
+ plugins_atomic.go — installVersion + atomicCopyToContainer
+ plugins_atomic_tar.go — tarWalk/tarHostDirWithPrefix helpers
+ plugins_atomic_test.go — 5 unit tests (paths, stamp shape,
tar happy path, symlink-skip, prefix
normalization). All green.
~ plugins_install_pipeline.go::deliverToContainer — swap
copyPluginToContainer call to atomicCopyToContainer
Old copyPluginToContainer is retained (still called by Download()) so
this PR is purely additive on the install path; no public API change.
PHASE 4 SELF-REVIEW (FIVE-AXIS)
Correctness: Required (addressed) — swap-failure rollback writes mv
of previous back to live before returning the error; if rollback
itself fails, we wrap both errors and surface the combined fault.
Marker-write failure is treated as content-landed-but-unmarked
(LOG, don't roll back the new content).
Readability: No finding — installVersion path methods make the
/staging/.previous/live/marker layout obvious from one struct.
tarWalk extracted from the inline filepath.Walk in
plugins_install_pipeline.go for testability.
Architecture: No finding — atomicCopyToContainer composes existing
execAsRoot / docker.CopyToContainer primitives; no new dependencies.
Old copyPluginToContainer kept for Download() — single responsibility
per function.
Security: No finding — symlinks still skipped during tar walk
(defense vs hostile plugin escaping its own dir). Marker writes
use composeable path.Join, no user input touches the path.
Performance: No finding — adds ~3 docker exec calls per install
(mkdir, mv-snapshot, mv-swap, touch — actually 4) on top of the
one CopyToContainer. Each exec ~50-100ms in practice; install
end-to-end was already seconds-scale, this rounds to noise.
REFS
molecule-core#114 — this issue
Companion: molecule-core#112 (hot-reload classifier — depends on .complete marker)
Companion: molecule-core#113 (version subscription — uses install machinery)
EIC follow-up: separate issue to be filed for SaaS path parity
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes core#116. Brings local-dev iteration parity with the canvas's
Turbopack HMR — edit a Go file, see the platform restart in <5s
instead of running 'docker compose up --build' (~30s) per change.
USAGE
make dev # docker compose with air-driven live reload
make up # production-shape stack (no air, normal Dockerfile)
WHAT THIS ADDS
workspace-server/.air.toml — air watch config
workspace-server/Dockerfile.dev — air-on-golang:1.25-alpine, dev-only
docker-compose.dev.yml — overlay swapping platform service
to Dockerfile.dev + bind-mounting
workspace-server/ source
Makefile — make {dev,up,down,logs,build,test}
WHAT THIS DOES NOT TOUCH
workspace-server/Dockerfile (production multi-stage build)
docker-compose.yml (prod-shape stack)
CI workflows (build prod image directly)
Tenant deployment / SaaS (image swap stays the model)
Pure additive. Existing 'docker compose up' path unchanged; production
stays on the static binary. Air install pinned via go install at image
build time so the dev image is reproducible-enough for local use (we
don't pin air to a SHA — the dev image is rebuilt locally and updates
opportunistically).
PHASE 4 SELF-REVIEW (FIVE-AXIS)
Correctness: No finding — additive change, no existing path modified.
.air.toml watches .go + .yaml under workspace-server/, excludes
_test.go and tests dir so test edits don't trigger rebuild.
Dockerfile.dev mirrors prod's 'go mod download' so first rebuild
is fast.
Readability: No finding — three small files plus a Makefile, each
with header comments explaining the WHY, not just the WHAT. The
Makefile uses the standard ## help-target pattern.
Architecture: No finding — overlay pattern (docker-compose.dev.yml
on top of docker-compose.yml) is the standard compose convention
for env-specific overrides. Doesn't fork the prod path.
Security: No finding because no production code path; dev-only image
isn't built in CI and isn't published to ECR.
Performance: No finding — air debounce=500ms, exclude_unchanged=true
so a save that doesn't change content is a no-op rebuild.
REFS
core#116 — this issue
Companion: core#117 (workspace-side config-watcher for hot-reload of
config.yaml) — different scope; this issue is platform-only.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
TestStartSweeper_TransientErrorDoesNotCrashLoop leaks an in-flight
metric write across the test boundary: cycleDone fires inside the
fake's Sweep defer (before Sweep returns), waitForCycle returns
immediately after, cancel() lands, but the goroutine still has
metrics.PendingUploadsSweepError() to execute. Whether that write
happens before or after the next test's metricDelta() baseline read
is a coin-flip on slow CI hosts.
Outcome: TestStartSweeper_RecordsMetricsOnSuccess fails with
"error counter delta = 1, want 0" — looks like a real bug, isn't.
Instrumented analysis (per the file's existing waitForMetricDelta
docstring covering the same shape) confirms the metric IS getting
recorded, just AFTER the next test reads its baseline.
The Records* tests already use waitForMetricDelta to close this race
on their own assertions. This change extends the same shape to
TransientErrorDoesNotCrashLoop so it doesn't poison subsequent tests'
baselines.
Verified by running `go test -race -count=20 ./internal/pendinguploads/...`
locally — passes deterministically.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Trunk-based migration final cleanup for molecule-core. The 6 workflows
deleted here all existed to manage the staging↔main branch dance that
trunk-based makes obsolete:
- auto-promote-staging.yml fast-forward staging→main on green
- auto-promote-on-e2e.yml alt promote path on E2E green
- auto-promote-stale-alarm.yml alarm if staging promotion stalls
- auto-sync-main-to-staging.yml sync main→staging after UI merges
- auto-sync-canary.yml dry-run probe of the auto-sync
token+push path
- retarget-main-to-staging.yml rebase open PRs onto staging
After Phase 3A (PR #108 promoted 5 staging-only feature PRs to main)
and Phase 3B (PR #109 dropped staging-branch triggers from the 4 e2e
workflows), main is the only branch the CI cares about. None of the
above workflows have anything to do; they're 1977 lines of dead Go-time-
no-Gitea-time-yes code.
Rollback: `git revert` this commit to restore the workflows. They still
work mechanically; trunk-based just doesn't need them.
The `staging` branch on the remote is deleted in a follow-up step
(`git push origin --delete staging`) after this PR merges, so reviewers
can confirm CI runs cleanly on the new shape before the ref disappears.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wires the 28 dev-tree persona credentials minted 2026-05-08 into the
workspace-secrets path used by org_import. When a workspace.yaml carries
`role: <name>`, the importer now reads
$MOLECULE_PERSONA_ROOT/<role>/env (default
/etc/molecule-bootstrap/personas/<role>/env, populated by the bootstrap
kit on the tenant host) and merges the role's GITEA_USER /
GITEA_TOKEN / GITEA_TOKEN_SCOPES / GITEA_USER_EMAIL /
GITEA_SSH_KEY_PATH into the same envVars map that already feeds
workspace_secrets via parseEnvFile + crypto.Encrypt + INSERT.
PRECEDENCE
Persona env is the LOWEST layer:
0. Persona env (per-role)
1. Org root .env (shared)
2. Workspace .env (per-workspace)
Each later layer overrides the previous, so a workspace .env can
pin a different GITEA_TOKEN if it ever needs to (testing, override).
WHY THIS LAYERING
Workspaces should boot with the role's identity by default. .env
files stay the explicit-override mechanism for the (rare) case where
a workspace needs to deviate. No new behavior for workspaces with no
role: persona load is silent no-op when ws.Role is empty or unsafe.
SECURITY
isSafeRoleName accepts only [A-Za-z0-9_-]+ (no '..', '/', or
separators) — admin-only construct, but defense-in-depth keeps the
persona dir shape invariant. Test
TestLoadPersonaEnvFile_RejectsTraversal pins the rejection set against
a planted target file.
OPERATOR-HOST CONTRACT
The 28 persona env files live at /etc/molecule-bootstrap/personas/<role>/env
(mode 600, owner root:root) with the per-role token-scope tailoring
Hongming approved 2026-05-08 (D5). Synced via task #241. Override via
MOLECULE_PERSONA_ROOT for tests + non-prod hosts.
TESTS (7 new, all green)
TestLoadPersonaEnvFile_HappyPath — typical persona-env shape
TestLoadPersonaEnvFile_MissingDir — silent no-op when file absent
TestLoadPersonaEnvFile_EmptyRole — silent no-op when role empty
TestLoadPersonaEnvFile_RejectsTraversal — planted file unreachable
via '../../etc/passwd' etc.
TestLoadPersonaEnvFile_DefaultRoot — falls back to /etc/...
TestLoadPersonaEnvFile_OverwritesEmptyMap
TestIsSafeRoleName_Acceptance — positive + negative role names
PHASE 4 SELF-REVIEW (FIVE-AXIS)
Correctness: No finding — additive change, silent no-op on the ws.Role==''
path covers every existing workspace; tests cover happy path + each
rejection mode + missing-dir.
Readability: No finding — helper sits next to parseEnvFile in
org_helpers.go with a comment block explaining WHY persona is
lowest precedence.
Architecture: No finding — fits the existing 'merge .env into envVars
then INSERT INTO workspace_secrets' pattern that's been in place
since the .env-driven workspace secrets feature; no new dependencies,
no new tables.
Security: Required (addressed) — path traversal blocked by
isSafeRoleName. No finding beyond that since persona files are
admin-managed and the helper does not log token values.
Performance: No finding — one extra os.ReadFile per workspace at
import time; amortized over workspace lifetime, cost is negligible.
REFS
internal#85 — RFC for SOP Phase 4 + structured Five-Axis (parent context)
Saved memories: feedback_per_agent_gitea_identity_default,
feedback_unified_credentials_file
Task #241 — operator-host sync (already DONE; populated 28 dirs)
Task #242 — this PR
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Harness Replays job failed at "dependency failed to start: container
harness-tenant-alpha-1 is unhealthy" — that is not caused by this
merge (which adds workspace-server/internal/handlers code, not
container infra). Retry to confirm it was a transient environmental
issue (likely operator-host load/disk per internal#78).
Trunk-based migration: main is the only branch. Update 4 workflows
that fired on staging-branch pushes to fire on main instead.
- e2e-staging-canvas.yml: drop staging from push + pull_request
- e2e-staging-external.yml: drop staging from push + pull_request
- e2e-staging-saas.yml: drop staging from push + pull_request,
update header comment that references the (now-obsolete)
staging→main auto-promote flow
- redeploy-tenants-on-staging.yml: workflow_run.branches changes
from [staging] to [main] so the tenant redeploy fires when
publish-workspace-server-image runs on main
Workflows that target the staging tenant FLEET (canary-staging.yml,
e2e-staging-sanity.yml) are not changed — they fire on cron, the word
"staging" in their filenames refers to the deployment target environ-
ment, not the git branch.
Lands as Phase 3b after #108 promotes the 5 staging-only feature PRs
(Phase 3a). Phase 3c deletes the obsolete promote/sync workflows
(auto-promote-staging, auto-sync-main-to-staging, etc.) plus the
staging branch itself, after we no-op-verify both Phase 3a and 3b
green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This was supposed to fast-forward when each PR merged on staging,
but auto-promote-staging.yml has not been firing reliably on Gitea
since the GitHub suspension. Result: main is missing 5 substantive
feature PRs that landed on staging between 2026-04-29 and 2026-05-07:
- #102: test(org-include) symlink-based subtree composition contract
- #103: test(local-e2e) dev-department extraction end-to-end
- #104: fix(provisioner)+test EvalSymlinks templatePath; stage-2 e2e
- #105: feat(org-import) !external cross-repo subtree resolver (#222)
- #106: test(org-external) integration + e2e for !external resolver
Each PR was independently reviewed and CI-green at staging-merge time;
this commit promotes the merged state atomically. Use git log on main
after the merge to see the original PR-merge commits preserved.
Sister work: Phase 3 of internal#81 (trunk-based migration). Workflow
trigger updates land in a follow-up PR; staging-branch deletion happens
after a no-op verification deploy.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Five-Axis self-review pass on the !external resolver work (PRs #105+#106) caught three real issues that the unstructured 3-weakest review missed:
1. Cache validity gap — partial cache writes looked complete
2. Token persistence — token in URL userinfo got persisted to .git/config
3. Misleading function name post-refactor
This PR fixes all three:
- .complete marker file written atomically; wipe-and-refetch on partial cache
- Token via -c http.extraHeader, never embedded in URL
- Defense-in-depth ref .. deny (was already validated by repoSafeRefRegex but explicit + tested)
- Renamed buildCloneURL -> buildExternalCloneURL (collision with artifacts.go), rewriteFilesDirAndIncludes -> rewriteFilesDir
- Removed unused redactToken/shortHash helpers and crypto/sha1, encoding/hex, fmt imports
Approved by platform-engineer 2026-05-08T12:55Z.
Self-review of molecule-core PR #105 + #106 (the !external resolver
chain) surfaced 3 real correctness/security gaps and 2 readability
nits. Fixes all four in one PR since they're the same file's hardening.
(1) TOKEN LEAKAGE — fixed
Before: gitFetcher built clone URLs with auth in userinfo
(https://oauth2:TOKEN@host/repo.git). Two leak paths:
a. Token persisted in cloned repo's .git/config
b. Token could appear in clone error output captured via
cmd.CombinedOutput()
After: clone URL has no userinfo (https://host/repo.git). Auth is
layered on via -c http.extraHeader=Authorization: token ...
which sends the header per-request without persisting. Plus a
redactToken() pass over any error string before it surfaces in
fmt.Errorf, as belt-and-braces.
Tradeoff: token now visible in 'ps aux' for the duration of the
git child process (same as before via env var), but no longer
in any persistent state.
(2) CACHE-VALIDITY FOOTGUN — fixed
Before: cache-hit was 'cacheDir/.git exists'. A clone interrupted
after .git was created but before content finished writing would
leave a partially-written cache that subsequent imports treated
as hit, returning stale/incomplete content forever (no self-heal).
After: cache-hit also requires a .complete marker file written
only AFTER successful clone+rename. Partially-written cache is
treated as cache-miss and re-fetched cleanly (after RemoveAll
on the partial dir to avoid blocking the new clone's mkdir).
(3) REF '..' DENY — fixed
Before: safeRefPattern '^[a-zA-Z0-9_./-]+$' allowed '..' as a
substring. Git itself rejects most refs containing '..', but
defense-in-depth says don't depend on the downstream tool's
validation when sanitizing input at the boundary.
After: explicit strings.Contains(ref.Ref, '..') check.
(4) NAMING CLEANUP — fixed
Before: rewriteFilesDirAndIncludes() — name claims to rewrite
!include scalars but doesn't (we removed that during PR-A
development; double-prefix bug). Misleading for readers.
After: rewriteFilesDir(). Docstring updated to explicitly explain
why !include paths are NOT rewritten (relative to subDir, naturally
inside cache).
Also: removed unused buildAuthedURL() (replaced by
buildExternalCloneURL + authConfigArgs split), removed unused
shortHash() helper (replaced by os.MkdirTemp), removed unused
crypto/sha1 + encoding/hex + fmt imports, removed stray
'_ = fmt.Sprint' line in integration test.
NEW TESTS
- TestGitFetcher_RejectsRefWithDoubleDot (defense-in-depth on ref input)
- TestGitFetcher_CacheValidatedByCompleteMarker (partial cache → re-fetch)
VERIFIED LOCALLY 2026-05-08
Full ./internal/handlers/ suite: ok (7.8s, 14 external-resolver tests
+ all existing tests). Two new tests cover the two new behaviors.
Refs:
internal#77 — extraction RFC
molecule-core#105 (resolver), #106 (tests) — original implementation
Hongming code-review-and-quality skill invocation 2026-05-08 + 'fix all'
PR-B (local bare-git integration, task #233):
workspace-server/internal/handlers/org_external_integration_test.go
Three tests using git's GIT_CONFIG_COUNT/KEY/VALUE env-var-injected
insteadOf URL rewrite — process-scoped, no ~/.gitconfig pollution:
- TestGitFetcher_RealClone_LocalRedirect: full resolver chain end-to-
end with REAL git clone against a local bare-repo, asserts cache
population + content materialization + path rewrite + cache-hit on
second invocation.
- TestGitFetcher_RealClone_BadRefFails: nonexistent ref surfaces
git's error cleanly through the ls-remote step.
- TestGitFetcher_DirectFetch_CacheHit: gitFetcher.Fetch direct
invocation (no resolver wrapping); verifies cache-hit returns
same dir + same SHA, no clobber.
Production code untouched — insteadOf rewrite makes the production
gitFetcher think it's cloning from Gitea, but git rewrites at clone
time to file://<barePath>. Tests the real shell-out + parsing.
PR-C (live Gitea e2e, task #234):
workspace-server/internal/handlers/local_e2e_dev_dept_test.go
TestLocalE2E_ExternalDevDepartment — minimal parent template that
uses !external against the LIVE molecule-ai/molecule-dev-department
repo. No symlink, no /tmp/local-e2e-deploy fixture. Composition
resolves over network at import time.
Asserts:
- 28+ dev-tree workspaces resolve through the fetched cache
(matches the count from TestLocalE2E_DevDepartmentExtraction)
- Q1 placement: 'Documentation Specialist' present (under app-lead)
- Q2 placement: 'Triage Operator' present (under dev-lead)
- Every workspace's files_dir is cache-prefixed (proves rewrite ran)
- Every workspace's resolveInsideRoot+Stat succeeds
(would fail provisioning if not)
Skipped if Gitea unreachable (TCP probe to git.moleculesai.app:443)
or git binary absent — won't false-fail offline runners.
VERIFIED LOCALLY 2026-05-08:
--- PASS: TestGitFetcher_RealClone_LocalRedirect (0.26s)
--- PASS: TestGitFetcher_RealClone_BadRefFails (0.15s)
--- PASS: TestGitFetcher_DirectFetch_CacheHit (0.23s)
--- PASS: TestLocalE2E_ExternalDevDepartment (0.55s)
workspaces resolved through !external: 28
Full ./internal/handlers/ test suite: ok (no regressions)
Together with PR-A's unit tests (#105), the !external resolver is now
covered at three layers:
- unit (fakeFetcher injection): allowlist, validation, path rewrite
- integration (real git, local bare-repo): clone, cache, ls-remote
- e2e (real git, live Gitea, live dev-department): full chain
Refs:
internal#77 — extraction RFC (Phase 3a phasing in comment 1995)
task #233 (PR-B), task #234 (PR-C)
Hongming GO 2026-05-08 ('do PR-B/C/D')
Adds gitops-style cross-repo subtree composition to the platform's
org-template importer. Replaces (eventually) the operator-side
filesystem symlink approach shipped in PR #5.
DESIGN
See internal#77 comment 1995 for the full design doc + decision points
agreed with Hongming 2026-05-08.
Schema: a `!external`-tagged mapping anywhere a workspace entry is
allowed (workspaces:, roots:, children:):
- !external
repo: molecule-ai/molecule-dev-department
ref: main
path: dev-lead/workspace.yaml
url: git.moleculesai.app # optional; default = MOLECULE_EXTERNAL_GITEA_URL or git.moleculesai.app
At resolve time the platform fetches the repo at ref into a content-
addressable cache under <orgBaseDir>/.external-cache/<repo>/<sha>/,
loads <cacheDir>/<path>, recursively resolves nested !include /
!external in the loaded subtree, then rewrites every files_dir scalar
in the fully-resolved subtree to be cache-prefixed. Downstream
pipeline (resolveInsideRoot, plugin merge, CopyTemplateToContainer)
sees ordinary in-tree paths.
IMPLEMENTATION
- org_external.go: ExternalRef type, fetcher interface (gitFetcher
production + injectable for tests), resolveExternalMapping resolver,
rewriteFilesDirAndIncludes path-rewrite walker, allowlistedHostPath
+ safeRefPattern + safeRepoCacheDir validation helpers.
- org_include.go: 4-line hook in expandNode dispatching MappingNode
with Tag=="!external" to resolveExternalMapping.
- org_external_test.go: 8 unit tests with fakeFetcher injection
(no network):
* happy path (top + nested workspace files_dir cache-prefixed)
* allowlist rejection (github.com/foo/bar)
* path-traversal rejection (../../etc/passwd)
* malformed ref rejection ("main; rm -rf /")
* missing required fields (repo / ref / path)
* rewriteFilesDirAndIncludes basic + idempotent
* allowlistedHostPath env-override + glob
Path rewrite ONLY rewrites files_dir scalars. !include scalars are
NOT rewritten — they resolve relative to their containing file's
directory, which post-fetch is naturally inside the cache, so
relative !includes Just Work without modification.
ALLOWLIST + AUTH
- Default allowlist: git.moleculesai.app/molecule-ai/.
- Override: MOLECULE_EXTERNAL_REPO_ALLOWLIST (comma-separated
prefixes; trailing /* or / supported).
- Auth: MOLECULE_GITEA_TOKEN env var injected into clone URL.
Optional — falls back to unauthenticated for public repos.
- Reject: malformed refs, path-traversal, non-allowlisted hosts.
CACHE
- Location: <orgBaseDir>/.external-cache/<safe-repo>/<sha>/.
Operators add to .gitignore.
- Content-addressable: same (repo, sha) reuses cache, no overwrite.
- Atomic clone via tmp-then-rename.
- Concurrency: race-tolerant — last-writer-wins on same SHA.
GC out of scope for v1 (filed as parked follow-up).
SECURITY (per SOP Phase 2)
Untrusted yaml input — all validated:
repo: allowlist (default molecule-ai/* on Gitea host)
ref: ^[a-zA-Z0-9_./-]+$ regex (rejects shell injection)
path: relative-and-down-only (rejects ../escape)
Auth: read-only token scoped to allowed orgs.
Recursion: maxExternalDepth=4 (vs maxIncludeDepth=16) to limit
network fan-out cost.
Cache poisoning: per-(repo, sha) content-addressable; can't poison
across SHAs.
Trust boundary: cloned content treated identically to a sibling-
cloned subtree (same model as current symlink approach).
VERSIONING / BACKWARDS COMPAT
Pure additive. Existing !include and inline workspaces unchanged.
Existing dev-lead symlink (parent template PR #5) keeps working.
Migration of parent template to !external is a separate PR-D.
No DB schema change. No public API change.
VERIFIED LOCALLY
go test ./internal/handlers/ → ok (5.2s, all 8 new tests + existing)
Stub fetcher injection lets unit tests cover the resolver +
path-rewrite logic without network. PR-B (follow-up) adds an
integration test against a local bare-git repo. PR-C adds the
real-Gitea e2e test against the live dev-department repo.
Refs:
internal#77 — extraction RFC (comment 1995 = Phase 1+2 design)
task #222 — this PR is Phase 3a (PR-A in the design's phasing)
Hongming GO 2026-05-08 ('go' on 4 decision points + design)
Two changes that fall out of one root cause discovered while preparing
the local platform spin-up for the dev-department extraction (internal#77):
PROBLEM
CopyTemplateToContainer's filepath.Walk is called with templatePath
set to the workspace's resolved files_dir. With the cross-repo
symlink composition shipped in PR #5 (parent template's
dev-lead → ../molecule-dev-department/dev-lead/), the Dev Lead
workspace's files_dir is literally 'dev-lead' — i.e. the symlink
itself, not a path THROUGH the symlink.
filepath.Walk does not descend into a symlink leaf — it Lstats the
root, sees a symlink (mode bit set, not a directory), emits exactly
one entry, and returns. Result: the workspace's /configs/ tar would
ship empty. Other 38 workspaces are fine because their files_dir
paths just TRAVERSE the symlink (path resolution handles intermediate
symlinks via Lstat traversal); only the leaf-is-symlink case breaks.
FIX
workspace-server/internal/provisioner/provisioner.go:
Call filepath.EvalSymlinks on templatePath before filepath.Walk.
Resolves the leaf-symlink case for ALL templates, not just dev-dept.
Security: templatePath has already passed resolveInsideRoot's
path-string check at the call site; the trust boundary is the
operator-side /org-templates/ filesystem layout, not this
resolution step.
TEST
workspace-server/internal/handlers/local_e2e_dev_dept_test.go:
New TestLocalE2E_FilesDirConsumption — stage-2 of the local e2e.
For every workspace in the resolved OrgTemplate, asserts:
1. resolveInsideRoot(orgBaseDir, ws.FilesDir) succeeds.
2. os.Stat on the result returns a directory.
3. filepath.Walk after EvalSymlinks (mirroring the platform fix)
emits at least one file.
4. At least one workspace marker exists (workspace.yaml,
system-prompt.md, or initial-prompt.md).
Exercises the SECOND half of POST /org/import that
TestLocalE2E_DevDepartmentExtraction (PR #103) didn't cover.
VERIFIED LOCALLY (2026-05-08, against post-extraction Gitea state):
--- PASS: TestLocalE2E_FilesDirConsumption (0.05s)
checked 39 workspaces with files_dir
All 39 walk paths emit non-empty file sets with valid workspace markers.
REGRESSION GUARD
Without the EvalSymlinks fix, this test fails on Dev Lead with:
files_dir 'dev-lead' at '/.../molecule-dev/dev-lead' is empty —
CopyTemplateToContainer would produce empty /configs/
Refs:
internal#77 — extraction RFC
molecule-core#102 (resolver symlink contract test)
molecule-core#103 (stage-1 e2e: include resolution)
Hongming GO 2026-05-08 ('go' on the 3 pre-spin-up optimizations)
Phase 4 (local-only) of internal#77 (dev-department extraction).
Adds TestLocalE2E_DevDepartmentExtraction that exercises the FULL platform
import path against the real molecule-ai-org-template-molecule-dev (post-slim)
and molecule-ai/molecule-dev-department (post-atomize) repos cloned as siblings
under /tmp/local-e2e-deploy/.
What it proves end-to-end:
- The dev-lead symlink at parent's template root is followed by
resolveYAMLIncludes (filepath.Abs/Rel-style security check passes,
os.ReadFile follows the link).
- Recursive !include chain through the symlinked subtree resolves:
parent's org.yaml → !include dev-lead/workspace.yaml (symlinked)
→ !include ./core-lead/workspace.yaml → !include ./core-be/workspace.yaml
(atomized children: paths, no '..').
- 39 workspaces enumerate after resolution: 5 PM-tree + 6 Marketing-tree
+ 28 dev-tree (Dev Lead + 5 sub-team leads + 18 leaf workspaces +
3 floaters + 1 triage-operator).
- Q1+Q2 placements verified by sentinel name check: 'Documentation
Specialist' is reachable (under app-lead via app-docs sub-team),
'Triage Operator' is reachable (direct child of Dev Lead).
Test skips with t.Skipf if the local-e2e fixture isn't present on the
host — won't block CI on hosts that haven't set it up. To set up locally:
TESTROOT=/tmp/local-e2e-deploy
mkdir -p $TESTROOT && cd $TESTROOT
git clone https://git.moleculesai.app/molecule-ai/molecule-ai-org-template-molecule-dev.git molecule-dev
git clone https://git.moleculesai.app/molecule-ai/molecule-dev-department.git
cd /Users/<you>/molecule-core/workspace-server
go test -v -run TestLocalE2E_DevDepartmentExtraction ./internal/handlers/
Verified locally 2026-05-08:
--- PASS: TestLocalE2E_DevDepartmentExtraction (0.01s)
total workspaces (recursive): 39
Refs:
internal#77 — extraction RFC
molecule-core PR #102 — symlink-resolution contract test
molecule-ai/molecule-dev-department PRs #1, #2, #3 (scaffold + extract + atomize)
molecule-ai/molecule-ai-org-template-molecule-dev PR #5 (parent slim + symlink wire)
Hongming GO 2026-05-08 ('lets not go for staging right now, we do local test first')
SOP Phase 4 (local) — task #226
Two new tests in workspace-server/internal/handlers/org_include_test.go:
- TestResolveYAMLIncludes_FollowsDirectorySymlink: parent template's
org.yaml `!include`s into a sibling-repo subtree via a relative
directory symlink. The resolver's filepath.Abs/Rel security check
operates on path strings (passes), and os.ReadFile follows the
symlink at OS layer (file content delivered). Recursive nested
`!include`s within the symlinked subtree resolve correctly because
filepath.Dir(absTarget) keeps the literal symlink path as currentDir.
- TestResolveYAMLIncludes_RejectsSymlinkEscapingRoot: companion test
pinning current behavior where a symlink target outside the parent
root is followed (resolveInsideRoot doesn't EvalSymlinks). Asserted
as 'should resolve' so future hardening (if filepath.EvalSymlinks
is added) flips the test red and forces a coordinated update to the
dev-department subtree-composition pattern.
Why now: internal#77 RFC (dev-department extraction) selects symlink-
based composition over a future platform-level external: ref. These
tests pin the contract before the operator-side symlink convention
gets shipped, so a refactor or hardening of the resolver can't
silently break the production org-import path.
No production code changes. Pure additive test coverage.
Refs: internal#77 (Phase 3b verification — task #223)
Class B Hongming-owned CICD red sweep, e2e-api leg. Same substrate
hazard as PR #98 (handlers-postgres-integration) — Gitea act_runner
configures `container.network: host` operator-wide, so:
* Two concurrent e2e-api runs both attempted to bind `-p 15432:5432`
and `-p 16379:6379` on the operator host. Verified in run a7/2727
on 2026-05-07: `docker: Error response from daemon: Conflict. The
container name "/molecule-ci-redis" is already in use by container
af10f438...` — exit 125, job fails before any test runs.
* Hardcoded container names `molecule-ci-postgres` / `-redis` plus
the leading `docker rm -f` step meant a second job's startup also
KILLED the first job's still-running services.
Fix shape (mirrors PR #98 bridge-net pattern, adapted because the
platform-server is a Go binary on the host, not a containerised step):
1. Per-run unique container names: `pg-e2e-api-${RUN_ID}-${RUN_ATTEMPT}`,
`redis-e2e-api-${RUN_ID}-${RUN_ATTEMPT}`. Unique even across reruns
of the same run_id.
2. Ephemeral host port per run via `-p 0:5432` / `-p 0:6379` and
`docker port` lookup, exported as `DATABASE_URL` / `REDIS_URL` to
`$GITHUB_ENV`. No fixed host-port → no collision.
3. `127.0.0.1` (NOT `localhost`) in URLs — IPv6 first-resolve flake
fixed in #92 stays fixed.
4. `if: always()` cleanup so containers don't leak when test steps
fail.
Issue #94 items #2 + #3 also addressed:
* Pre-pull `alpine:latest` (provisioner uses it for ephemeral
token-write containers in `internal/handlers/container_files.go`).
* Idempotent `docker network create molecule-monorepo-net` (the
provisioner attaches workspace containers via that bridge —
`internal/provisioner/provisioner.go::DefaultNetwork`).
Issue #94 item #1 (timeouts) NOT bumped — recent log evidence shows
postgres ready in 3s, redis in 1s, platform in 1s when they DO come
up. Timeouts are not the bottleneck on the current substrate.
NOT addressed here (out of scope, separate change required):
* `Run E2E API tests` step has been failing on `Status back online`
because the platform's langgraph workspace template image
(`ghcr.io/molecule-ai/workspace-template-langgraph:latest`)
returns 403 Forbidden post-2026-05-06 GitHub org suspension. That
is a template-registry resolution issue (ADR-002 / local-build
mode) and belongs in a workspace-server change, not this workflow
file. This PR fixes the parallel-collision class and the workflow
setup hygiene; the langgraph-403 failure will still surface on
runs after this lands until template resolution is fixed
separately.
Verified manually on operator host 2026-05-08: docker now hands out
ephemeral ports on `-p 0:5432`, two parallel runs land on different
ports, both reach pg_isready GREEN.
Closes#94 (items #2 and #3; item #1 documented as not-bottleneck;
langgraph-template-403 referenced for follow-up).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Switches from services: block to --network molecule-monorepo-net with unique per-run container names. Avoids port-5432 collision when parallel Handlers-Postgres jobs run on host-network act_runner. Approved by security-auditor.
Class B verification — second consecutive green run to demonstrate the
fix isn't flaky.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Class B Hongming-owned CICD red sweep. The Handlers Postgres Integration
workflow has been silently failing on staging push and PRs ever since
#92 fixed the IPv6 flake — the IPv6 fix correctly pinned 127.0.0.1, but
unmasked a deeper issue: with our act_runner global container.network=host
config, multiple concurrent runs of this workflow each tried to bind
0.0.0.0:5432 on the operator host. The first wins; subsequent postgres
service containers exit with `FATAL: could not create any TCP/IP sockets`
+ `Address in use`. Docker auto-removes them (act_runner sets
AutoRemove:true), so by the time `Apply migrations` runs `psql`, the
container is gone — Connection refused, then `failed to remove container:
No such container` at cleanup time.
Per-job container.network override is silently ignored by act_runner
(`--network and --net in the options will be ignored.`), so we sidestep
`services:` entirely. The job container still uses host-net (required
for cache server discovery on the operator's bridge IP). We launch a
sibling postgres on the existing molecule-monorepo-net bridge with a
unique name per run (run_id+run_attempt) and connect via the bridge IP
read from `docker inspect`.
Verified manually on operator host 2026-05-08: 2× postgres on host-net
collides, but on the bridge with unique names + different IPs, both
succeed and each is reachable from a host-net job container.
Adds:
- always()-cleanup step so containers don't leak on test failure
- Diagnostic dump now includes the postgres container's docker logs
- Runbook at docs/runbooks/ documenting the substrate behavior + the
pattern future workflows should adopt for any `services:`-shaped need.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Class A red sweep — 3 first-tests timing out at the 5000ms default on the
self-hosted Gitea Actions Docker runner across 4 unrelated PRs (#82, #81,
#54, #53). The PRs share zero canvas/ surface — same 3 tests, same
cold-start signature, same shape on every run.
Root cause: `npx vitest run --coverage` cold-start cost (v8 coverage
instrumentation init + JSDOM bootstrap + heavy @/components/* and @/lib/*
import + first React render) consumes 5-7 seconds for the first
synchronous test in a heavyweight test file. Empirically:
ActivityTab "renders all 7 filter options" 5230ms (FAIL)
CreateWorkspaceDialog "opens the dialog ..." 6453ms (FAIL)
ConfigTab.provider "PUTs the new provider on Save" 5605ms (FAIL)
vs subsequent tests in the same files at 100-1500ms each. The component
code is correct (e.g. ActivityTab.FILTERS has 7 entries matching the
test). 1407 tests pass locally with --coverage in 9-15s; CI runs at 200s
under the same flag — the gap is import/transform/environment overhead,
not test logic.
Fix: CI-conditional `testTimeout: process.env.CI ? 30000 : 5000` in
canvas/vitest.config.ts. Local-dev sensitivity to genuine waitFor races
preserved; CI gets ~5x headroom over the worst observed first-test
(6453ms). Same shape Vitest documents at
<https://vitest.dev/config/testtimeout> and
<https://vitest.dev/guide/coverage#profiling-test-performance>.
Verification:
- Local: 5x runs of the 3 failing test files, all 74 tests green
(process.env.CI unset → 5000ms applies).
- Local: 7s sleep probe FAILS at 5000ms default and PASSES under
CI=true → ternary takes effect as written.
- Local: full canvas suite under CI=true with --coverage:
"Test Files 98 passed (98) | Tests 1407 passed (1407)".
Closes#96.
Refs: #82, #81, #54, #53.
Hostile self-review (3 weakest spots):
1. 30000ms is a guess, not a measurement. Mitigation: vitest still
emits per-test duration; a real 25s+ test will surface as a
duration regression and we dial down.
2. Doesn't fix the Docker-runner-overhead root-root-cause. True. That
is a multi-week perf project. The right trade today is unblocking 4
PRs from this single class.
3. Local-default of 5000ms means a real 8s race that flies on CI's
30000ms could pass without local sensitivity. Mitigation: dev-time
waitFor races are caught at the per-test level; suite-level cold-
start is the only legitimate >5s case here.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Conflicted files in .github/workflows/ taken from main:
.github/workflows/ci.yml
.github/workflows/e2e-staging-canvas.yml
.github/workflows/retarget-main-to-staging.yml
Conflicts arose from main advancing through PR #66/#79/#89 (CI workflow rewrites)
while staging hadn't picked up the changes yet. Main is the source of truth for
CI workflows; staging is downstream.
Co-authored-by: Claude (orchestrator)
Closes#88. Bundles localhost→127.0.0.1 + 2 other Gitea-act_runner flakes per feedback_gitea_actions_migration_audit_pattern. Approved by security-auditor.
The previous configs:-based fix (87b971a2) didn't actually fix the DinD
issue — Compose v2 falls back to bind mounts for `configs:` when swarm
mode is not active, so the resulting runc invocation still tries to
mount /workspace/.../cf-proxy/nginx.conf from the OUTER host filesystem
that the act_runner-vs-host-docker socket-mount can't see. Same
"not a directory" error returned.
Switch to a thin Dockerfile (cf-proxy/Dockerfile) that COPYs nginx.conf
into nginx:1.27-alpine. The build context is uploaded to the daemon as
a tarball, not bind-mounted from the host filesystem, so the path
translation gap doesn't apply. Verified locally: `docker build` +
`docker run cf-proxy nginx -T` reproduces the baked config end-to-end.
Trade-off: ~2-3s build cost on every harness up. Acceptable for the
Gitea CI gate; local-dev re-builds the image only when nginx.conf
changes (Docker layer cache).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three workflows have been failing on every push to this Gitea repo for
GitHub-shaped reasons that don't translate to act_runner. Surfaced
while landing #84; bundled per `feedback_gitea_actions_migration_audit_pattern`
("bundle per-repo, not per-finding") instead of three separate PRs.
1) handlers-postgres-integration: localhost → 127.0.0.1
- lib/pq tries to dial localhost → ::1 first; the postgres service
container only listens on IPv4 → ECONNREFUSED → all
TestIntegration_* fail. Pin IPv4 to make the job deterministic.
2) pr-guards / disable-auto-merge-on-push: Gitea no-op
- The previous reusable-workflow caller invoked `gh pr merge
--disable-auto`, which calls GitHub's GraphQL API. Gitea returns
HTTP 405 on /api/graphql → step always fails. Inline the step so
it can detect Gitea (GITEA_ACTIONS=true OR repo url under
moleculesai.app) and no-op with a notice. Auto-merge gating is
moot on Gitea anyway: there's no `--auto` primitive being
touched. Job stays ALWAYS-RUN so branch protection's required
check still lands SUCCESS (avoids the SKIPPED-in-set trap from
`feedback_branch_protection_check_name_parity`).
3) Harness Replays: cf-proxy nginx.conf via docker `configs:` (not bind)
- act_runner runs the workflow inside a runner container; runc in
the docker daemon below resolves bind-mount source paths on the
OUTER host, not inside the runner. The path
`/workspace/.../cf-proxy/nginx.conf` is invisible there → "not a
directory" runc error. Switching to compose `configs:` packages
the file as content rather than a host bind, sidestepping the
DinD path-translation gap.
Local validation:
- YAML parsed clean for all 3 files.
- cf-proxy nginx.conf: standalone `docker compose run cf-proxy
nginx -T` reproduced the configs: mount end-to-end and dumped the
config correctly. The full harness compose still renders via
`docker compose config`.
Real-CI verification will land on this branch's first push.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The `HongmingWang-Rabbit/hermes-agent` fork is no longer reachable on
github.com (account suspended 2026-05-06). The patched fork now lives
at https://git.moleculesai.app/molecule-ai/hermes-agent. Same SHAs,
same branches — pure URL flip.
See molecule-ai/internal#72 for the github.com fork shell decision.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
actions/upload-artifact@v4+ and download-artifact@v4+ use the GHES 3.10+
artifact protocol that Gitea Actions (act_runner v0.6 / Gitea 1.22.x)
does NOT implement. Failure cite from PR #54 run 1325 jobs/2:
::error::@actions/artifact v2.0.0+, upload-artifact@v4+ and
download-artifact@v4+ are not currently supported on GHES.
Pinned all 3 references to v3.2.2 (latest v3) at SHA-pinned form for
supply-chain hygiene, matching the existing `uses:` style in this repo.
Affected workflows:
- ci.yml (Canvas Next.js coverage upload, blocks `CI / Canvas (Next.js)`
required check on every PR — was the merge-queue blocker for #53,
#54, #69, #71, #76, #81)
- e2e-staging-canvas.yml (Playwright report + screenshots on failure)
No download-artifact callers in the repo, so v3-pin doesn't compose-break
anywhere. Drop these pins post-Gitea-1.23+ when the v4 artifact protocol
ships, or migrate to a Gitea-native action.
Closes#210.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Class F of #75 sweep. /commits/{sha}/statuses replaces unavailable workflow-runs API. 4 mapping buckets verified against synthetic+real Gitea data. Approved by security-auditor.
Closes the chronic -race flake on TestPooledWithEICTunnel_PanicPoisonsEntry
and the handlers package as a whole (CI / Platform (Go) was intermittent
on staging, ~50% red on workspace-server-touching commits since 2026-04).
The race: tests swap the package-level poolJanitorInterval via t.Cleanup
(eic_tunnel_pool_test.go:61) AFTER an earlier test caused the global pool's
janitor goroutine to start. The janitor loops on time.NewTicker(poolJanitorInterval)
on every tick — so the cleanup write races the goroutine read for the rest
of the process. Caught locally + on PR #84's CI run on Gitea.
Fix: capture the interval as a field on eicTunnelPool at newEICTunnelPool().
The janitor now reads p.janitorInterval, which never changes after construction.
Tests that override poolJanitorInterval before freshPool() still get the new
value (they set the package var before construction). The global pool's
janitor — created lazily once via sync.Once on first getEICTunnelPool() —
is now immune to t.Cleanup-driven swaps from later tests.
Surfaced while verifying #84 (SaaS plugin install via EIC SSH); folded
into this PR per the "fix root not symptom" rule rather than merging
around a chronic-red CI signal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two artifacts that unblock the parked follow-ups from #59:
1. scripts/edge-429-probe.sh (closes the "operator-blocked" status of
#62). An operator without CF/Vercel dashboard access can reproduce
a canvas-sized burst against a tenant subdomain and read each 429's
response shape — workspace-server bucket overflow (JSON body +
X-RateLimit-* headers) is distinguishable from CF (cf-ray) and
Vercel (x-vercel-id) by inspection of the report. Read-only,
parallel via background subshells (no GNU parallel dependency),
no credential use. Smoke-tested against example.com end-to-end.
2. docs/engineering/ratelimit-observability.md (closes the
"metric-blocked" status of #64). The existing
molecule_http_requests_total{path,status} counter + X-RateLimit-*
response headers already cover #64's acceptance criterion ("watch
metrics for two weeks"). The runbook collects the PromQL queries,
a decision tree for the re-tune (keep / per-tenant override /
change default), an alert rule template, and a hard "do not roll
ad-hoc per-bucket-key exposure" note (in-memory map includes
SHA-256 of bearer tokens — exposing it is a security review
surface, file a follow-up if needed).
Neither artifact changes runtime behaviour. Pure operational tooling.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the 🔴 docker-only row in docs/architecture/backends.md. Plugin
install on every SaaS tenant currently 503s with "workspace container
not running" because the handler is hardcoded to Docker exec but SaaS
workspaces live on per-workspace EC2s. Caught on hongming.moleculesai.app
when canvas POST /workspaces/<id>/plugins surfaced the error.
Mirrors the Files API PR #1702 pattern: dispatch on workspaces.instance_id
in deliverToContainer (and Uninstall). When set, push the staged plugin
tarball to the EC2 over the existing withEICTunnel primitive
(template_files_eic.go) and unpack into the runtime's bind-mounted config
dir (/configs for claude-code, /home/ubuntu/.hermes for hermes — see
workspaceFilePathPrefix). chown 1000:1000 to match the docker path's
agent-uid contract; restart via the existing dispatcher.
Direct host write rather than docker-cp via SSH because the runtime's
config dir is already bind-mounted into the workspace container — the
runtime sees the files on next start with no additional plumbing.
Adds InstanceIDLookup (parallel to RuntimeLookup) so unit tests don't
need a DB; production wires it in router.go like templates.go does.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Empty commit to trigger CI a second consecutive time per the SOP
'verify ≥1 representative workflow per class via workflow_dispatch
or push event ... ≥2 consecutive successful runs per class'.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Part of the post-#66 sweep to remove `gh` CLI dependencies that fail
silently against Gitea. Class F covers `gh run list --workflow=X
--commit=SHA` shapes — querying whether a specific workflow ran (and
how it finished) for a specific SHA.
Why this is the only call site in class F:
`gh run list` hits GitHub's `/repos/.../actions/runs` REST endpoint.
Gitea exposes ZERO endpoints under `/repos/.../actions/runs` —
verified 2026-05-07 via swagger inspection: only secrets, variables,
and runner-registration tokens live under /actions/. There's no way
to query workflow run state via the Gitea v1 API directly.
However, every Gitea Actions job DOES emit a commit status with
`context = "<Workflow Name> / <Job Name> (<event>)"` (verified
2026-05-07 by reading /repos/.../commits/{sha}/statuses on a recent
main SHA). That surface is exactly what we need: each workflow run
leg is one status row, the aggregate state encodes the run outcome,
and Gitea exposes it under `/api/v1/repos/.../commits/{sha}/statuses`
which IS available.
Affected:
`auto-promote-on-e2e.yml` (lines 172-180):
Old: `gh run list --workflow e2e-staging-saas.yml --commit $SHA
--json status,conclusion --jq ...` returning a 5-bucket string
like `completed/success` | `in_progress/none` | `none/none` |
`completed/failure` | `completed/cancelled`.
New: `curl /api/v1/repos/.../commits/$SHA/statuses` + jq filter on
contexts whose name starts with
`"E2E Staging SaaS (full lifecycle) /"`. Mapping:
0 matched contexts → "none/none" (E2E paths-
filtered out
— same as
before)
any context = pending → "in_progress/none" (defer)
any context = error|failure → "completed/failure" (abort)
all contexts = success → "completed/success" (proceed)
The `completed/cancelled` arm of the case statement becomes
unreachable: Gitea status API doesn't expose a `cancelled` state
(it has success/failure/error/pending/warning), so per-SHA
concurrency cancellations now surface as `failure` and are handled
by the failure branch. Documented in-place; the cancelled arm is
kept as defense-in-depth for any future dual-host operation.
Verification:
- Live curl against the current main SHA returns `none/none` (E2E
was paths-filtered for that change set — expected).
- Synthetic-input jq tests verify all four mapping buckets:
no contexts → "none/none"
one context = pending → "in_progress/none"
success + success → "completed/success"
success + failure → "completed/failure"
- YAML syntax validates.
Token: continues to use act_runner's GITHUB_TOKEN (per-run, repo
read scope). The `/commits/{sha}/statuses` endpoint is repo-scoped,
no extra perms needed.
Closes part of #75. Master tracking issue at #75; companion PRs:
#80 (class A — `gh pr ...`), #81 (class D — `gh api ...`).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Hostile-self-review weakest-spot #2: if the devops-engineer persona
is ever renamed, the canary will go red even if everything else is
fine. Add an inline comment pointing the next editor at both files
that must update together (auto-sync-main-to-staging.yml's git
config + this canary's EXPECTED_PERSONA + the staging branch
protection's push_whitelist_usernames).
No behaviour change — comment-only.
While verifying Phase 4, found a real flaw in Probe 3 (`git ls-remote
refs/heads/staging`). On a public repo (which molecule-core is), Gitea
falls back to anonymous read on bad auth, so `ls-remote` succeeds even
with a junk token. The probe was therefore green-lighting rotated
tokens — false-green, the worst possible canary failure mode.
Rewritten to use `git push --dry-run` of the current staging SHA back
to `refs/heads/staging`:
- Push always authenticates (auth-gated on smart-protocol handshake,
before the dry-run can compute the empty-diff).
- NOP by construction: pushing the current tip back to itself is
"Everything up-to-date" with exit 0.
- Bad token → "Authentication failed", exit 128.
- Doesn't reach pre-receive (where branch-protection authz runs), so
scope is "auth only" — matches the design intent (failure mode B);
authz already covered daily by branch-protection-drift.yml.
Implementation note: `git push` requires a local repo. Spinning up a
fresh `git init` in a tempdir (~1KB, ~50ms) instead of pulling the
full repo via actions/checkout — actions/checkout would clone
~hundreds of MB for what amounts to "a place to run git from."
Local mutation tests pass:
- Real token: "Everything up-to-date" exit 0
- Junk token: "Authentication failed" exit 128 with actionable
::error:: messages pointing at the runbook
Header comment + runbook step-mapping updated to reflect new probe
shape. Refs: #72
Part of the post-#66 sweep to remove `gh` CLI dependencies that fail
silently against Gitea (which exposes /api/v1 only — no GraphQL → 405,
no /api/v3 → 404). Class A covers `gh pr list / view / diff / comment`
shapes.
Affected:
- `.github/workflows/auto-tag-runtime.yml`
Replaced `gh pr list --search SHA --json number,labels` with a curl
to `/api/v1/repos/.../pulls?state=closed&sort=newest&limit=50` +
jq filter on `merge_commit_sha == github.sha`. Same end-to-end
behaviour: locate the merged PR for this push, read its labels,
pick the bump kind. Defensive `?.name // empty` jq guard handles
unlabelled PRs without erroring. The 50-PR window is comfortably
larger than the volume of staging→main promotes that close in any
reasonable detection window.
- `scripts/check-stale-promote-pr.sh`
Rewrote `fetch_prs` and `post_comment` to call Gitea's REST API
directly. Gitea doesn't expose GitHub's compound `mergeStateStatus`
/ `reviewDecision` fields, so the new fetcher pulls
`/pulls?state=open&base=main` then for each PR pulls
`/pulls/{n}/reviews` and synthesizes the GitHub-shape JSON the rest
of the script (and the existing fixture-based unit tests) consume:
BLOCKED + REVIEW_REQUIRED ↔ mergeable=true AND 0 APPROVED reviews
DIRTY ↔ mergeable=false (alarm doesn't fire)
CLEAN + APPROVED ↔ mergeable=true AND ≥1 APPROVED review
Comment-posting moves to `POST /repos/.../issues/{n}/comments`
(Gitea treats PRs as issues for the comment surface, same as
GitHub's REST). All 23 fixture-driven unit tests still pass —
fixtures pass GitHub-shape JSON via PR_FIXTURE which short-circuits
the live fetch path.
- `scripts/ops/check_migration_collisions.py`
Replaced `gh pr list` + `gh pr diff` calls with stdlib `urllib`
against /api/v1. Helper `_gitea_get` centralizes auth + error
handling; uses GITEA_TOKEN env, falling back to GITHUB_TOKEN
(act_runner) and GH_TOKEN. Return shape from
`open_prs_with_migration_prefix` mimics the historical
`--json number,headRefName` so the call sites are unchanged. All 9
regex-classifier unit tests still pass; live integration test
against the production Gitea API returns 0 collisions for prefix=999
as expected.
curl invocation pattern is `curl --fail-with-body -sS` (NOT `-fsS` —
the two short-fail flags are mutually exclusive in modern curl;
caught by `curl: You must select either --fail or --fail-with-body,
not both` during local verification).
Token model: workflows pass act_runner's GITHUB_TOKEN (per-run, repo
read scope) — same surface used by the auto-sync fix in PR #66 plus
the surrounding workflows. No new repo secrets required.
Verification: bash unit tests (23/23 pass), python unittest (9/9 pass),
live curl call against production Gitea returns 200 with the expected
shape, YAML / shell / Python syntax all validate.
Closes part of #75. Other classes (D — `gh api`; F — `gh run list`)
land in follow-up PRs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause: same as #65/#73 — gh CLI calls Gitea GraphQL
(/api/graphql) which returns HTTP 405. Specifically:
- gh api -X PATCH /pulls/{N} sometimes works but is flaky on
Gitea (depends on gh's host-resolution layer)
- gh pr close / gh pr comment route through GraphQL → 405
Fix: replace all gh calls with direct curl REST calls to Gitea:
- PATCH /api/v1/repos/{owner}/{repo}/pulls/{index} body
{"base": "staging"} — retarget the PR base
- POST /api/v1/repos/{owner}/{repo}/issues/{index}/comments —
post the explainer comment (PRs are issues in Gitea, comments
share the issue endpoint)
- PATCH /api/v1/repos/{owner}/{repo}/pulls/{index} body
{"state": "closed"} — close redundant PR for #1884 case
Identity: switch from secrets.GITHUB_TOKEN (per-job ephemeral,
narrow scope on Gitea) to secrets.AUTO_SYNC_TOKEN (devops-engineer
persona). Same persona used by auto-sync (#66) and auto-promote
(#78). Per feedback_per_agent_gitea_identity_default. PR-edit and
comment do not need branch-protection bypass.
Curl-status-capture pattern hardened per
feedback_curl_status_capture_pollution: http_code via -w to its
own scalar, body to a tempfile, set +e/-e bracket so curl's
non-zero-on-4xx doesn't pollute the script's exit chain.
Header comment block fully rewritten with 4 failure-mode runbooks
(A: 422 dup-base, B: token rotated, C: PR deleted, D: filter
mis-fire) per PR #66/#78's pattern.
Refs: #65, #74, #196, PR #66 + #78 (canonical reference)
Closes#74
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The two API probes used the unsafe shape rejected by
lint-curl-status-capture.yml (per feedback_curl_status_capture_pollution):
status=$(curl ... -w '%{http_code}' ... || echo "000")
When curl exits non-zero (transport error, --fail-with-body 4xx/5xx),
the `-w` already wrote a code; the `|| echo "000"` then APPENDS another
"000", yielding "000000" or "409000" — passes shape checks while looking
right.
Switch to the canonical safe shape (set +e + tempfile + cat):
set +e
curl ... -w '%{http_code}' >code_file 2>/dev/null
set -e
status=$(cat code_file 2>/dev/null || true)
[ -z "$status" ] && status="000"
Inline comment in both probe steps explains the lint constraint so
the next editor doesn't re-introduce the bad pattern.
Refs: #72, lint failure on PR #77 (1/22 red → 22/22 expected green)
Root cause: same as #65/PR-#66 — gh CLI calls Gitea GraphQL
(/api/graphql) which returns HTTP 405. Additionally, gh workflow
run calls /actions/workflows/{id}/dispatches which does not
exist on Gitea 1.22.6 (verified via swagger.v1.json).
Fix:
- Replace gh run list with Gitea REST combined-status endpoint
(GET /repos/{owner}/{repo}/commits/{ref}/status). Combined state
encodes the AND across every check context — simpler than the
per-workflow loop and immune to workflow-name collisions.
- Replace gh pr create / merge --auto with direct curl calls to
POST /pulls and POST /pulls/{N}/merge with merge_when_checks_succeed.
- Remove the post-merge polling tail entirely. The GitHub-era
GITHUB_TOKEN no-recursion rule does not apply on Gitea Actions
(verified empirically: PR #66 merge fired downstream pushes
naturally). Even if we wanted to dispatch, Gitea has no
workflow_dispatch REST endpoint.
Critical constraint: main has enable_push: false with no whitelist;
direct push is impossible for any persona. PR-mediated merge is the
only path. main has required_approvals: 1 — auto-merge waits for
Hongming's approval before landing, preserving the
feedback_prod_apply_needs_hongming_chat_go contract.
Identity: AUTO_SYNC_TOKEN (devops-engineer persona). Not founder PAT.
Per feedback_per_agent_gitea_identity_default. Same persona used by
auto-sync (PR #66) — keeps identity model coherent.
Header comment block fully rewritten with 4 failure-mode runbooks
(A: gates not green, B: PR-create non-201, C: merge schedule fails,
D: token rotated/scope wrong) per PR #66's pattern.
Refs: #65, #73, #195, PR #66 (canonical reference)
Closes#73
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a 6h-cron synthetic check that fires the auth surface used by
auto-sync-main-to-staging.yml (PR #66) and emits a red workflow
status when AUTO_SYNC_TOKEN has drifted out of validity. Closes
hostile-self-review weakest-spot #3 from PR #66 (token-rotation
detection latency).
Read-only verification — no writes, no synthetic merge commits, no
canary branch noise. Three probes:
1. GET /api/v1/user → token authenticates as devops-engineer
2. GET /api/v1/repos/molecule-ai/molecule-core → read:repository scope
3. git ls-remote refs/heads/staging → exact HTTPS auth path used by
actions/checkout in the real auto-sync workflow
Hard-fail on missing AUTO_SYNC_TOKEN secret on both schedule and
workflow_dispatch — per feedback_schedule_vs_dispatch_secrets_hardening,
a silent soft-skip would make the canary itself drift-invisible (the
sweep-cf-orphans #2088 lesson). Operator runbook in workflow header.
Token reuse: same AUTO_SYNC_TOKEN as the workflow under monitor; no
new credential introduced. Read-only paths only.
Refs: #72, hostile-self-review #66
Stage 3 of #61 (final stage). Replaces the 5s setInterval poll with:
1. Initial bootstrap on mount + on filter-change + on workspaceId-
change (preserved from existing useEffect on loadActivities).
2. Manual Refresh button (preserved — still triggers loadActivities).
3. useSocketEvent subscription to ACTIVITY_LOGGED — every event
for THIS workspace prepends to the list, gated on the user's
autoRefresh toggle and current filter selection.
No interval poll. Steady-state HTTP traffic from this tab drops from
12 req/min (5s × 1 active workspace) to 0 outside of bootstraps and
manual refreshes. Live update latency drops from up to 5s to ~10ms.
The autoRefresh ("Live" / "Paused") toggle now gates LIVE updates
instead of polling cadence — semantically the same (paused = list
stays frozen), implementationally simpler.
The filter selection is honoured by the WS handler so a user
filtering to "Tasks" doesn't see live a2a_send rows trickle in. Same
shape the server-side `?type=<filter>` enforces on the bootstrap.
Test changes:
- 27 existing tests pass unchanged (filter / autoRefresh /
Refresh / loading / error / empty / count / row-content all
preserved)
- 7 new WS-subscription tests:
- WS push for matching workspace prepends with NO HTTP call
- WS push for different workspace ignored
- WS push respects active filter (non-matching ignored)
- WS push respects active filter (matching renders)
- WS push while autoRefresh paused ignored
- WS push for already-in-list row deduped (no double-render)
- NO 5s interval polling after mount
Mutation-tested:
- drop workspace_id filter → "different workspace" test fails
- drop autoRefresh gate → "paused" test fails
- drop filter gate → "non-matching activity_type" test fails
- drop dedup-by-id → "already in list deduped" test fails
Full canvas suite: 1396 passing, 0 failing. tsc clean.
No API or schema change. /workspaces/:id/activity HTTP endpoint
stays — used for bootstrap + manual refresh + filter-change reload.
ACTIVITY_LOGGED event shape unchanged.
Hostile self-review (three weakest spots):
1. Server-side activity_logs row UPDATES (status flips, etc.) are
not reflected post-#61 — the dedup-by-id check skips a re-fired
ACTIVITY_LOGGED for an existing row. Acceptable: activity_logs
is append-only by design (audit trail); status updates surface
as new task_update rows, not as in-place mutations. If a future
server change adds in-place updates, fire ACTIVITY_UPDATED as a
distinct event so this dedup logic stays simple.
2. WS handler is recreated on every render (filter / autoRefresh /
workspaceId state changes). useSocketEvent's ref-based pattern
keeps the bus subscription stable, but the handler closure
re-captures each render. Side effect: fine — handler call cost
is negligible.
3. The "error" filter matches activity_type === "error" (mirrors
server semantics). It does NOT match status === "error" rows
of other activity types — same as the polling version. Worth
re-evaluating in a separate PR if users expect the broader
semantic.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stage 2 of #61. Replaces the 60s setInterval poll that fanned out
across every visible workspace fetching `?type=delegation&limit=500`
with:
1. One bootstrap fan-out on mount (or on visible-ID-set change),
same shape as before — preserves the 60-min look-back history.
2. useSocketEvent subscription to ACTIVITY_LOGGED — every event
with activity_type=delegation + method=delegate from a visible
workspace appends to a local rolling buffer, edges are re-derived
via the existing buildA2AEdges helper.
3. showA2AEdges toggle off: clears edges + buffer.
No interval poll. The visibleIdsKey selector gate that fixed the
2026-05-04 render-loop incident is preserved — peer-discovery /
status-flip writes still don't trigger a wasteful re-bootstrap.
Steady-state HTTP traffic from this overlay drops from N req/min
(N visible workspaces × 1 cycle/min) to 0 outside of mount + visible-
ID-set-change bootstraps. Live update latency drops from up to 60s
to ~10ms.
Bootstrap race-aware: any WS arrivals that landed in the buffer
during the fetch await are preserved by id-dedup-with-fetched-first
ordering. No row is double-counted; no row is lost during in-flight
updates.
Test changes:
- 27 existing tests pass unchanged (buildA2AEdges purity preserved,
component visibility/visibleIdsKey/error-swallow behaviour
preserved).
- 6 new WS-subscription tests:
- NO 60s polling after bootstrap (clock advance fires nothing)
- WS push for delegation updates edges with NO HTTP call
- WS push for non-delegation activity_type ignored
- WS push for delegate_result ignored (mirrors buildA2AEdges
method filter)
- WS push from hidden workspace ignored
- WS push while showA2AEdges=false ignored
Mutation-tested:
- drop activity_type filter → "non-delegation" test fails
- drop method===delegate filter → "delegate_result" test fails
- drop visible-ws membership filter → "hidden workspace" test fails
Full canvas suite: 1395 passing, 0 failing. tsc clean.
No API or schema change. ACTIVITY_LOGGED event shape unchanged.
The /workspaces/:id/activity HTTP endpoint stays — used for bootstrap.
Hostile self-review (three weakest spots):
1. Bootstrap fetches up to 500 rows × N workspaces. Worst-case
buffer ~3000 entries before window-prune. Acceptable: window-
prune runs on every recomputeAndPush, buildA2AEdges aggregates
to at most N² edges. Real-world usage stays well under both.
2. WS handler re-arms on every bootstrap dependency change
(visibleIds change). useSocketEvent's ref-based pattern means
the bus subscription stays stable across renders, but the
handler closure re-captures bootstrap each time. Side effect:
fine — handler invocation just calls recomputeAndPush which is
idempotent.
3. delegate_result rows arriving over WS are silently dropped.
Acceptable: the existing buildA2AEdges already filters them out
at aggregation time (avoids double-counting); pre-filtering at
the WS handler is the correct mirror — keeps the bus path and
the bootstrap path consistent.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
OSS contributors who clone molecule-core and `go run ./workspace-server/cmd/server`
now get a working end-to-end provision without authenticating to GHCR or AWS ECR.
Pre-fix: with MOLECULE_IMAGE_REGISTRY unset, the provisioner attempted to pull
ghcr.io/molecule-ai/workspace-template-<runtime>:latest, which has been
returning 403 since the 2026-05-06 GitHub-org suspension.
Post-fix: when MOLECULE_IMAGE_REGISTRY is unset, the provisioner switches to
local-build mode — looks up the workspace-template-<runtime> repo's HEAD sha
on Gitea via a single API call, shallow-clones into ~/.cache/molecule/, and
runs `docker build --platform=linux/amd64`. SHA-pinned cache key skips the
clone+build entirely on subsequent provisions.
Production tenants are unaffected: every prod tenant sets the var to its
private ECR mirror, so the SaaS pull path is byte-for-byte identical.
SSOT for mode detection lives in Resolve() (registry_mode.go) returning a
discriminated RegistrySource{Mode, Prefix} so call sites that branch on
mode get a compile-time push instead of a string-equality footgun.
Coverage:
* registry_mode.go — new SSOT (Resolve, RegistryMode, IsKnownRuntime)
* registry_mode_test.go — 8 tests pinning mode-decision contract
* localbuild.go — clone+build pipeline (570 LOC, fully unit-tested)
* localbuild_test.go — 22 tests covering happy/sad paths, fail-closed
* provisioner.go — Start() inserts ensureLocalImageHook in local mode
* docs/adr/ADR-002 — design rationale + alternatives + security review
* docs/development/local-development.md — local-build flow + env overrides
Security:
* Allowlist-only runtime names (knownRuntimes) gate the clone path.
* Repo prefix hardcoded to git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-;
forks via opt-in MOLECULE_LOCAL_TEMPLATE_REPO_PREFIX.
* MOLECULE_GITEA_TOKEN masked in every log line via maskTokenInURL/maskTokenInString.
* Fail-closed: Gitea unreachable / runtime not mirrored → clear error, never
silently fall back to GHCR/ECR.
* docker build invocation passes no --build-arg from external input.
* HTTP body cap 64KB on Gitea API responses (defence vs malicious upstream).
Closes#63 / Task #194.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stage 1 of #61. Replaces the 30s setInterval poll with:
1. One bootstrap fan-out on mount (cap of 3 retained from the
2026-05-04 fix), gives the initial recent-comms window without
waiting for live events.
2. useSocketEvent subscription to ACTIVITY_LOGGED — every event
with a comm-overlay-relevant activity_type from a visible online
workspace prepends to the rendered list.
3. Re-bootstrap on visibility-toggle re-open so the snapshot is
fresh after a long collapsed period.
No interval poll. Inherits the singleton ReconnectingSocket's
reconnect / backoff / health-check guarantees via useSocketEvent.
Steady-state HTTP traffic from this overlay drops from ~6 req/min
(3 ws × 2 cycles/min) to 0 outside of mount/visibility-toggle
bootstraps. Live updates arrive within ~10ms of the server insert
instead of after up to 30s.
Test changes:
- Bootstrap fan-out cap of 3 — kept (was the cadence test's role
pre-#61)
- 30s cadence test — replaced with "no interval polling" test
that pins the absence of any cadence-driven HTTP after bootstrap
- Visibility gate test — extended to verify both: no fetches while
closed, AND re-bootstrap on re-open
- WS subscription tests (new):
- WS push extends rendered list with NO HTTP call
- WS push for offline workspace ignored
- WS push for non-comm activity_type ignored
- WS push while collapsed ignored
- non-ACTIVITY_LOGGED events ignored
Mutation-tested:
- drop visibility gate → visibility test fails
- drop activity_type filter → "non-comm activity_type" test fails
- drop workspace online-set filter → "offline workspace" test fails
Full canvas suite: 1393 passing, 0 failing. tsc clean.
No API or schema change. ACTIVITY_LOGGED event shape pinned by
existing socket-events tests.
Hostile self-review (three weakest spots):
1. Sustained WS outage shows stale comms until visibility-toggle
re-bootstrap. Acceptable: the singleton socket already auto-
reconnects and the comm overlay isn't a critical-path surface.
2. Bootstrap on visibility-toggle costs another 3 HTTP calls each
re-open. Acceptable: visibility-toggle is a deliberate user
action, not a tight loop.
3. The WS handler reads the latest `nodes` via nodesRef rather
than re-subscribing on node changes. By design — the bus
listener stays bound for the component lifetime to avoid the
"tear-down storm" pattern A2ATopologyOverlay's comment warns
about (ref-based current-state lookup, stable subscription).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause of `Auto-sync main → staging / sync-staging (push)`
failing every push to main since the GitHub→Gitea migration:
The workflow assumed a GitHub `merge_queue` ruleset on staging
(blocking direct push) and used `gh pr create` + `gh pr merge
--auto` to land sync via the queue. On Gitea this fails at the
`gh pr create` step with `HTTP 405 Method Not Allowed
(https://git.moleculesai.app/api/graphql)` — Gitea exposes no
GraphQL endpoint, and the GitHub-CLI cannot ship PRs against
Gitea.
Verified failure mode in run 1117/job 0 (token logs at
/tmp/log2.txt, run target /molecule-ai/molecule-core/actions/
runs/1117/jobs/0). The merge step succeeded and pushed
auto-sync/main-1e1f4d63; the PR step failed with the 405. So
every main push left an orphan auto-sync/* branch and a red CI
status, with no PR to land it.
Fix: the staging branch protection on Gitea
(`enable_push: true`, `push_whitelist_usernames:
[devops-engineer]`) already permits direct push from the
devops-engineer persona. Drop the entire merge-queue PR
architecture and replace with:
1. Checkout staging with secrets.AUTO_SYNC_TOKEN
(devops-engineer persona token, NOT founder PAT —
`feedback_per_agent_gitea_identity_default`).
2. `git fetch origin main` + ff-merge or no-ff merge.
3. `git push origin staging` directly.
The AUTO_SYNC_TOKEN repo secret already exists (created
2026-05-07 14:00 alongside the staging push_whitelist update).
Workflow name + job name unchanged → required-check name
`Auto-sync main → staging / sync-staging (push)` keeps the
same context, no branch-protection edits needed.
Rejected alternatives (documented in workflow header):
- Reuse PR architecture via Gitea REST: ~80 LOC of API
plumbing for no benefit; direct push works.
- GH_HOST=git.moleculesai.app: still calls /api/graphql,
same 405; doesn't fix the root issue.
- Custom JS action: external dep for a 5-line `git push`.
Header comment in the workflow now documents:
- What this workflow does (SSOT for staging advancing).
- Why direct push (GitHub merge_queue → Gitea push_whitelist).
- Identity and token (anti-bot-ring per saved memory).
- Failure modes A–D with operator runbook for each.
- Loop safety (push to staging doesn't fire push:main → no
recursion).
Verification plan: this fix-PR's merge to main is itself the
trigger; watch the workflow run on the merge commit and on
one follow-up trigger commit, expect both green.
Refs: failing run https://git.moleculesai.app/molecule-ai/
molecule-core/actions/runs/1117/jobs/0
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The previous comment said "all share one IP bucket" — accurate before
the keyFor refactor, slightly stale after it. The dev-mode rationale
(bucket fills fast, blanks the page on a single-user dev box) is
unchanged; only the bucket-key flavour text needed updating.
Doc-only follow-up from #60's hostile self-review #3. No behavior change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes#59.
Symptom: /workspaces/:id/activity returns 429 with rate-limit-exceeded
on hongming.moleculesai.app whenever multiple workspaces are visible
in the canvas. Single-tab, single-user, well within the documented
600 req/min budget — but every request collapsed into one bucket.
Root cause: workspace-server's RateLimiter keyed buckets on
c.ClientIP(). After issue #179 turned off proxy-header trust
(SetTrustedProxies(nil), correctly closing the XFF spoofing hole),
c.ClientIP() returns the TCP RemoteAddr — which in production is the
upstream proxy (Caddy on per-tenant EC2; CP/Vercel on the SaaS plane).
Every browser tab + every canvas consumer + every poll loop for every
tenant collapsed into one bucket.
Fix: bucket key derivation moves into a single keyFor helper that
mirrors the SSOT pattern of:
- molecule-controlplane/internal/middleware/ratelimit.go (org > user > IP)
- this package's own MCPRateLimiter (token-hash via tokenKey)
Priority: X-Molecule-Org-Id header → SHA-256(Authorization Bearer)
→ ClientIP. Token values are kept hashed in the bucket map so the
in-memory state can't become a token dump.
Tests:
- TestKeyFor_OrgIdHeaderTrumpsBearerAndIP — priority order
- TestKeyFor_BearerTokenWhenNoOrgId — middle tier + raw-token leak pin
- TestKeyFor_IPFallbackWhenNoOrgIdNoBearer — anon probe path
- TestRateLimit_TwoOrgsSameIP_IndependentBuckets — load-bearing
regression (issue #59) — two tenants behind same upstream proxy
must not share a bucket
- TestRateLimit_TwoTokensSameIP_IndependentBuckets — same shape
for the per-tenant Caddy box
- TestRateLimit_SameOrgDifferentTokens_SharedBucket — counter-pin:
rotating tokens within one org must NOT bypass the org's quota
- TestRateLimit_Middleware_RoutesThroughKeyFor — AST gate, mirrors
the SSOT gates established in #36/#10/#12
Mutation-tested:
- strip org-id branch in keyFor → 3 tests fail
- strip bearer-token branch → 2 tests fail
- reintroduce direct c.ClientIP() in Middleware → 3 tests fail
(including the AST gate)
Existing tests pass unchanged: dev-mode fail-open, X-RateLimit-*
headers (#105), Retry-After on 429 (#105), XFF anti-spoofing (#179).
No schema/API change. 429 response body and X-RateLimit-* headers
unchanged. RATE_LIMIT env var semantics unchanged.
Hostile self-review (three weakest spots) is in the issue body:
1. one-shot Docker-inspect cost is now bucket-key derivation cost
(string compare + SHA-256 of bearer); single-digit microseconds.
2. X-Molecule-Org-Id is unvalidated at the rate-limiter layer —
spoofing is closed by tenant SG + CP front; documented in
keyFor's docstring with the conditions under which to revisit.
3. cpProv-style SaaS surface is out of scope; CP's own limiter
handles that hop.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Audit finding: every workflow that emits a required-status-check name
on molecule-core's branch protection (apply.sh's STAGING_CHECKS +
MAIN_CHECKS) ALREADY uses the safe always-runs-with-conditional-steps
shape — Platform/Canvas/Python/Shellcheck in ci.yml, Canvas tabs E2E
in e2e-staging-canvas.yml, E2E API Smoke in e2e-api.yml, PR-built
wheel in runtime-prbuild-compat.yml, the codeql Analyze matrix, and
the always-on Secret scan + Detect changes. No production drift to
fix today.
Adds a regression-guard so the next path-filter / matrix refactor /
workflow rename can't silently re-introduce the bug shape called out
in saved memory feedback_branch_protection_check_name_parity:
"Path filters … silently break branch protection because no job
emits the protected sentinel status when path-filter returns false."
New tools:
- tools/branch-protection/check_name_parity.sh — extracts every
required check name from apply.sh's heredocs, then for each name
classifies the owning workflow as safe (no top-level paths:) /
safe (per-step if-gates without top-level paths:) / unsafe
(top-level paths: without per-step if-gates) / unsafe-mix
(top-level paths: WITH per-step if-gates — the workflow may still
skip entirely on path exclusion, leaving the gates dormant) /
missing (no emitter at all). Special-cases codeql.yml's matrix-
expanded `Analyze (${{ matrix.language }})`.
- tools/branch-protection/test_check_name_parity.sh — 6 unit tests
covering each classification: safe, unsafe-path-filter, missing,
safe-with-per-step-gates, unsafe-mix, matrix-expansion. Each test
builds a synthetic apply.sh + workflow file in a tmpdir, invokes
the script, and asserts on exit code + stderr substring. Per
feedback_assert_exact_not_substring the assertions pin specific
classifications, not just non-zero exit.
Wired into branch-protection-drift.yml so every PR touching
.github/workflows/** runs the parity check; the existing daily
schedule covers between-PR drift. The check is cheap (~1s) and runs
without the admin token — only reads files in the checkout. Self-
test step runs the unit tests on every invocation, so a regression
in the script can't false-pass on production.
Per BSD-vs-GNU portability hygiene: heredoc-marker extraction stays
in plain awk + sed (no gawk-only `match()` array form), grep regex
avoids `^` anchor for `if:` lines because real workflows use
` - if:` with the `-` step-marker between leading spaces and
`if:` (the original anchor missed every workflow's per-step gates).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the post-Task-#176 self-review gap: the bearer-token + tenant-
slug header construction was duplicated across 7 raw-fetch callsites
in the canvas (lib/api.ts request(), uploads.ts × 2, and 5 Attachment*
components). Each callsite read NEXT_PUBLIC_ADMIN_TOKEN, attached
Authorization: Bearer manually, computed getTenantSlug locally
(three of them inline-redefined it from /lib/tenant!), and attached
X-Molecule-Org-Slug. A new poller / raw-fetch added without going
through this exact recipe silently 401s against workspace-server when
ADMIN_TOKEN is set on the server side — the bug shape called out in
the original task.
Adds platformAuthHeaders() to lib/api.ts as the single source of truth
and routes all 7 raw-fetch callsites through it. Removes 4 duplicate
local getTenantSlug() copies (Image, Video, Audio, PDF, TextPreview)
that were inline-redefining what /lib/tenant.ts already exports.
Also preserves the AttachmentTextPreview off-platform branch — when
isPlatformAttachment() is false, headers is {} (no bearer leakage to
third-party URLs).
Tests:
- 6 unit tests in platform-auth-headers.test.ts covering: empty,
bearer-only, slug-only, both, empty-string-as-unset, fresh-object-
per-call. Mutation-tested: removing the bearer attach inside the
helper fails 2 of 6 tests immediately.
- All 1389 existing canvas vitest tests pass unchanged.
- npx tsc --noEmit clean.
- npm run build succeeds (canvas Next.js build).
Per feedback_assert_exact_not_substring: tests use exact toEqual()
equality, not substring/contains, so an extra-header bug also fails
the assertion. Per feedback_oss_design_philosophy: this is the
"plugin/abstract/modular/SSOT" move applied to the auth-header
construction surface — one helper, six call sites, no duplication.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Why
---
PR #35 marked `continue-on-error: true` at the JOB level (correct YAML),
but Gitea Actions 1.22.6 does NOT propagate job-level continue-on-error
to the commit-status API — every matrix leg still posts `failure`. That
keeps OVERALL=failure on every push to main + staging and blocks the
auto-promote signal even when every other gate is green.
Worse: the underlying CodeQL run never actually worked on Gitea. The
github/codeql-action/init@v4 step calls api.github.com bundle endpoints
(CLI download + query packs + telemetry) that Gitea does NOT proxy.
Confirmed via live-tested run 1d/3101 on operator host:
2026-05-07T20:55:17 ::group::Run Initialize CodeQL
with: languages: ${{ matrix.language }}
queries: security-extended
2026-05-07T20:55:36 ::error::404 page not found
2026-05-07T20:55:50 Failure - Main Initialize CodeQL
2026-05-07T20:55:51 skipping Perform CodeQL Analysis (main skipped)
2026-05-07T20:55:51 :⚠️:No files were found at sarif-results/go/
The SARIF artifact upload was already a no-op (warning above) — the
analyze step never wrote anything because init failed. So nothing of
value is being lost by stubbing this out.
What
----
- Convert the workflow to a single-step stub that emits success per
matrix language (go, javascript-typescript, python).
- Keep workflow `name: CodeQL` exactly (auto-promote-staging.yml
line 67 keys on it as a workflow_run gate).
- Keep job name template `Analyze (${{ matrix.language }})` and the
3-leg matrix exactly (commit-status context names + branch
protection + #144 required-check-name parity).
- Keep all four triggers (push / pull_request / merge_group /
schedule) so merge_group required-checks parity holds.
- Drop the codeql-action steps, the Autobuild step, the SARIF parse
step, and the upload-artifact step — all four of those are now
dead code (init can never succeed against Gitea's API surface).
Policy
------
Per Hongming decision 2026-05-07 (#156): CodeQL is ADVISORY, not
blocking, until a Gitea-compatible SAST pipeline lands. The header
of the new workflow file documents this decision + lists the three
re-enable options (self-hosted Semgrep, Sonatype, GitHub mirror)
plus the compensating controls in place (secret-scan, block-internal-
paths, lint-curl-status-capture, branch-protection-drift).
Closes#156. Touches #142 (no capital-M Molecule-AI refs in this
file — already lowercase per e01077be).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
harness-replays.yml builds tenant-alpha + tenant-beta via tests/harness/
compose.yml using workspace-server/Dockerfile.tenant. Post-#173, that
Dockerfile expects .tenant-bundle-deps/{workspace-configs-templates,
org-templates,plugins} pre-cloned at the build context root. Sister
PR #38 added the pre-clone step to publish-workspace-server-image.yml
but missed harness-replays.yml.
Symptoms:
- main run #892 (2026-05-07T20:28:53Z): COPY
.tenant-bundle-deps/plugins -> failed to calculate checksum ...
not found.
- staging run #964 (2026-05-07T20:41:52Z): hits the OLD in-image
clone path (staging hasn't picked up the Dockerfile.tenant
refactor yet via auto-sync) and fails on
'fatal: could not read Username for https://git.moleculesai.app'
when cloning the first private workspace-template-* repo.
Fix: add the same Pre-clone step to harness-replays.yml,
mirroring publish-workspace-server-image.yml. Uses AUTO_SYNC_TOKEN
(devops-engineer persona PAT) per
feedback_per_agent_gitea_identity_default.
Once auto-sync main->staging unblocks (sister agent fixing the
7-file conflict in flight), staging will inherit both this workflow
fix AND the Dockerfile.tenant refactor atomically.
Refs: #168, #173
Refs Task #165 (Class D AUTO_SYNC_TOKEN plumbing).
main and staging diverged after the 2026-05-06 GitHub-org suspension
because Class D / Class G / feature work landed on staging while
unrelated CI fixes (#34-47, ECR auth-inline, buildx→docker, pre-clone
manifest deps) landed straight on main. Both branches edited the
same workflow files, so every push to main triggered an Auto-sync
run that aborted at `git merge --no-ff origin/main` with 7 content
conflicts:
- .github/workflows/canary-verify.yml (URL: github.com → Gitea)
- .github/workflows/ci.yml (3 URL refs)
- .github/workflows/publish-runtime.yml (cascade: HTTP repo-dispatch
→ Gitea push)
- .github/workflows/publish-workspace-server-image.yml
(drop AWS-action steps;
ECR auth is inline)
- .github/workflows/retarget-main-to-staging.yml (URL)
- manifest.json (lowercase org slug + add
mock-bigorg from main)
- scripts/clone-manifest.sh (keep main's MOLECULE_GITEA_TOKEN
auth path + drop awk-tolower
since manifest is now lowercase)
Resolution: union — staging's post-suspension Gitea/ECR migrations win
on URL/policy edits; main's additive work (mock-bigorg manifest entry,
inline ECR auth, MOLECULE_GITEA_TOKEN basic-auth) is preserved on top.
After this lands, staging is a strict superset of main, so the next
auto-sync run on a push to main will be a clean fast-forward / no-op.
The auto-sync workflow on main also picks up staging's AUTO_SYNC_TOKEN
swap (Class D #26) for free, fixing the latent layer-2 push-auth issue.
Verified locally:
- bash -n scripts/clone-manifest.sh
- python -c 'yaml.safe_load(...)' on each touched workflow
- python -c 'json.load(open(manifest.json))' (21 plugins, 9 templates,
7 org_templates)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Run #1010 (post-#46) succeeded all the way to push but failed with
"repository molecule-ai/platform does not exist" — the platform image
ECR repo had never been created (only platform-tenant existed).
Created the repo via:
aws ecr create-repository --region us-east-2 \
--repository-name molecule-ai/platform \
--image-scanning-configuration scanOnPush=true
This is a one-line workflow comment to satisfy the path-filter and
re-run the publish workflow against the now-existing repo. Closes#173
properly this time — pre-clone + inline ECR auth + ECR repo all in
place.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI run #987 (post-#45) showed `docker push` from shell still hits
"no basic auth credentials" — `aws-actions/amazon-ecr-login@v2`
writes auth to a step-scoped DOCKER_CONFIG that doesn't carry across
to the next shell step on Gitea Actions.
Fix: drop both `aws-actions/configure-aws-credentials@v4` and
`aws-actions/amazon-ecr-login@v2`. Run `aws ecr get-login-password |
docker login` inline in the same shell step as `docker build` +
`docker push`. AWS creds come from secrets via env vars, ECR token
is fresh per-step (12h validity is plenty), config.json lives in the
same shell process — auth state is guaranteed.
This is the operator-host manual approach mapped 1:1 into CI.
runner-base image already has aws-cli + docker (verified locally).
Closes#173 (fifth piece — and final, this matches the manual flow
exactly).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI run #946 (post-#43) confirmed `driver: docker` doesn't fix the ECR
push 401 either: buildx CLI inside the runner container talks to the
operator-host docker daemon (mounted socket), but the daemon doesn't
see the runner's ECR auth state, and the runner's buildx CLI doesn't
attach the auth header in a way the daemon accepts.
Drop buildx + build-push-action entirely. Plain `docker build` +
`docker push` from the runner container works because both use the
SAME docker socket + the SAME runner-container config.json (populated
by `aws ecr get-login-password | docker login` from amazon-ecr-login).
Trade-off: lose multi-arch support. We only ship linux/amd64 tenant
images today, so this is fine. If multi-arch becomes a requirement
later, we can revisit (likely with `docker buildx create
--driver=remote` pointing at an external buildkit, but that's
substantial infra work; not worth it for a single-arch shop).
Closes#173 (fourth piece — and hopefully last; this matches the
operator-host manual approach exactly).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Empty-shape commit on a tests/harness/** path to trigger the harness-replays
workflow's path-filter on staging, verifying that:
- PR #40 (Class G #168) migrated all explicit github.com/Molecule-AI URL refs
- PR #42 (Class G #168 followup) migrated the indirect clone-manifest.sh + manifest.json forms
After this run, harness-replays should get past the previously-failing
'fatal: could not read Username for https://github.com' clone-manifest step.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #38 + #41 fixed the Dockerfile-side clone issue. CI run #893 then
revealed two Gitea-Actions-specific issues with the unchanged buildx
config:
1. `failed to push: 401 Unauthorized` to ECR. Root cause: default
buildx driver `docker-container` spawns a buildkit container that
doesn't share the host's `~/.docker/config.json`, so the ECR auth
set up by amazon-ecr-login doesn't reach the push. Fix: pin
`driver: docker` so buildx delegates to the host daemon, which
already has the ECR creds.
2. `dial tcp ...:41939: i/o timeout` on `_apis/artifactcache/cache`.
Root cause: `cache-from/cache-to: type=gha` is GitHub-specific;
Gitea Actions has no compatible artifact-cache backend, so every
cache lookup fails after a 30s timeout. Fix: remove the cache-*
options. Cold-build cost is <10min for 37-repo clone + Go/Node
compile, acceptable. Could revisit with type=registry inline cache
later if rebuilds get painful.
With this + #38/#41, the workflow should run end-to-end on Gitea
Actions: pre-clone -> docker build (host daemon) -> ECR push.
Closes#173 (third and final piece).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Class G #168 PR (#40) caught explicit `github.com/Molecule-AI/<repo>`
URL literals in 23 files but missed two indirect forms:
- `scripts/clone-manifest.sh` lines 50,52 had
`https://github.com/${repo}.git` (the org/repo path is a variable, so the
Class-G regex `github\.com/Molecule-AI/` didn't match).
- `manifest.json` had `"Molecule-AI/<repo>"` (no `github.com` prefix; the
prefix gets prepended by the script).
Together these are what `Dockerfile.tenant`'s stage-3 templates RUN
actually fetches. After PR #40 the harness-replays workflow against
staging still fails with `fatal: could not read Username for
'https://github.com'` because the in-image build is the unfixed shell
loop.
This PR:
- scripts/clone-manifest.sh: replaces both clone URLs with
`https://git.moleculesai.app/${repo}.git`. Anonymous public clones
work for these repos (verified manually).
- manifest.json: lowercases `Molecule-AI/` to `molecule-ai/` to match
Gitea's canonical org slug. Gitea is case-insensitive so both work,
but the lowercase form matches every other URL in the org and is
what main's clone-manifest.sh (PR #38) already standardises on.
This is the minimum-diff staging fix. Sister #173 already shipped a
more sophisticated version on main (with optional MOLECULE_GITEA_TOKEN
auth + per-build pre-clone). When auto-sync resolves the staging-vs-main
conflict, this minimal version gets superseded by the main version
naturally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The first PR (#38) only patched Dockerfile.tenant — but the workflow
also builds the platform image from workspace-server/Dockerfile, which
had the SAME in-image `git clone` stage. Build run #794 caught this:
"process clone-manifest.sh ... exit code 128" on the platform image.
Apply the same pre-clone shape to the platform Dockerfile: drop the
`templates` stage, COPY from .tenant-bundle-deps/ instead. The
workflow's existing "Pre-clone manifest deps" step (added in #38)
already populates .tenant-bundle-deps/ before either build runs, so no
workflow change needed.
Self-review note: the missed-platform-Dockerfile is a Phase 1 quality
miss — I read both files but only registered the tenant one as
in-scope. Saved memory `feedback_orchestrator_must_verify_before_declaring_fixed`
applies: should have grepped the whole workspace-server/ for "templates"
stages before claiming Task #173 done. CI run #794 caught it within
~6 minutes; net cost: one followup commit.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
TestPooledWithEICTunnel_PreservesFnErr (and any sqlmock-using neighbour
test) was at risk of inheriting stale INSERT calls from a previous
test's coalesceRestart goroutine that survived its t.Cleanup boundary.
The production callsite shape is `go h.RestartByID(...)` from
a2a_proxy.go, a2a_proxy_helpers.go and main.go. When that goroutine's
runRestartCycle panics, coalesceRestart's deferred recover swallows it
to keep the platform process alive — but in tests, nothing waits for
the goroutine to fully exit. If it's still draining LogActivity-shaped
work after the test returns, those INSERTs land in the next test's
sqlmock connection as kind=DELEGATION_FAILED /
kind=WORKSPACE_PROVISION_FAILED, surfacing as "INSERT-not-expected".
Fix: introduce drainCoalesceGoroutine(t, wsID, cycle) test helper that
spawns coalesceRestart on a goroutine (matching production) and
registers a t.Cleanup with sync.WaitGroup.Wait so the test can't
declare itself done while a goroutine is still alive.
Convert TestCoalesceRestart_PanicInCycleClearsState to use the helper
(previously it called coalesceRestart synchronously, which never
exercised the production goroutine-survival contract).
Add TestCoalesceRestart_DrainHelperWaitsForGoroutineExit as the
regression guard: cycle blocks 150ms then panics; the test asserts
t.Run elapsed >= 150ms (proving the Wait barrier engaged) AND the
deferred close ran (proving the panic-recovery defer chain executed)
AND state.running was cleared. Verified the assertion is real by
mutation-testing: removing t.Cleanup(wg.Wait) makes this test FAIL
deterministically with elapsed <300µs.
Per saved memory feedback_assert_exact_not_substring: the regression
test asserts an exact-shape contract (elapsed >= blockFor) rather than
a substring-in-output, so it discriminates between "drain works" and
"drain skipped".
Per Phase 3: 10/10 race-detector runs pass for all TestCoalesceRestart_*
tests. Full ./internal/handlers/... suite green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The GitHub org Molecule-AI was suspended on 2026-05-06; canonical SCM
is now Gitea at https://git.moleculesai.app/molecule-ai/. Stale
github.com/Molecule-AI/... URLs return 404 and break tooling that
clones / pip-installs / curls them.
This bundles all non-Go-module URL fixes for this repo into a single PR.
Go module path references (in *.go, go.mod, go.sum) are out of scope
here -- tracked separately under Task #140.
Token-auth clone URLs also flip ${GITHUB_TOKEN} -> ${GITEA_TOKEN} since
the GitHub token does not auth against Gitea.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
TestPooledWithEICTunnel_PreservesFnErr (and any sqlmock-using neighbour
test) was at risk of inheriting stale INSERT calls from a previous
test's coalesceRestart goroutine that survived its t.Cleanup boundary.
The production callsite shape is `go h.RestartByID(...)` from
a2a_proxy.go, a2a_proxy_helpers.go and main.go. When that goroutine's
runRestartCycle panics, coalesceRestart's deferred recover swallows it
to keep the platform process alive — but in tests, nothing waits for
the goroutine to fully exit. If it's still draining LogActivity-shaped
work after the test returns, those INSERTs land in the next test's
sqlmock connection as kind=DELEGATION_FAILED /
kind=WORKSPACE_PROVISION_FAILED, surfacing as "INSERT-not-expected".
Fix: introduce drainCoalesceGoroutine(t, wsID, cycle) test helper that
spawns coalesceRestart on a goroutine (matching production) and
registers a t.Cleanup with sync.WaitGroup.Wait so the test can't
declare itself done while a goroutine is still alive.
Convert TestCoalesceRestart_PanicInCycleClearsState to use the helper
(previously it called coalesceRestart synchronously, which never
exercised the production goroutine-survival contract).
Add TestCoalesceRestart_DrainHelperWaitsForGoroutineExit as the
regression guard: cycle blocks 150ms then panics; the test asserts
t.Run elapsed >= 150ms (proving the Wait barrier engaged) AND the
deferred close ran (proving the panic-recovery defer chain executed)
AND state.running was cleared. Verified the assertion is real by
mutation-testing: removing t.Cleanup(wg.Wait) makes this test FAIL
deterministically with elapsed <300µs.
Per saved memory feedback_assert_exact_not_substring: the regression
test asserts an exact-shape contract (elapsed >= blockFor) rather than
a substring-in-output, so it discriminates between "drain works" and
"drain skipped".
Per Phase 3: 10/10 race-detector runs pass for all TestCoalesceRestart_*
tests. Full ./internal/handlers/... suite green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Same shape as molecule-controlplane#29: per-job GITHUB_TOKEN
doesn't have the Gitea API permissions to open PRs / push branches
the auto-sync flow needs. AUTO_SYNC_TOKEN is the devops-engineer
persona PAT (per saved memory feedback_per_agent_gitea_identity_default).
Companion prod ops (already done):
- devops-engineer added as collaborator on molecule-core (write)
- devops-engineer added to staging branch protection push_whitelist
- AUTO_SYNC_TOKEN registered as Actions secret on molecule-core
All current core/staging reds ran 12:14-12:33 BEFORE the runner
image swap (cloudflared bake + GOPROXY pipe-separator at 12:55).
This empty commit forces a fresh CI run under the post-fix
runner image so we can categorize:
- REAL fails (need targeted fix)
- STALE-cleared (was a runner-image issue, now fixed)
- Genuinely unrelated (Auto-sync, CodeQL — Hongming-parked)
Per feedback_orchestrator_must_verify_before_declaring_fixed,
don't mass-mark stale — wait for fresh run, verify each context.
Two coupled cleanups for the post-2026-05-06 stack:
#157 — drop molecule-ai-plugin-github-app-auth
============================================
The plugin injected GITHUB_TOKEN/GH_TOKEN via the App's
installation-access flow (~hourly rotation). Per-agent Gitea
identities replaced this approach after the 2026-05-06 suspension —
workspaces now provision with a per-persona Gitea PAT from .env
instead of an App-rotated token. The plugin code itself lived on
github.com/Molecule-AI/molecule-ai-plugin-github-app-auth which is
also unreachable post-suspension; checking it out at CI build time
was already failing.
Removed:
- workspace-server/cmd/server/main.go: githubappauth import + the
`if os.Getenv("GITHUB_APP_ID") != ""` block that called
BuildRegistry. gh-identity remains as the active mutator.
- workspace-server/Dockerfile + Dockerfile.tenant: COPY of the
sibling repo + the `replace github.com/Molecule-AI/molecule-ai-
plugin-github-app-auth => /plugin` directive injection.
- workspace-server/go.mod + go.sum: github-app-auth dep entry
(cleaned up by `go mod tidy`).
- 3 workflows: actions/checkout steps for the sibling plugin repo:
- .github/workflows/codeql.yml (Go matrix path)
- .github/workflows/harness-replays.yml
- .github/workflows/publish-workspace-server-image.yml
Verified `go build ./cmd/server` + `go vet ./...` pass post-removal.
#161 — swap GHCR→ECR for publish-workspace-server-image
=======================================================
Same workflow used to push to ghcr.io/molecule-ai/platform +
platform-tenant. ghcr.io/molecule-ai is gone post-suspension. The
operator's ECR org (153263036946.dkr.ecr.us-east-2.amazonaws.com/
molecule-ai/) already hosts platform-tenant + workspace-template-*
+ runner-base images and is the post-suspension SSOT for container
images. This PR aligns publish-workspace-server-image with that
stack.
- env.IMAGE_NAME + env.TENANT_IMAGE_NAME repointed to ECR URL.
- docker/login-action swapped for aws-actions/configure-aws-
credentials@v4 + aws-actions/amazon-ecr-login@v2 chain (the
standard ECR auth pattern; uses AWS_ACCESS_KEY_ID/SECRET secrets
bound to the molecule-cp IAM user).
The :staging-<sha> + :staging-latest tag policy is unchanged —
staging-CP's TENANT_IMAGE pin still points at :staging-latest, just
with the new registry prefix.
Refs molecule-core#157, #161; parallel to org-wide CI-green sweep.
The README hadn't been refreshed since the v0 wave. Several major
shipped surfaces weren't called out (Canvas v4 warm-paper theme,
Memory v2 with pgvector, RFC #2967 typed-SSOT A2A response path,
the SaaS control plane, the molecule-mcp-claude-channel plugin we
just shipped via v0.4.0/0.4.1/0.4.2). The runtime list still said
"6" when 8 are in production. The icon was a 1.3 MB PNG with no
light-mode variant.
- New `docs/assets/branding/molecule-icon.svg` matches the landing
page's `public/favicon.svg` shape (5-spoke molecular graph) but
carries `prefers-color-scheme` styles so it adapts to GitHub's
light/dark modes. The PNG stays for back-compat with anything
that hotlinks it.
- `docs/assets/branding/molecule-logo.svg` adds a wordmark variant
for places that want the brand name alongside the icon.
- README hero replaces the PNG `<img>` with the SVG so contributors
reading on GitHub light see a tinted version that doesn't blow
out the page background.
- **8 production runtimes** named explicitly throughout: Claude
Code, Hermes, Gemini CLI, LangGraph, DeepAgents, CrewAI, AutoGen,
OpenClaw. Comparison table grew Hermes 4 + Gemini CLI rows with
the integration mechanism (Option B upstream hook, A2A bridge,
multi-provider derivation).
- **Canvas v4** — warm-paper theme system (light / dark / follow-
system) called out alongside the existing Next.js 15 / React Flow /
Zustand stack.
- **Memory v2 backed by pgvector** — semantic recall callout in
both the "memory model" pitch line and the runtime stack section.
- **RFC #2967 typed-SSOT A2A response path** named in the platform
ship list + architecture diagram.
- **SaaS surface section** added — multi-tenant EC2 + Neon +
Cloudflare Tunnels, WorkOS + Stripe, KMS envelope, tenant_resources
audit + 30-min reconciler. Cross-links to molecule-controlplane.
- **molecule-mcp-claude-channel plugin** added — entry point for
Claude Code users to bridge A2A traffic into a local session via
MCP. Documents the standard marketplace install flow + multi-
tenant config.
- **Architecture diagram** redrawn with Canvas → Platform → Postgres
+ Provisioner (Docker | EC2+SSM) layout, plus a SaaS control plane
block.
- **Quick Start** repo URL fixed (`molecule-monorepo` → `molecule-core`),
Go version bumped to 1.25, Python ≥3.11 noted.
- Deploy buttons + Quick Start URL all bump from the old
`molecule-monorepo` name to the current `molecule-core`. Pre-fix
these clicked through to a 404.
The provisioner refactor (`registry.go` deletion + RegistryPrefix
env-driven changes) that lived alongside an earlier draft of this
README on the `docs/readme-refresh-2026-05-06` branch is OUT of
this PR — that work shipped separately via #6. This branch is
docs-only so the review surface is small and the merge is reversible.
- `git diff staging --stat`:
```
README.md | 75 +++++++++++++++++++++++-----------
docs/assets/branding/molecule-icon.svg | 28 +++++++++++++
docs/assets/branding/molecule-logo.svg | 17 ++++++++
3 files changed, 97 insertions(+), 23 deletions(-)
```
- SVGs validated in a browser at light + dark `prefers-color-scheme`.
- All linked docs (./docs/index.md, ./docs/quickstart.md, ./docs/
architecture/architecture.md, ./docs/api-protocol/platform-api.md,
./docs/agent-runtime/workspace-runtime.md, ./LICENSE, etc.) verified
to exist on staging.
- README.zh-CN.md mirror — non-trivial translation work; file as
separate issue if mirror is wanted.
- molecule-ai/.github org-profile README — Gitea has no equivalent
to GitHub's org-profile surface, and the GitHub org is suspended.
Skipped.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
The v2 dropped codex from TEMPLATES on the basis of "no
publish-image.yml = not part of cascade today." That was correct
about the immediate behavior but tripped cascade-list-drift-gate.yml
because manifest.json still declares codex (it IS a live runtime —
referenced from workspace/config.py and cloned into dev envs by
clone-manifest.sh; only the image-publish path is missing).
Restore codex to TEMPLATES (matching manifest) and add a runtime
soft-skip: probe each repo for .github/workflows/publish-image.yml
via the Gitea contents API and skip cleanly if 404. Final job log
distinguishes "complete across all" vs "complete with soft-skips".
This preserves the drift gate's invariant (TEMPLATES == manifest)
while honoring the empirical fact that codex has no publish-image
workflow yet. If codex later gains the workflow, no change here is
needed — the probe will see 200 and the cascade will fan out to it
naturally.
Refs molecule-core#14, molecule-core#20.
Empirical blocker on v1: Gitea 1.22.6 has no repository_dispatch /
workflow_dispatch trigger API (verified across 6 candidate paths in
issuecomment-913). v1's curl-POST loop would always exit-1.
v2 pivots to push-mode: each template repo got a small companion PR
(merged 2026-05-07) adding a `.runtime-version` file at root + a
`resolve-version` job in publish-image.yml that reads the file and
forwards the value to the reusable build workflow. publish-runtime
now updates that file via git-clone + commit + push, which trips
each template's existing `on: push: branches: [main]` trigger.
Behaviour changes vs v1:
- Templates list dropped from 9 → 8 (codex has no publish-image.yml
so was never part of the cascade in practice).
- 3-retry pull-rebase loop per template (handles concurrent-push
races without force-push). Failures collected, job exits 1 with
the failed-template list at the end.
- Idempotency: when re-run with the same version, templates already
pinned to that version contribute zero commits — operator can
safely re-run to retry partial failures.
- Author line: "publish-runtime cascade <publish-runtime@moleculesai
.app>" trailer makes it clear the commit is workflow-driven, not
human (per memory feedback_github_botring_fingerprint).
DISPATCH_TOKEN secret name unchanged (still consumed at
secrets.DISPATCH_TOKEN per 569df259).
Refs molecule-core#14, builds on molecule-core#20 issuecomment-923
(Phase 2 design).
The cascade workflow was reading from `secrets.TEMPLATE_DISPATCH_TOKEN`
but the plumbed secret name is `DISPATCH_TOKEN` (verified just now via
GET /repos/molecule-ai/molecule-core/actions/secrets — only DISPATCH_TOKEN
is set). Without this rename the cascade would always evaluate "secret
missing" and exit 1 on the next push to staging, defeating the entire
point of grant-role-access.sh --apply that just landed.
Three references updated:
- env mapping (`secrets.X` → `secrets.DISPATCH_TOKEN`)
- workflow_dispatch warning text
- push-trigger error text
The bash-side variable name is unchanged (still `DISPATCH_TOKEN`) so
the curl invocation at line 372 is unaffected. YAML round-trip parses
clean.
## Symptom
`publish-runtime.yml::cascade` fired a `repository_dispatch` to 10 workspace-template
repos via direct curl to `https://api.github.com/repos/...`. Post-2026-05-06 the
org's GitHub presence is suspended; every invocation 404s. The job's
`:⚠️:` posture meant the failure didn't propagate, leaving the runtime
PyPI publish → template image rebuild pipeline silently broken.
## Why Option A (rewrite) and not Option B (delete)
Verified 2026-05-07 by devops-engineer (molecule-core#14 thread):
- The cron-poll mechanism (/etc/cron.d/molecule-deploy-poll) tracks ONLY the
Vercel/Railway-deployed repos (landingpage/docs/molecule-app/molecules-market
/molecule-controlplane). It does NOT track workspace-template-* repos.
- Each of the 9 template `publish-image.yml` workflows has
`repository_dispatch: types: [runtime-published]` as a load-bearing trigger.
Without the cascade, when the runtime ships a new PyPI version, templates
don't auto-rebuild.
So Option B (delete) would silently break the runtime → template fan-out.
Option A (rewrite to Gitea's API shape) is the right call. Security-auditor
agreed after seeing the cron-poll TRACKED list.
## API surface change
| Concern | Pre-fix (GitHub) | Post-fix (Gitea) |
|---|---|---|
| URL | `https://api.github.com/repos/$REPO/dispatches` | `${GITEA_URL}/api/v1/repos/$REPO/dispatches` |
| Owner case | `Molecule-AI/...` | `molecule-ai/...` (lowercase, Gitea is case-sensitive) |
| Auth header | `Authorization: Bearer $DISPATCH_TOKEN` | `Authorization: token $DISPATCH_TOKEN` |
| Body shape | `{event_type, client_payload}` | UNCHANGED — Gitea is GitHub-compatible here |
| Success code | `204 No Content` | `204 No Content` (unchanged) |
`GITEA_URL` defaults to `https://git.moleculesai.app`; overridable via job env.
## Out-of-band: DISPATCH_TOKEN secret rotation
The DISPATCH_TOKEN secret was a GitHub PAT. It must be re-minted as a Gitea
PAT for the new API to authenticate. Per saved memory
`feedback_per_agent_gitea_identity_default`, this should be a dedicated
`publish-runtime-bot` persona token with `write:repository` scope on the
9 target repos — NOT the founder PAT.
This PR ships the workflow change. Token rotation is the operator-host
follow-up (security-auditor's lane) — coordinate the merge so the token
is in place before the next runtime release fires.
## Backwards compatibility
The workflow ran silently-broken since 2026-05-06 (every invocation 404
+ :⚠️: but no failure). So there is no functional regression from
"silently broken" to "actually working". Any in-progress operator-managed
manual dispatch path is unaffected; the Gitea API parallel path doesn't
require operator intervention.
## Test plan
- [x] YAML parse OK on the modified workflow file
- [ ] Smoke test: trigger a runtime publish (or simulate via dispatching to one
template) post-merge; verify HTTP 204 + the template's publish-image
workflow fires + the template's image gets re-pushed against the new
runtime version. Phase 4 verification belongs to internal#46 follow-up.
## Hostile self-review (3 weakest spots)
1. The fan-out remains all-or-nothing: a single template failure surfaces as
a `:⚠️:` but PyPI publish proceeds. With 9 templates this is a
~10% per-template chance of stale-image-on-runtime-bump if any one fails.
Defense: the warning shows up in the workflow summary; operators retry.
Future hardening: requeue-on-fail with bounded retry, or a separate
reconcile cron that detects template/runtime version drift and re-dispatches.
2. `DISPATCH_TOKEN` validity is enforced by the Gitea API (401 on stale)
but the workflow doesn't differentiate 401 from 404. Either way the
warning fires. Future hardening: explicit token-shape check at the start
of the cascade job (curl `/api/v1/user` once, fail-fast if 401).
3. Owner-case lowercase is right today but couples the workflow to the
current Gitea org slug. If the org is ever renamed, this workflow
breaks silently. Less fragile alternative: derive REPO from a
canonical config (e.g. `gh repo list molecule-ai`) instead of
string-concatenating. Acceptable today; filed as the same future
hardening pass as item 1.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mechanical pin: 4 `actions/upload-artifact@v4.6.2/v7.0.1` uses → `@v3`. v4+/v7+
rely on a runtime API shape that Gitea's act_runner v0.6.x doesn't fully
support. v3 uses the legacy server protocol act_runner ships end-to-end.
Files (4 uses):
- .github/workflows/ci.yml:238 (v4.6.2 → v3)
- .github/workflows/codeql.yml:124 (v7.0.1 → v3)
- .github/workflows/e2e-staging-canvas.yml:142 (v7.0.1 → v3)
- .github/workflows/e2e-staging-canvas.yml:150 (v7.0.1 → v3)
YAML parse green on all 3 files.
Sister PRs land for `molecule-controlplane` and `codex-channel-molecule`.
Per internal#46 Phase 2 audit; tracked under that umbrella.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Per documentation-specialist's grep agent (2026-05-07T07:30, see
internal#46): runtime-breaking ghcr.io references in shell scripts +
docker-compose + the slip-past-workflow lint_secret_pattern_drift.py
all need migration. These were missed by security-auditor's
workflow-only audit.
Files (6):
- .github/scripts/lint_secret_pattern_drift.py:40 — workspace-runtime
pre-commit-checks.sh consumer URL: raw.githubusercontent.com →
Gitea raw URL (https://git.moleculesai.app/molecule-ai/.../raw/
branch/main/...). The lint job runs in CI and would 404 today.
- scripts/refresh-workspace-images.sh:54 — workspace-template image
pull URL: ghcr.io → ECR (153263036946.dkr.ecr.us-east-2.amazonaws.com).
- scripts/rollback-latest.sh — full rewrite of header + auth flow:
* ghcr.io/molecule-ai/{platform,platform-tenant} → ECR
* GITHUB_TOKEN with write:packages → AWS ECR auth
(aws ecr get-login-password). Per saved memory
reference_post_suspension_pipeline, prod cutover is to ECR.
* Updated header docs to match new auth flow + prereqs.
- scripts/demo-freeze.sh:13,17 — comment-only ghcr → ECR
(the script doesn't currently exec these URLs, but the comments
describe the cascade and need to match reality).
- docker-compose.yml:215-216 — canvas image: ghcr.io → ECR + updated
the auth comment to describe `aws ecr get-login-password` flow.
- tools/check-template-parity.sh:21 — inline curl install instructions:
raw.githubusercontent.com → Gitea raw URL.
Hostile self-review:
1. rollback-latest.sh's GITHUB_TOKEN→aws-cli auth swap is a behavior
change. Operators using this script now need aws CLI
authenticated for region us-east-2 with ECR pull/push perms.
Documented in updated header. Operators who don't have aws CLI
will get 'aws: command not installed' which is a clear failure
mode (not silent).
2. The Gitea raw URL shape (/raw/branch/main/) differs from GitHub's
raw.githubusercontent.com structure. Verified pattern by
inspecting other Gitea raw URLs in the codebase. If Gitea's URL
changes (1.23+), update via the same one-line edit.
3. Doesn't touch packer/scripts/install-base.sh which has a similar
ghcr.io ref per the grep agent's findings — that's bigger-scope
(packer build pipeline) and lives in molecule-controlplane-ish
territory; filing as parked follow-up under #46 if not already.
Refs: molecule-ai/internal#46, molecule-ai/internal#37,
molecule-ai/internal#38, saved memory reference_post_suspension_pipeline
The molecule-ai-workspace-runtime mirror is regenerated on every
runtime-v* tag from this monorepo's workspace/. Per saved memory
reference_runtime_repo_is_mirror_only, mirror-guard rejects direct
PRs to the mirror; edit at source.
Source-side files that propagate to the mirror's published README +
read by users of the in-monorepo workspace-runtime docs:
- scripts/build_runtime_package.py (the README generator):
* line 281 README_TEMPLATE: 'Shared workspace runtime for Molecule
AI' link → Gitea
* line 399 doc-link to workspace-runtime-package.md → Gitea path
(with /src/branch/main/ shape)
LEFT AS-IS (per Q3 audit-trail decision):
* lines 379, 392 historical issue cross-refs (#2936, #2937)
- workspace/build-all.sh:5 — comment block linking to template-*
repos. Migrated to Gitea path-shape.
- docs/workspace-runtime-package.md:
* lines 101-108 adapter→repo table (8 templates, all PUBLIC on
Gitea) — Gitea URLs
* line 247 starter-repo link — substituted host + added inline
note that starter doesn't survive the suspension migration
(recreation pending; cross-link to this issue)
* line 259 generic git clone command for new templates → Gitea
* line 289 second starter mention — same handling as 247
Files NOT touched in this PR:
- workspace/ Python source code (.py files) — those use github
paths in docstrings + a few log strings; fix bundled with the
cross-repo Go-module-style migration (per #37 Q5 + parked
follow-ups).
- 'Writing a new adapter' section's `gh repo create` command (line
254-256) — gh CLI doesn't talk to Gitea (per #45 parked follow-up).
- 'Writing a new adapter' section's ghcr.io image ref (line 276) —
per #46 ghcr→ECR migration (separate concern).
After this PR merges to staging + a runtime-v* tag is pushed, the
mirror's published README will inherit the Gitea link. Until then
the mirror's README continues to reference github.com/Molecule-AI
(stale but historical-marker-correct since the mirror existed
pre-suspension).
Refs: molecule-ai/internal#41, molecule-ai/internal#37,
molecule-ai/internal#38, molecule-ai/internal#42,
molecule-ai/internal#45, molecule-ai/internal#46
## Symptom
Canvas detail-panel "config + filesystem load" took ~20s. Reported on
production hongming tenant, workspace c7c28c0b-... (Claude Code Agent T2).
## Two stacked latency sources
### 1. Server-side: per-call EIC tunnel setup (~80% of the win)
`workspace-server/internal/handlers/template_files_eic.go::realWithEICTunnel`
performed ssh-keygen + SendSSHPublicKey + open-tunnel + waitForPort PER call.
4 callers (read/write/list/delete) each paid the full ~3-5s setup cost even
when fired back-to-back on the same workspace EC2.
Fix: refcounted pool keyed on instanceID with TTL ≤ 50s (under the 60s
SendSSHPublicKey grant). One tunnel serves N file ops; concurrent acquires
for the same instance share the slot via a pendingSetups gate; LRU eviction
caps simultaneous tracked instances at 32. Poisons entries on tunnel-fatal
errors (connection refused, broken pipe, auth failed) so the next acquire
builds fresh. Cleanup on panic via defer-release pattern (added after
self-review caught a refcount-leak hazard).
Public API unchanged — `var withEICTunnel` rebinds to `pooledWithEICTunnel`
at package init, so all 4 callers inherit pooling for free.
10 unit tests pin: 4-ops-amortise (1 setup), different-instances-do-not-share,
TTL eviction, poison invalidates, concurrent-acquire-single-setup,
TTL=0 escape hatch, LRU eviction at cap, error classification heuristic,
refcount blocks expired eviction, panic poisons entry. All green.
### 2. Canvas-side: serial fan-out + duplicate fetch (~20% of the win)
`canvas/src/components/tabs/ConfigTab.tsx::loadConfig` awaited 3 independent
metadata GETs (`/workspaces/{id}`, `/model`, `/provider`) serially.
`AgentCardSection` fired a SECOND `/workspaces/{id}` from its own useEffect.
Fix: Promise.all over the 3 metadata GETs (each leg keeps its existing
.catch fallback semantics). AgentCardSection now reads `agentCard` from
the canvas store (`useCanvasStore`) instead of refetching — the canvas
already hydrates `node.data.agentCard` from the platform event stream.
Defensive selector handles test mocks without a `nodes` array.
## Verification
- `go test ./internal/handlers/` 5.07s green (full handlers package, including
10 new pool tests)
- `go vet ./internal/handlers/` clean
- `npx vitest run` — 1380/1380 canvas unit tests pass (2 test FILES fail on
a pre-existing xyflow CSS-load issue in vitest config, unrelated to this
change)
- `npx tsc --noEmit` clean
Live wall-time verification deferred to Phase 4 / E2E (canvas browser session
required; external probe blocked by 403 since the canvas auth chain is
session-cookie + Origin header, not a bearer token I can fabricate).
## Backwards compatibility
API surface unchanged. All 4 EIC handler callers use the rebound var; no
caller migration. Pool defaults to enabled (TTL=50s); tests can disable by
setting poolTTL=0 or by overwriting withEICTunnel directly (existing stub
pattern in template_files_eic_dispatch_test.go preserved).
## Hostile self-review (3 weakest spots)
1. `fnErrIndicatesTunnelFault` is a substring grep on err.Error() — the
marker list is hand-curated and ssh client error formats vary across
OpenSSH versions. A future ssh that reports a tunnel failure via a
phrasing not in the list would NOT poison the entry → next callers reuse
a dead tunnel until TTL evicts. Acceptable: TTL bounds the impact (≤50s
of bad reuse), and the heuristic covers every tunnel-error shape that
appears in the existing test fixtures and known incidents.
2. `acquire`'s for-loop has unbounded retry potential under pathological
churn (signal closed → new acquirer → setup fails → repeat). No bounded
retry counter. Today there is no test exercise for "flaky setup that
succeeds-then-fails-then-succeeds"; if observability ever shows this
shape, add a max-retry guard. Filed as a known limitation, not blocking.
3. The substring assertion `strings.Contains` style I used for tunnel-fault
classification could false-positive on app-level error messages that
happen to contain "permission denied" or "broken pipe" verbatim. The
classification test covers the discriminator but only against the
error shapes we know today. Acceptable: poisoning errs on the side of
building fresh, which is correct-but-slightly-slow rather than incorrect.
## Phase 4 / E2E plan
- Live timing of the canvas detail-panel open against a real workspace
(browser session, not external probe).
- Target: perceived latency under 2s on warm pool. Cold open still pays
one tunnel setup (~3-5s) — the pool buys you the SECOND through Nth
panel-open within the TTL window.
- Memory `feedback_chase_verification_to_staging` applies — will not
declare done at PR-merge; will follow through to user-visible behavior
on staging.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the SSOT story shipped in PR-C/D: canvas now consumes the typed
/chat-history endpoint instead of /activity?type=a2a_receive, and the
server emits messages in display-ready chronological order so the
client doesn't have to re-order them.
## Canvas (consumer migration)
- loadMessagesFromDB swaps from /activity to /chat-history.
- Drops type=a2a_receive + source=canvas params (server applies the
filter centrally now).
- Drops [...activities].reverse() — wire is already display-ready.
- Drops the local INTERNAL_SELF_MESSAGE_PREFIXES constant +
isInternalSelfMessage helper. Server-side IsInternalSelfMessage
applies the same predicate before emitting rows.
- Drops the activityRowToMessages + ActivityRowForHydration imports
from historyHydration.ts. The TS parser stays in tree because
message-parser.ts is still load-bearing for live A2A WebSocket
messages (ChatTab.tsx:805, AgentCommsPanel.tsx, canvas-events.ts).
## Server (row-aware wire-order fix)
The pre-PR-C-2 client did `[...activities].reverse()` over ROWS, then
flattened each row into [user, agent] messages. The reversal was
ROW-aware. After PR-C/D, the server returned a flat ChatMessage slice
in `ORDER BY created_at DESC` order, with [user, agent] within each
row. A naive client-side flat reverse would FLIP each pair (agent
before user at same timestamp).
Two ways to fix it:
A) Server emits oldest-first within page; canvas does NOT reverse.
B) Canvas does row-aware reversal (group by timestamp, reverse).
Option A is cleaner — server owns the wire-order responsibility, every
client trusts `for m of messages` to render chronologically. Server
adds reverseRowChunks() that:
1. Groups consecutive same-Timestamp messages into row chunks
(1-2 messages per row).
2. Reverses the chunk order (newest-row-first → oldest-row-first).
3. Flattens. Within-chunk [user, agent] order is preserved.
Single-message rows (agent reply not yet recorded, attachments-only
user upload) collapse to 1-element chunks and reverse correctly too.
## Tests
Server: 3 new unit tests on reverseRowChunks (paired across rows,
single-message rows, empty input) + 1 sqlmock integration test on
List() that drives the full SQL → reverse → wire path. Mutation-tested:
removed `messages = reverseRowChunks(messages)` from List(), confirmed
the integration test fires red with all 4 misordered indices flagged.
Restored, all 25 messagestore tests + 9 chat-history handler tests
green.
Canvas: 8 lazyHistory pagination tests refactored to mock
/chat-history (not /activity) and assert against the new wire shape
({messages, reached_end} not raw activity rows). All 1389/1389 vitest
tests green; tsc --noEmit clean.
## Three weakest spots (hostile-reviewer self-pass)
1. reverseRowChunks groups by Timestamp string equality. If two
distinct rows had the SAME timestamp (legitimately possible at sub-
millisecond granularity), the algorithm would treat them as one
chunk and not reverse them relative to each other. Mitigated:
activity_logs.created_at uses microsecond resolution; concurrent
inserts at exact-same microsecond are vanishingly rare. If a
collision happens, the within-chunk order is whatever the SQL
returned — both rows render at the same timestamp, no user-visible
misordering.
2. The pre-existing TS parser files (historyHydration.ts +
message-parser.ts) stay in tree. historyHydration.ts is now dead
code (no consumers post-migration); deletion is parked as a follow-
up after a one-week observation window confirms no live-message
consumer reaches it.
3. canvas's loadMessagesFromDB returns `resp.messages ?? []`. If the
server were ever to return `null` instead of `[]` (it currently
doesn't — handler defensively coerces nil to []), the nullish coalesce
keeps the canvas from crashing. A stricter wire schema would assert
the never-null invariant; for today's pragmatic safety, the ?? is
enough.
## Security review
- Untrusted input? Same as PR-C — agent JSON parsed defensively in
the messagestore parser. No new exposure.
- Trust boundary? Same. Canvas → /chat-history → wsAuth → messagestore.
- Output sanitization? Plain text + opaque attachment URIs as before.
No security-relevant changes beyond what /chat-history already
exposes via PR-C. Considered, not skipped.
## Versioning / backwards compat
- /activity endpoint unchanged.
- /chat-history endpoint shape unchanged (still {messages, reached_end});
only the wire ORDER within a page changed (newest-first row → oldest-
first row). Canvas is the only consumer in tree; no API consumers
depend on the previous order.
- canvas's loadMessagesFromDB call signature unchanged — internal
refactor.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
The deprovision path marks `workspaces.status='removed'` BEFORE calling
the controlplane DELETE. If that CP call fails (transient 5xx, network
hiccup, AWS provider error), the DB row stays at 'removed' with
`instance_id` populated and there's no retry — the EC2 lives forever.
9 prod orphans accumulated over 3 days under this bug.
Adds a SaaS-mode counterpart to the existing Docker `orphan_sweeper`:
- 60s tick (matches the Docker sweeper cadence)
- LIMIT 100 per cycle so a sustained CP outage drains over multiple
cycles without blowing the request timeout
- Re-issues `cpProv.Stop` for any workspace at status='removed' with a
non-NULL `instance_id`. Stop is idempotent (AWS terminate on
already-terminated is a no-op; CP's Deprovision tolerates already-
deleted DNS) so retries are safe.
- On Stop success, NULLs `instance_id` so the next cycle skips the row.
- On Stop failure, leaves `instance_id` populated for next cycle.
The existing Docker sweeper is gated on `prov != nil`; the new sweeper
is gated on `cpProv != nil`. SaaS tenants get exactly one of the two,
self-hosted tenants get the Docker one — no overlap.
Why this shape over option A (CP-first ordering) or B (durable outbox):
the existing inline path already returns a loud 500 to the user when
CP fails — the only missing piece is automatic retry, which a 60s
sweeper provides without protocol changes, new tables, or new workers.
~30 LOC of production code vs. ~400 for an outbox. RFC discussion in
#2989 comment chain.
Tests:
- 9 unit tests covering happy path, Stop failure, UPDATE failure,
multiple orphans (one-fails-others-still-process), DB query error,
nil-DB defense, nil-reaper short-circuit, and the boot-immediate-then-
tick cadence contract.
- Mutation-tested: status='running' substitution and removed-UPDATE-
block both fail at least one test.
Out of scope:
- Backfilling the 9 named orphans — they'll heal automatically on the
first sweep cycle after this lands; no manual cleanup needed.
- Long-term durable-outbox architecture — separate RFC.
2026-05-06 16:43:33 -07:00
164 changed files with 13008 additions and 2548 deletions
echo" [debug] teams-list body (first 300 chars):" >&2
head -c 300"$ORG_TEAMS_FILE" >&2;echo >&2
fi
if["$HTTP_CODE" !="200"];then
echo"::error::GET /orgs/${OWNER}/teams returned HTTP $HTTP_CODE — token likely lacks read:org scope. Add a SOP_TIER_CHECK_TOKEN secret with read:organization scope at the org level."
exit1
fi
declare -A TEAM_ID
for T in $ELIGIBLE;do
ID=$(jq -r --arg t "$T"'.[] | select(.name==$t) | .id' <"$ORG_TEAMS_FILE"| head -1)
debug "probe: $U in team $T (id=$ID) → HTTP $CODE"
if["$CODE"="200"]||["$CODE"="204"];then
echo"::notice::approver $U is in team $T (eligible for $TIER)"
OK="yes"
break
fi
done
[ -n "$OK"]&& break
done
if[ -z "$OK"];then
echo"::error::Tier $TIER requires approval from a non-author member of {$ELIGIBLE}. Got approvers: $APPROVERS — none of them satisfied team membership. Set SOP_DEBUG=1 to see per-probe HTTP codes."
exit1
fi
echo"::notice::sop-tier-check passed: $TIER, approver in {$ELIGIBLE}"
Automated promotion of \`staging\` (\`${TARGET_SHA:0:8}\`) to \`main\`. All required staging gates green at this SHA: CI, E2E Staging Canvas, E2E API Smoke, CodeQL.
This PR is auto-generated by \`.github/workflows/auto-promote-staging.yml\` whenever every required gate completes green on the same staging SHA. It exists because main's branch protection requires status checks "set by the expected GitHub apps" — direct \`git push\` from a workflow can't satisfy that, only PR merges through the queue can.
Merge queue lands this; no human action needed unless gates fail. Reverse-direction sync (the merge commit on main → staging) is handled by \`auto-sync-main-to-staging.yml\`.
if [ -n "$MERGED" ] && [ "$MERGED" != "null" ]; then
echo "::notice::Promote PR#${PR_NUM} merged at ${MERGED}"
break
fi
if [ "$STATE" = "CLOSED" ]; then
echo "::warning::Promote PR#${PR_NUM} was closed without merging — skipping deploy dispatch."
exit 0
fi
sleep 30
done
if [ -z "$MERGED" ] || [ "$MERGED" = "null" ]; then
echo "::warning::Promote PR#${PR_NUM} didn't merge within 30min — skipping deploy dispatch (manually run \`gh workflow run publish-workspace-server-image.yml --ref main\` once it lands)."
exit 0
fi
# Dispatch publish on main using the App token. App-initiated
# workflow_dispatch DOES propagate the workflow_run cascade,
# unlike GITHUB_TOKEN-initiated dispatch.
# publish completes → canary-verify chains via workflow_run →
# redeploy-tenants-on-main chains via workflow_run + branches:[main].
if gh workflow run publish-workspace-server-image.yml \
--repo "$REPO" --ref main 2>&1; then
echo "::notice::Dispatched publish-workspace-server-image on ref=main as molecule-ai App — canary-verify and redeploy-tenants-on-main will chain via workflow_run."
{
echo "## 🚀 Tenant redeploy chain dispatched"
echo
echo "- publish-workspace-server-image (workflow_dispatch on \`main\`, actor: \`molecule-ai[bot]\`)"
echo "- canary-verify will chain on completion"
echo "- redeploy-tenants-on-main will chain on canary green"
}>> "$GITHUB_STEP_SUMMARY"
else
echo "::error::Failed to dispatch publish-workspace-server-image. Run manually: gh workflow run publish-workspace-server-image.yml --ref main"
fi
# ALSO dispatch auto-sync-main-to-staging.yml. Same root cause as
# publish above (issue #2357): the merge-queue-initiated push to
# main is by GITHUB_TOKEN → no `on: push` triggers fire downstream.
# Without this dispatch, every staging→main promote leaves staging
# one merge commit BEHIND main, which silently dead-locks the NEXT
# promote PR as `mergeStateStatus: BEHIND` because main's
# branch-protection has `strict: true`. Verified empirically on
# 2026-05-02 against PR #2442 (Phase 2 promote): only the explicit
# publish-workspace-server-image dispatch fired on the previous
# promote SHA 76c604fb, while auto-sync silently no-op'd, leaving
# staging behind for ~24h until manually bridged.
if gh workflow run auto-sync-main-to-staging.yml \
--repo "$REPO" --ref main 2>&1; then
echo "::notice::Dispatched auto-sync-main-to-staging on ref=main as molecule-ai App — staging will absorb the new main merge commit via PR + merge queue."
else
echo "::error::Failed to dispatch auto-sync-main-to-staging. Run manually: gh workflow run auto-sync-main-to-staging.yml --ref main"
# Find existing PR for this branch (idempotent on workflow
# restart) before creating a new one.
PR_NUM=$(gh pr list --head "$BRANCH" --base staging --state open --json number --jq '.[0].number // ""')
if [ -z "$PR_NUM" ]; then
# Body lives in a temp file to keep the multi-line content
# out of the YAML block scalar (un-indented newlines inside
# an inline shell string break YAML parsing).
BODY_FILE=$(mktemp)
if [ "$DID_FF" = "true" ]; then
TITLE="chore: sync main → staging (auto, ff to ${MAIN_SHORT})"
cat > "$BODY_FILE" <<EOFBODY
Automated fast-forward of \`staging\` to \`origin/main\` (\`${MAIN_SHORT}\`). Staging has no in-flight commits that diverge from main. Merge queue lands this; no human action needed.
This PR is auto-generated by \`.github/workflows/auto-sync-main-to-staging.yml\` on every push to \`main\`. It exists because this repo's \`staging\` branch has a \`merge_queue\` ruleset that blocks direct pushes — even from the GitHub Actions integration.
EOFBODY
else
TITLE="chore: sync main → staging (auto, merge ${MAIN_SHORT})"
cat > "$BODY_FILE" <<EOFBODY
Automated merge of \`origin/main\` (\`${MAIN_SHORT}\`) into \`staging\`. Staging has commits main doesn't, so this is a non-ff merge that absorbs main's tip. Merge queue lands this.
This PR is auto-generated by \`.github/workflows/auto-sync-main-to-staging.yml\` on every push to \`main\`.
EOFBODY
fi
# gh pr create prints the URL on stdout; extract the PR number.
gh pr comment "$PR" -R "$REPO" --body "🔒 Auto-merge disabled — new commit (\`${NEW_SHA:0:7}\`) pushed after auto-merge was enabled. The merge queue locks SHAs at entry, so subsequent pushes can race. Verify the new commit and re-enable with \`gh pr merge --auto\`."
- name:Gitea no-op
if:steps.host.outputs.is_gitea == 'true'
run:echo "Gitea Actions — auto-merge gating not applicable; no-op (job intentionally green so branch protection's required-check name lands SUCCESS)."
# after the sweep-cf-orphans soft-skip incident — same class
# of bug):
#
# The earlier "skipping cascade. templates will pick up the
# new version on their own next rebuild" message was wrong —
# templates only build on this dispatch trigger; without it
# they stay pinned to whatever runtime version they last saw.
# A silent skip here means "PyPI is current, templates are
# not" and the gap is invisible until someone notices a
# template still on the old version weeks later.
#
# - push → exit 1 (red CI surfaces the gap)
# - workflow_dispatch → exit 0 with a warning (operator
# ran this ad-hoc; let them rerun
# after fixing the secret)
# Soft-skip on workflow_dispatch when the token is missing
# (operator ad-hoc test); hard-fail on push so unattended
# publishes can't silently skip the cascade. Same shape as
# the original v1, intentional split per the schedule-vs-
# dispatch hardening 2026-04-28.
if [ -z "$DISPATCH_TOKEN" ]; then
if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
echo "::warning::TEMPLATE_DISPATCH_TOKEN secret not set — skipping cascade."
echo "::warning::DISPATCH_TOKEN secret not set — skipping cascade."
echo "::warning::set it at Settings → Secrets and Variables → Actions, then rerun. Templates will stay on the prior runtime version until either this token is set or each template is rebuilt manually."
exit 0
fi
echo "::error::TEMPLATE_DISPATCH_TOKEN secret missing — cascade cannot fan out."
echo "::error::DISPATCH_TOKEN secret missing — cascade cannot fan out."
echo "::error::PyPI was published, but the 8 template repos will NOT pick up the new version until this token is restored and a republish dispatches the cascade."
echo "::error::set it at Settings → Secrets and Variables → Actions; then re-trigger publish-runtime via workflow_dispatch."
exit 1
@ -327,37 +318,119 @@ jobs:
echo "::error::publish job did not expose a version output — cascade cannot fan out"
exit 1
fi
# All 9 active workspace template repos. The PR #2536 pruning
# ("deprecated, no shipping images") was empirically wrong:
# continuous-synth-e2e.yml defaults to langgraph as its primary
# canary (line 44), and every excluded template had successful
# publish-image runs as of 2026-05-03 — none were dormant.
# Symptom of the prune: today's a2a-sdk strict-mode fix
# (#2566 / commit e1628c4) cascaded to 4 templates but never
# reached langgraph, so the synth-E2E correctly canary'd a fix
# that had landed but not deployed. Re-added the 5 templates.
# Long-term: derive this list from manifest.json so cascade
# scope can't drift from E2E scope — tracked in RFC #388 as a
# Phase-1 invariant.
# All 9 workspace templates declared in manifest.json. The list
# MUST stay aligned with manifest.json's workspace_templates —
# cascade-list-drift-gate.yml enforces this in CI per the
# codex-stuck-on-stale-runtime invariant from PR #2556.
# Long-term goal: derive this list from manifest.json so it
# can't drift even on a manifest edit (RFC #388 Phase-1).
#
# Per-template publish-image.yml presence is checked at
# cascade-time below: codex doesn't ship one today, so the
# cascade soft-skips it with an informational message rather
# than dropping it from this list (which would re-introduce
# operators can retry the failed templates manually.
echo "::error::Cascade incomplete after 3 retries each. Failed templates:$FAILED"
echo "::error::PyPI publish succeeded; failed templates lag the new version. Re-run this workflow_dispatch with the same version to retry only the laggers (idempotent — already-cascaded templates skip)."
exit 1
fi
if [ -n "$SKIPPED" ]; then
echo "Cascade complete: pinned $VERSION on cascade-active templates. Soft-skipped (no publish-image.yml):$SKIPPED"
else
echo "Cascade complete: $VERSION pinned across all manifest workspace_templates."
# Specifically match the 422 duplicate-base/head error so
# any OTHER PATCH failure (auth, deleted PR, etc.) still
# surfaces as a real workflow failure.
if echo "$PATCH_OUTPUT" | grep -q "pull request already exists for base branch 'staging'"; then
echo "::notice::PR#${PR_NUMBER}: duplicate target-staging PR exists on same head — closing this main-PR as redundant."
gh pr close "$PR_NUMBER" \
--repo "${{ github.repository }}" \
--comment "[retarget-bot] Closing — another PR on the same head branch already targets \`staging\`. This PR is redundant. See issue#1884 for the rationale."
echo "::error::Retarget PATCH failed and was NOT a duplicate-base error:"
echo "$PATCH_OUTPUT" >&2
exit 1
- name:Post explainer comment
if:steps.retarget.outputs.outcome == 'retargeted'
env:
GH_TOKEN:${{ secrets.GITHUB_TOKEN }}
PR_NUMBER:${{ github.event.pull_request.number }}
run:|
gh pr comment "$PR_NUMBER" \
--repo "${{ github.repository }}" \
--body "$(cat <<'BODY'
[retarget-bot] This PR was opened against `main` and has been retargeted to `staging` automatically.
**Why:**per [SHARED_RULES rule 8](https://github.com/molecule-ai/molecule-ai-org-template-molecule-dev/blob/main/SHARED_RULES.md), all feature work targets `staging` first; the CEO promotes `staging → main` separately.
**Whatchanged:** just the base branch — no code change. CI will re-run against `staging`. If you get merge conflicts, rebase on `staging`.
**Ifthis PR is the CEO's staging→main promotion:** the Action skipped you (only bot-authored PRs are retargeted). If you see this comment on your CEO PR, that's a bug — please tag @HongmingWang-Rabbit.
@ -57,7 +57,7 @@ See `CLAUDE.md` for a full list of environment variables and their purposes.
This repo is scoped to **code** (canvas, workspace, workspace-server, related
infra). Public content (blog posts, marketing copy, OG images, SEO briefs,
DevRel demos) lives in [`Molecule-AI/docs`](https://github.com/Molecule-AI/docs).
DevRel demos) lives in [`Molecule-AI/docs`](https://git.moleculesai.app/molecule-ai/docs).
The `Block forbidden paths` CI gate fails any PR that writes to `marketing/`
or other removed paths — open against `Molecule-AI/docs` instead.
@ -110,7 +110,7 @@ causing a render loop when any node position changed.
1. **Repo-wide:** "Automatically delete head branches" is on. Once a PR merges, the branch is deleted server-side. Any subsequent `git push` to that branch fails with `remote rejected — no such branch`.
2. **CI:** the `pr-guards` workflow (calling [molecule-ci `disable-auto-merge-on-push`](https://github.com/Molecule-AI/molecule-ci/blob/main/.github/workflows/disable-auto-merge-on-push.yml)) fires on every push to an open PR. If auto-merge was already enabled, it's disabled and a comment is posted. You must explicitly re-enable after verifying the new commit.
2. **CI:** the `pr-guards` workflow (calling [molecule-ci `disable-auto-merge-on-push`](https://git.moleculesai.app/molecule-ai/molecule-ci/src/branch/main/.github/workflows/disable-auto-merge-on-push.yml)) fires on every push to an open PR. If auto-merge was already enabled, it's disabled and a comment is posted. You must explicitly re-enable after verifying the new commit.
**Workflow rules that follow from the guards:**
- Push **all** commits before running `gh pr merge --auto`.
@ -180,9 +180,9 @@ and run CI manually.
Code in this repo lands in molecule-core. Some related runtime artifacts
live in their own repos:
- [`Molecule-AI/molecule-ai-workspace-runtime`](https://github.com/Molecule-AI/molecule-ai-workspace-runtime) — Python adapter SDK (`molecule_runtime`) that runs inside containerized Molecule workspaces. Bridges Claude Code SDK / hermes / langgraph / etc. → A2A queue.
- [`Molecule-AI/molecule-sdk-python`](https://github.com/Molecule-AI/molecule-sdk-python) — `A2AServer` + `RemoteAgentClient` for external agents that register over the public `/registry/register` flow.
- [`Molecule-AI/molecule-mcp-claude-channel`](https://github.com/Molecule-AI/molecule-mcp-claude-channel) — Claude Code channel plugin. Bridges A2A traffic into a running Claude Code session via MCP `notifications/claude/channel`. Polling-based (no tunnel required); install with `claude --channels plugin:molecule@Molecule-AI/molecule-mcp-claude-channel`.
- [`Molecule-AI/molecule-ai-workspace-runtime`](https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-runtime) — Python adapter SDK (`molecule_runtime`) that runs inside containerized Molecule workspaces. Bridges Claude Code SDK / hermes / langgraph / etc. → A2A queue.
- [`Molecule-AI/molecule-sdk-python`](https://git.moleculesai.app/molecule-ai/molecule-sdk-python) — `A2AServer` + `RemoteAgentClient` for external agents that register over the public `/registry/register` flow.
- [`Molecule-AI/molecule-mcp-claude-channel`](https://git.moleculesai.app/molecule-ai/molecule-mcp-claude-channel) — Claude Code channel plugin. Bridges A2A traffic into a running Claude Code session via MCP `notifications/claude/channel`. Polling-based (no tunnel required); install with `claude --channels plugin:molecule@Molecule-AI/molecule-mcp-claude-channel`.
When extending the **A2A surface** in molecule-core (`workspace-server/internal/handlers/a2a_proxy.go` etc.), consider whether the change has a downstream impact on the runtime SDK or the channel plugin — they're versioned independently but share the wire shape.
[](https://railway.app/new/template?template=https://github.com/Molecule-AI/molecule-monorepo)
[](https://render.com/deploy?repo=https://github.com/Molecule-AI/molecule-monorepo)
[](https://railway.app/new/template?template=https://git.moleculesai.app/molecule-ai/molecule-core)
[](https://render.com/deploy?repo=https://git.moleculesai.app/molecule-ai/molecule-core)
</div>
@ -53,8 +53,8 @@ Molecule AI is the most powerful way to govern an AI agent organization in produ
It combines the parts that are usually scattered across demos, internal glue code, and framework-specific tooling into one product:
- one org-native control plane for teams, roles, hierarchy, and lifecycle
- one runtime layer that lets LangGraph, DeepAgents, Claude Code, CrewAI, AutoGen, and OpenClaw run side by side
- one memory model that keeps recall, sharing, and skill evolution aligned with organizational boundaries
- one runtime layer that lets **eight** agent runtimes — LangGraph, DeepAgents, Claude Code, CrewAI, AutoGen, **Hermes**, **Gemini CLI**, and OpenClaw — run side by side behind one workspace contract
- one memory model that keeps recall, sharing, and skill evolution aligned with organizational boundaries (Memory v2 backed by pgvector for semantic recall)
- one operational surface for observing, pausing, restarting, inspecting, and improving live workspaces
Most teams can build a workflow, a strong single agent, a coding agent, or a custom multi-agent graph.
@ -75,7 +75,7 @@ You do not wire collaboration paths by hand. Hierarchy defines the default commu
### 3. Runtime choice stops being a dead-end decision
LangGraph, DeepAgents, Claude Code, CrewAI, AutoGen, and OpenClaw can all plug into the same workspace abstraction. Teams can standardize governance without forcing every group onto one runtime.
LangGraph, DeepAgents, Claude Code, CrewAI, AutoGen, Hermes, Gemini CLI, and OpenClaw can all plug into the same workspace abstraction. Teams can standardize governance without forcing every group onto one runtime.
### 4. Memory is treated like infrastructure
@ -117,6 +117,8 @@ Molecule AI is not trying to replace the frameworks below. It is the system that
| **Claude Code** | Shipping on `main` | Real coding workflows, CLI-native continuity | Secure workspace abstraction, A2A delegation, org boundaries, shared control plane |
| **CrewAI** | Shipping on `main` | Role-based crews | Persistent workspace identity, policy consistency, shared canvas and registry |
| **OpenClaw** | Shipping on `main` | CLI-native runtime with its own session model | Workspace lifecycle, templates, activity logs, topology-aware collaboration |
| **NemoClaw** | WIP on `feat/nemoclaw-t4-docker` | NVIDIA-oriented runtime path | Planned to join the same abstraction once merged; not yet part of `main` |
@ -182,9 +184,10 @@ The result is not just “an agent that learns.” It is **an organization that
## What Ships In `main`
### Canvas
### Canvas (v4)
- Next.js 15 + React Flow + Zustand
- **warm-paper theme system** — light / dark / follow-system, SSR cookie + nonce'd boot script + ThemeProvider; terminal + code surfaces stay dark unconditionally
- drag-to-nest team building
- empty-state deployment + onboarding wizard
- template palette
@ -193,8 +196,9 @@ The result is not just “an agent that learns.” It is **an organization that
### Platform
- Go/Gin control plane
- workspace CRUD and provisioning
- Go 1.25 / Gin control plane (80+ HTTP endpoints + Gorilla WebSocket fanout)
- workspace CRUD and provisioning (pluggable Provisioner — Docker locally, EC2 + SSM in production)
- **A2A response path is a typed discriminated union (RFC #2967)** — frozen dataclasses + total parser; 100% unit + adversarial fuzz coverage
- registry and heartbeats
- browser-safe A2A proxy
- team expansion/collapse
@ -204,10 +208,10 @@ The result is not just “an agent that learns.” It is **an organization that
### Runtime
- unified `workspace/` image
- adapter-driven execution
- unified `workspace/` image; thin AMI in production (us-east-2)
- `tenant_resources` audit table + 30-min boot-event-aware reconciler — every CF / AWS lifecycle event recorded, claim vs live state diffed
### Bring your own Claude Code session (via [`molecule-mcp-claude-channel`](https://git.moleculesai.app/molecule-ai/molecule-mcp-claude-channel))
- Claude Code plugin that bridges Molecule A2A traffic into a local Claude Code session via MCP
- subscribe to one or more workspaces; peer messages surface as conversation turns; replies route back through Molecule's A2A
- no tunnel, no public endpoint — the plugin self-registers each watched workspace as `delivery_mode=poll` and long-polls `/activity?since_id=…`
- multi-tenant friendly: one plugin install can watch workspaces across multiple Molecule tenants (`MOLECULE_PLATFORM_URLS` per-workspace)
- install via the standard marketplace flow: `/plugin marketplace add Molecule-AI/molecule-mcp-claude-channel` → `/plugin install molecule-channel@molecule-mcp-claude-channel`
## Built For Teams That Need More Than A Demo
Molecule AI is especially strong when you need to run:
@ -233,24 +252,30 @@ Molecule AI is especially strong when you need to run:
# Defaults boot the stack locally out of the box. See .env.example for
@ -303,7 +328,11 @@ Then open `http://localhost:3000`:
## Current Scope
The current `main` branch already includes the core platform, canvas, memory model, six production adapters, skill lifecycle, and operational surfaces. Adjacent runtime work such as **NemoClaw** remains branch-level until merged, and this README keeps that distinction explicit on purpose.
The current `main` branch ships the core platform, Canvas v4 (warm-paper themed), Memory v2 (pgvector semantic recall), the typed-SSOT A2A response path (RFC #2967), **eight production adapters** (Claude Code, Hermes, Gemini CLI, LangGraph, DeepAgents, CrewAI, AutoGen, OpenClaw), skill lifecycle, and operational surfaces.
The companion private repo [`molecule-controlplane`](https://git.moleculesai.app/molecule-ai/molecule-controlplane) provides the SaaS surface — multi-tenant orchestration on EC2 + Neon + Cloudflare Tunnels, KMS envelope encryption, WorkOS auth, Stripe billing, and a `tenant_resources` audit table with a 30-min reconciler.
Adjacent runtime work such as **NemoClaw** remains branch-level until merged, and this README keeps that distinction explicit on purpose.
[](https://railway.app/new/template?template=https://github.com/Molecule-AI/molecule-core)
[](https://render.com/deploy?repo=https://github.com/Molecule-AI/molecule-core)
[](https://railway.app/new/template?template=https://git.moleculesai.app/molecule-ai/molecule-core)
[](https://render.com/deploy?repo=https://git.moleculesai.app/molecule-ai/molecule-core)
</div>
@ -52,8 +52,8 @@ Molecule AI 是目前最强的 AI Agent 组织治理方案之一,用来把 age
# ADR-002: Local-build mode signalled by `MOLECULE_IMAGE_REGISTRY` presence
* Status: Accepted (2026-05-07)
* Issue: #63 (closes Task #194)
* Decision: Hongming (CTO) + Claude Opus 4.7 (implementation)
## Context
Pre-2026-05-06, every Molecule deployment — both production tenants and OSS contributor laptops — pulled workspace-template-* container images from `ghcr.io/molecule-ai/`. Production tenants additionally set `MOLECULE_IMAGE_REGISTRY` to an AWS ECR mirror via Railway env / EC2 user-data, but the OSS default was the upstream GHCR org.
On 2026-05-06 the `Molecule-AI` GitHub org was suspended (saved memory: `feedback_github_botring_fingerprint`). GHCR now returns **403 Forbidden** for every `molecule-ai/workspace-template-*` manifest. OSS contributors who clone `molecule-core` and run `go run ./workspace-server/cmd/server` cannot provision a workspace — every first provision fails with:
```
docker image "ghcr.io/molecule-ai/workspace-template-claude-code:latest" not found after pull attempt
```
Production tenants are unaffected (their `MOLECULE_IMAGE_REGISTRY` points at ECR, which we still control), but OSS onboarding is broken. Workspace template repos are intentionally separate from `molecule-core` (each runtime is OSS-shape and forkable), and they are mirrored to Gitea (`https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-<runtime>`) — but the provisioner has no path that consumes Gitea source directly.
## Decision
When `MOLECULE_IMAGE_REGISTRY` is **unset** (or empty), the provisioner switches to a **local-build mode** that:
1. Looks up the workspace-template repo's HEAD sha on Gitea via a single API call.
2. Checks whether a SHA-pinned local image (`molecule-local/workspace-template-<runtime>:<sha12>`) already exists; if so, reuses it.
3. Otherwise shallow-clones the repo into `~/.cache/molecule/workspace-template-build/<runtime>/<sha12>/` and runs `docker build --platform=linux/amd64 -t <tag> .`.
4. Hands the SHA-pinned tag to Docker for ContainerCreate, bypassing the registry-pull path entirely.
When `MOLECULE_IMAGE_REGISTRY` is **set**, behavior is unchanged: pull the image from that registry. Existing prod tenants and self-hosters who mirror to a private registry are not affected.
## Consequences
### Positive
* **Zero-config OSS onboarding** — `git clone molecule-core && go run ./workspace-server/cmd/server` boots end-to-end without any registry credentials.
* **Production tenants protected** — same env var, same semantics in SaaS-mode. Migration is a no-op.
* **No new env var** — extending an existing var's semantics ("where to pull, OR build locally if absent") rather than introducing `MOLECULE_LOCAL_BUILD=1` keeps the surface small.
* **SHA-pinned cache** — repeat builds are O(API-call); only template-repo HEAD changes invalidate.
* **Production-parity image** — amd64 emulation on Apple Silicon honours `feedback_local_must_mimic_production`. The provisioner's existing `defaultImagePlatform()` already forces amd64 for parity; building amd64 locally lets that decision stay consistent.
### Negative
* **Conflates two concerns** — `MOLECULE_IMAGE_REGISTRY` now signals BOTH "where to pull" AND "build locally if absent." A future operator who unsets it expecting a hard error will instead get a slow first-provision. Documented in the runbook.
* **First-provision is slow on Apple Silicon** — 5–10 min via QEMU emulation on the cold path. Mitigated by SHA-cache (subsequent runs are <1slookup+0sbuild).
* **Coverage gap** — only 4 of 9 runtimes are mirrored to Gitea today (`claude-code`, `hermes`, `langgraph`, `autogen`). The other 5 fail with an actionable "not mirrored" error. Mirroring those repos is a separate task.
* **Implicit trust boundary** — operator running `go run` implicitly trusts `molecule-ai/molecule-ai-workspace-template-*` repos on Gitea. This is the same trust they would extend to the GHCR images today; not a new attack surface.
## Alternatives considered
1. **New env var `MOLECULE_LOCAL_BUILD=1`** — explicit, but requires OSS contributors to know it exists. Violates the zero-config goal.
2. **Push pre-built images to a Gitea container registry, mirror tag from upstream** — operationally cleaner but: (a) Gitea's container-registry add-on isn't deployed on the operator host, (b) defeats the OSS-contributor goal of "hack on the source, see your changes," since they'd still pull a stale image.
3. **Embed Dockerfiles in molecule-core itself, drop the standalone template repos** — would work but breaks the OSS-shape principle; templates are intentionally separable, anyone-can-fork artifacts.
4. **Build native arch on Apple Silicon (arm64) and drop the platform pin in local-mode** — fast, but creates `linux/arm64` images that diverge from the amd64-only prod runtime. Local-vs-prod debug behavior would diverge. Rejected per `feedback_local_must_mimic_production`.
## Security review
* **Gitea repo URL allowlist** — runtime name must be in the `knownRuntimes` allowlist (defence-in-depth against a future code path that lets cfg.Runtime carry untrusted input). Repo prefix is hardcoded to `https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-`; forks can override via `MOLECULE_LOCAL_TEMPLATE_REPO_PREFIX` (opt-in, default off).
* **Token handling** — clones are anonymous over HTTPS by default (templates are public). `MOLECULE_GITEA_TOKEN`, if set, is passed via URL userinfo for the clone and as `Authorization: token` for the API call. The token is **masked in every log line** via `maskTokenInURL` / `maskTokenInString` and never appears in the cache dir path.
* **No silent fallback** — if Gitea is unreachable or the runtime isn't mirrored, we return a clear error mentioning the repo URL and the missing runtime. We **never** fall back to GHCR/ECR (that would be a confusing bug for an OSS contributor who happened to have stale ECR creds in their docker config).
* **Build-arg injection** — `docker build` is invoked with NO `--build-arg` from external input. Dockerfile is consumed as-is.
* **Cache poisoning** — cache key is the Gitea HEAD sha + Dockerfile content; a force-push to the template repo's main branch regenerates the key on next run. Cache dir is per-user (`$HOME/.cache`), so cross-user attacks aren't relevant in single-user dev mode.
## Versioning + back-compat
* Existing prod tenants set `MOLECULE_IMAGE_REGISTRY=<ECR url>` → unchanged behavior.
* Existing local installs that set the var → unchanged behavior.
* Existing local installs that don't set it → switch to local-build path. Migration: none required (additive); first provision will take 5–10 min instead of failing.
**Status:** living document — update when you ship a feature that touches one backend.
**Owner:** workspace-server + controlplane teams.
**Last audit:** 2026-05-05 (Claude agent — `provisionWorkspaceAuto` / `StopWorkspaceAuto` / `HasProvisioner` SoT pattern landed in PRs #2811 + #2824).
**Last audit:** 2026-05-07 (plugin install/uninstall closed for EC2 backend via EIC SSH push to the bind-mounted `/configs/plugins/<name>/`, mirroring the Files API PR #1702 pattern).
## Why this exists
@ -54,7 +54,7 @@ For "do we have any backend?", use `HasProvisioner()`, never bare `h.provisioner
| **Files API** | | | | |
| List / Read / Write / Replace / Delete | `container_files.go`, `template_import.go` | `docker exec` + tar `CopyToContainer` | SSH via EIC tunnel (PR #1702) | ✅ parity as of 2026-04-22 (previously docker-only) |
| **Plugins** | | | | |
| Install / uninstall / list | `plugins_install.go`| `deliverToContainer()` + volume rm | **gap — no live plugin delivery** | 🔴 **docker-only** |
| Install / uninstall / list | `plugins_install.go`+ `plugins_install_eic.go` | `deliverToContainer()` → exec+`CopyToContainer` on local container | `instance_id` set → EIC SSH push of the staged tarball into the EC2's bind-mounted `/configs/plugins/<name>/` (per `workspaceFilePathPrefix`), `chown 1000:1000`, restart | ✅ parity |
@ -4,7 +4,7 @@ How a workspace-server code change reaches the prod tenant fleet — and how to
> **⚠️ State note (2026-04-22):** this doc describes the **intended design**. As of this write, the canary fleet described below is **not actually running** — no canary tenants are provisioned, `CANARY_TENANT_URLS` / `CANARY_ADMIN_TOKENS` / `CANARY_CP_SHARED_SECRET` are empty in repo secrets, and `canary-verify.yml` fails every run.
>
> Current merges gate on manual `promote-latest.yml` dispatches, not canary. See [molecule-controlplane/docs/canary-tenants.md](https://github.com/Molecule-AI/molecule-controlplane/blob/main/docs/canary-tenants.md) for the Phase 1 code work that's already shipped + the Phase 2 plan for actually standing up the fleet + a "should we even do this now?" decision framework.
> Current merges gate on manual `promote-latest.yml` dispatches, not canary. See [molecule-controlplane/docs/canary-tenants.md](https://git.moleculesai.app/molecule-ai/molecule-controlplane/src/branch/main/docs/canary-tenants.md) for the Phase 1 code work that's already shipped + the Phase 2 plan for actually standing up the fleet + a "should we even do this now?" decision framework.
>
> **Account-specific identifiers (AWS account ID, IAM role name) referenced below in the original design have been redacted from this public doc.** The actual values — if they exist — are in `Molecule-AI/internal/runbooks/canary-fleet.md`. If you're implementing Phase 2, start there.
Your infrastructure choice just got decoupled from your agent platform choice. Molecule AI now ships three production-ready workspace backends — `docker`, `flyio`, and `controlplane` — and switching between them takes a single environment variable. Your agent code, model choices, and workspace topology stay exactly the same.
This post covers what shipped in [PR #501](https://github.com/Molecule-AI/molecule-core/pull/501) (Fly Machines provisioner) and [PR #503](https://github.com/Molecule-AI/molecule-core/pull/503) (control plane provisioner), and which backend fits your situation.
This post covers what shipped in [PR #501](https://git.moleculesai.app/molecule-ai/molecule-core/pull/501) (Fly Machines provisioner) and [PR #503](https://git.moleculesai.app/molecule-ai/molecule-core/pull/503) (control plane provisioner), and which backend fits your situation.
## Before: One Deployment Model for Every Use Case
@ -107,4 +107,4 @@ No changes to agent code, tool definitions, or orchestration logic. Swap `CONTAI
---
*[PR #501](https://github.com/Molecule-AI/molecule-core/pull/501) (Fly Machines provisioner) and [PR #503](https://github.com/Molecule-AI/molecule-core/pull/503) (control plane provisioner) are both merged to `main`. Molecule AI is open source — contributions welcome.*
*[PR #501](https://git.moleculesai.app/molecule-ai/molecule-core/pull/501) (Fly Machines provisioner) and [PR #503](https://git.moleculesai.app/molecule-ai/molecule-core/pull/503) (control plane provisioner) are both merged to `main`. Molecule AI is open source — contributions welcome.*
@ -299,8 +299,8 @@ Or use the Canvas UI: Workspace → Config → MCP Servers → Add browser MCP s
**Try it free** — Molecule AI is open source and self-hostable. Get a workspace running in under 5 minutes.
→ [Get started on GitHub →](https://github.com/Molecule-AI/molecule-core)
→ [Get started on GitHub →](https://git.moleculesai.app/molecule-ai/molecule-core)
---
*Have a browser automation use case you want to see covered? Open a discussion on [GitHub Discussions](https://github.com/Molecule-AI/molecule-core/discussions) — or file an issue with the `enhancement` label.*
*Have a browser automation use case you want to see covered? File an issue with the `enhancement` label on the [molecule-core issue tracker](https://git.moleculesai.app/molecule-ai/molecule-core/issues).*
@ -87,7 +87,7 @@ The proxy is **fail-closed**: only an explicit allowlist of paths (`/cp/auth/`,
This is also the structural fix for the lateral-movement risk that session auth introduced: without the allowlist, a tenant-authed browser user could have proxied `/cp/admin/*` requests upstream and exploited the fact that those endpoints accept WorkOS session cookies. The allowlist makes that impossible by construction.
@ -99,7 +99,7 @@ The waitlist itself is a Canvas-administered list with email hashing in audit lo
This is the operational surface that makes the above security work matter: the beta is invitation-only, credentials are scoped, and every admin action is auditable.
→ Control plane PRs [#145](https://github.com/Molecule-AI/molecule-controlplane/pull/145), [#148](https://github.com/Molecule-AI/molecule-controlplane/pull/148), [#150](https://github.com/Molecule-AI/molecule-controlplane/pull/150)
→ Control plane PRs [#145](https://git.moleculesai.app/molecule-ai/molecule-controlplane/pull/145), [#148](https://git.moleculesai.app/molecule-ai/molecule-controlplane/pull/148), [#150](https://git.moleculesai.app/molecule-ai/molecule-controlplane/pull/150)
@ -12,7 +12,7 @@ Your team is in Discord. Your AI agents are in Molecule AI. Until today, those t
That's now one webhook URL.
Molecule AI workspaces can now connect to Discord. Here's what shipped in [PR #656](https://github.com/Molecule-AI/molecule-core/pull/656).
Molecule AI workspaces can now connect to Discord. Here's what shipped in [PR #656](https://git.moleculesai.app/molecule-ai/molecule-core/pull/656).
---
@ -70,7 +70,7 @@ For inbound slash commands, point your Discord app's **Interactions Endpoint URL
## Security: Webhook Tokens Don't Appear in Logs
Webhook URLs contain a token (`/webhooks/{id}/{token}`). If that token leaks into server logs, it's a rotation event. The Discord adapter is explicit about this: HTTP request errors are logged without the URL, and the adapter returns a generic error message. This was hardened in [PR #659](https://github.com/Molecule-AI/molecule-core/pull/659).
Webhook URLs contain a token (`/webhooks/{id}/{token}`). If that token leaks into server logs, it's a rotation event. The Discord adapter is explicit about this: HTTP request errors are logged without the URL, and the adapter returns a generic error message. This was hardened in [PR #659](https://git.moleculesai.app/molecule-ai/molecule-core/pull/659).
*Discord adapter shipped in [PR #656](https://github.com/Molecule-AI/molecule-core/pull/656). Security hardening in [PR #659](https://github.com/Molecule-AI/molecule-core/pull/659). Molecule AI is open source — contributions welcome.*
*Discord adapter shipped in [PR #656](https://git.moleculesai.app/molecule-ai/molecule-core/pull/656). Security hardening in [PR #659](https://git.moleculesai.app/molecule-ai/molecule-core/pull/659). Molecule AI is open source — contributions welcome.*
@ -133,4 +133,4 @@ With protocol-native A2A, you get:
Molecule AI's external agent registration is production-ready. Documentation is live at [External Agent Registration Guide](https://docs.molecule.ai/docs/guides/external-agent-registration). The npm package for the MCP server is available at [`@molecule-ai/mcp-server`](https://www.npmjs.com/package/@molecule-ai/mcp-server).
Read the full [A2A v1.0 protocol spec](https://github.com/Molecule-AI/molecule-core/blob/main/docs/api-protocol/a2a-protocol.md) on GitHub.
Read the full [A2A v1.0 protocol spec](https://git.moleculesai.app/molecule-ai/molecule-core/src/branch/main/docs/api-protocol/a2a-protocol.md) on GitHub.
@ -170,4 +170,4 @@ The `staging` branch is now on `a2a-sdk` 1.0.0. The `main` branch still carries
If you're running `a2a-sdk` 0.3.x and planning the 1.0.0 migration, this post is the reference. The four breaking changes are well-contained, the migration is a single PR, and the eight smoke scenarios above will tell you whether the upgrade is clean before you merge.
Questions? The [A2A protocol spec](https://github.com/google-a2a/a2a-specification) is the authoritative source. For Molecule AI's production A2A implementation, see [External Agent Registration](https://docs.molecule.ai/docs/guides/external-agent-registration) or open an issue in the [molecule-core](https://github.com/Molecule-AI/molecule-core) repo.
Questions? The [A2A protocol spec](https://github.com/google-a2a/a2a-specification) is the authoritative source. For Molecule AI's production A2A implementation, see [External Agent Registration](https://docs.molecule.ai/docs/guides/external-agent-registration) or open an issue in the [molecule-core](https://git.moleculesai.app/molecule-ai/molecule-core) repo.
OSS contributors who run `molecule-core` locally do **not** need to authenticate to GHCR or AWS ECR. When the `MOLECULE_IMAGE_REGISTRY` env var is **unset**, the platform automatically:
1. Looks up the HEAD sha of `https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-<runtime>` (single API call, no clone).
2. If a local image tagged `molecule-local/workspace-template-<runtime>:<sha12>` already exists, reuses it (cache hit).
3. Otherwise, shallow-clones the repo into `~/.cache/molecule/workspace-template-build/<runtime>/<sha12>/` and runs `docker build --platform=linux/amd64 -t <tag> .`.
4. Hands the SHA-pinned tag to Docker for `ContainerCreate`.
**First-provision build time:** 5–10 min on Apple Silicon (amd64 emulation). Subsequent provisions hit the cache and start in seconds. Cache is invalidated automatically when the template repo's HEAD moves.
**Currently mirrored on Gitea:** `claude-code`, `hermes`, `langgraph`, `autogen`. Other runtimes (`crewai`, `deepagents`, `codex`, `gemini-cli`, `openclaw`) fail with an actionable "not mirrored to Gitea" error pointing at the missing repo.
**Production tenants are unaffected** — every prod tenant sets `MOLECULE_IMAGE_REGISTRY` to its private ECR mirror via Railway env / EC2 user-data, so the SaaS pull path stays identical.
### Environment overrides
| Var | Default | Use case |
|-----|---------|----------|
| `MOLECULE_IMAGE_REGISTRY` | (unset) | Set to a real registry URL to switch from local-build to SaaS-pull mode. |
| `MOLECULE_LOCAL_TEMPLATE_REPO_PREFIX` | `https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-` | Point at a fork. |
| `MOLECULE_GITEA_TOKEN` | (unset) | Required only if your fork has private template repos. |
### Verifying a switch from the GHCR-retag stopgap
Pre-fix, OSS contributors worked around the suspended GHCR org by manually retagging an `:latest` image. After this change, that workaround is **redundant**: simply unset `MOLECULE_IMAGE_REGISTRY` (or leave it unset), boot the platform, and provision a workspace. Logs will show:
```
Provisioner: local-build mode → using locally-built image molecule-local/workspace-template-claude-code:<sha12> for runtime claude-code
If you still see `ghcr.io/molecule-ai/...` in the boot log, double-check `env | grep MOLECULE_IMAGE_REGISTRY` — a stale shell export from the pre-fix workaround could keep SaaS-mode active.
**Severity:** High — every new SaaS tenant blocked
**Detection path:** E2E Staging SaaS run 24848425822 failed at "tenant provisioning"; investigation of CP Railway logs surfaced the auth mismatch.
**Status:** Fix pushed on [molecule-controlplane#238](https://github.com/Molecule-AI/molecule-controlplane/pull/238).
**Related:** [issue #239](https://github.com/Molecule-AI/molecule-controlplane/issues/239) (Cloudflare DNS record quota), [testing-strategy.md](../engineering/testing-strategy.md)
**Status:** Fix pushed on [molecule-controlplane#238](https://git.moleculesai.app/molecule-ai/molecule-controlplane/pull/238).
**Related:** [issue #239](https://git.moleculesai.app/molecule-ai/molecule-controlplane/issues/239) (Cloudflare DNS record quota), [testing-strategy.md](../engineering/testing-strategy.md)
## Summary
@ -35,7 +35,7 @@ The flow was:
### The commit that introduced the bug
[molecule-controlplane#235](https://github.com/Molecule-AI/molecule-controlplane/pull/235) — "fix(provision): wait for tenant boot-event before falling back to canary". Merged 2026-04-22.
[molecule-controlplane#235](https://git.moleculesai.app/molecule-ai/molecule-controlplane/pull/235) — "fix(provision): wait for tenant boot-event before falling back to canary". Merged 2026-04-22.
Before #235, readiness was determined via a canary probe through Cloudflare's edge — which didn't need CP-side auth, so the INSERT ordering didn't matter. #235 made boot-events the primary readiness signal but didn't move the INSERT earlier. The race was latent before but became load-bearing after.
@ -138,5 +138,5 @@ If you see any of these, don't try to "clean it up in place" — **cherry-pick o
## Related
- [Issue #1822](https://github.com/Molecule-AI/molecule-core/issues/1822) — backend parity drift tracker (example of docs that have to stay current)
- [Issue #1822](https://git.moleculesai.app/molecule-ai/molecule-core/issues/1822) — backend parity drift tracker (example of docs that have to stay current)
- [Postmortem: CP boot-event 401](./postmortem-2026-04-23-boot-event-401.md) — caught before shipping because a reviewer could read the diff
@ -153,7 +153,7 @@ The `id` field is your workspace ID — remember it.
|---|---|
| "Failed to send message — agent may be unreachable" | The tenant couldn't POST to your URL. Verify `curl https://<your-tunnel>/health` returns 200 from another machine. |
| Response takes > 30s | Canvas times out around 30s. Keep initial implementations simple. For long-running work, return a placeholder and use [polling mode](#next-step-polling-mode-preview) (once available). |
| Agent duplicated in chat | Known canvas bug where WebSocket + HTTP responses both render. Fixed in [PR #1517](https://github.com/Molecule-AI/molecule-core/pull/1517). |
| Agent duplicated in chat | Known canvas bug where WebSocket + HTTP responses both render. Fixed in [PR #1517](https://git.moleculesai.app/molecule-ai/molecule-core/pull/1517). |
| Agent replies but canvas shows "Agent unreachable" | Check the tenant can reach your URL. Cloudflare quick tunnels rotate — the URL in your canvas may point at a dead tunnel after restart. |
| Getting 404 when POSTing to tenant | Add `X-Molecule-Org-Id` header. The tenant's security layer 404s unmatched origin requests by design. |
@ -215,7 +215,7 @@ Push mode (this guide) works today but requires an inbound-reachable URL — whi
Your agent makes only outbound HTTPS calls to the platform, pulling messages from an inbox queue and posting replies back. Works behind any NAT/firewall, tolerates offline laptops, no tunnel needed.
See the [design doc](https://github.com/Molecule-AI/internal/blob/main/product/external-workspaces-polling.md) (internal) and [implementation tracking issue](https://github.com/Molecule-AI/molecule-core/issues?q=polling+mode) once opened.
See the [design doc](https://git.moleculesai.app/molecule-ai/internal/src/branch/main/product/external-workspaces-polling.md) (internal) and the implementation tracking issue (search `polling+mode` on the [molecule-core issue tracker](https://git.moleculesai.app/molecule-ai/molecule-core/issues)).
---
@ -255,7 +255,7 @@ If all four pass and canvas still shows your agent as unreachable, see the [remo
## Feedback
This is a new path. Tell us what broke:
- Open an issue: https://github.com/Molecule-AI/molecule-core/issues/new?labels=external-workspace
- Open an issue: https://git.moleculesai.app/molecule-ai/molecule-core/issues/new?labels=external-workspace
- Join #external-workspaces on our Slack
- Submit a PR improving this doc if something tripped you up — the faster we can make the quickstart, the more developers we bring in
[`molecule-ai/molecule-ai-workspace-template-starter`](https://git.moleculesai.app/molecule-ai/molecule-ai-workspace-template-starter) (note: the starter repo did not survive the 2026-05-06 GitHub-org-suspension migration; recreation tracked at internal#41)
— it ships with the canonical Dockerfile + adapter.py skeleton + config.yaml
schema + the `repository_dispatch: [runtime-published]` cascade receiver
"_comment":"Pin refs to release tags for reproducible builds. 'main' is OK while all repos are internal.",
"_comment":"OSS surface registry — every repo listed here MUST be public on git.moleculesai.app. Layer-3 customer/private templates are NOT registered here; they are handled at provision-time via the per-tenant credential resolver (see internal#102 RFC). 'main' refs are pinned to tags before broad rollout.",
@ -11,7 +11,7 @@ There are three related scripts; pick the right one:
|---|---|---|
| `measure-coordinator-task-bounds.sh` | **Canonical** v1 harness for the RFC #2251 / Issue 4 reproduction. Provisions a PM coordinator + Researcher child via `claude-code-default` + `langgraph` templates, sends a synthesis-heavy A2A kickoff, observes elapsed time + activity trace. | OSS-shape platform — localhost or any `/workspaces`-shaped endpoint. Has tenant/admin-token guards for non-localhost runs. |
| `measure-coordinator-task-bounds-runner.sh` | Generalised runner for the same measurement contract but with **arbitrary template + secret + model combinations** (Hermes/MiniMax, etc.). Useful for cross-runtime variants without modifying the canonical harness. | Same as above (local or SaaS via `MODE=saas`). |
| `measure-coordinator-task-bounds.sh` (in [molecule-controlplane](https://github.com/Molecule-AI/molecule-controlplane)) | **Production-shape** variant that bootstraps a real staging tenant via `POST /cp/admin/orgs`, then runs the same measurement against `<slug>.staging.moleculesai.app`. | Staging controlplane only — refuses to run against production. |
| `measure-coordinator-task-bounds.sh` (in [molecule-controlplane](https://git.moleculesai.app/molecule-ai/molecule-controlplane)) | **Production-shape** variant that bootstraps a real staging tenant via `POST /cp/admin/orgs`, then runs the same measurement against `<slug>.staging.moleculesai.app`. | Staging controlplane only — refuses to run against production. |
See `reference_harness_pair_pattern` (auto-memory) for when to use which
Blocking a user prevents them from interacting with repositories, such as opening or commenting on pull requests or issues. Learn more about blocking a user.