Catches the class of bot-generated, structurally invalid Go that took
staging Platform (Go) red for hours on 2026-04-22 (PR #1769 commit
66ea0b64 nested a function declaration inside another function's body).
The patch tool applied it; the Go parser rejected it; every Go PR
targeting staging during the window failed CI through no fault of its
own.
Hook now runs `cd workspace-server && go build ./...` when any .go
file in workspace-server/ is staged. If the build fails, commit is
rejected with the first 20 lines of build output. Skip-with-warning
when go isn't installed (CI runners + bots without go bypass cleanly).
Cost: ~5-10s per commit that touches Go on a warm cache. Acceptable
for the class of bug it catches — the alternative (catching at PR time
via CI) is too late: the malformed commit has already been shared.
This is one of the three guards proposed in #1770. The other two
(branch-protection on `Platform (Go)` as required check; SHARED_RULES
clarification on bot-PR overrides) are admin / process changes that
need your action.
Closes the pre-commit half of #1770. Branch-protection + SHARED_RULES
work tracks separately.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a prominent section to CONTRIBUTING.md documenting that public
content (blog, marketing, OG images, SEO briefs, DevRel demos) belongs
in Molecule-AI/docs, not molecule-core. Mirrors the routing cheat-sheet
from #2060 with the table of content-type → target repo, and points
contributors at the existing `Block forbidden paths` CI gate as the
loud-fail signal.
Per the issue: 11 content PRs were silently blocked over 48h before
being closed and redirected. This in-repo notice gives contributors
(human and agent) a discoverable spot to learn the rule before opening
the wrong PR. The CI gate is already enforcing the policy; this just
makes the rule self-service.
Closes #2060
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The harness runner (scripts/measure-coordinator-task-bounds-runner.sh)
calls `/workspaces/:id/activity?since_secs=$A2A_TIMEOUT` to scope a
trace to a specific test window. The query param was silently
ignored — `ActivityHandler.List` accepted only `type`, `source`, and
`limit`, so the runner got the most-recent-100 events regardless of
how long ago they happened. That works for fresh-tenant tests where
activity_logs is ~empty pre-run, but breaks on busy tenants and on
tests that exceed 100 events.
Adds `since_secs` parsing with three behaviors (sketched after the list):
- Valid positive int → `AND created_at >= NOW() - make_interval(secs => $N)`
on the SQL. Parameterised; values bound via lib/pq, not interpolated.
`make_interval(secs => $N)` is required — the `INTERVAL '$N seconds'`
literal form rejects placeholder substitution inside the string.
- Above 30 days (2_592_000s) → silently clamped to the cap. Defends
against a paranoid client triggering a multi-month full-table scan
via `since_secs=999999999`.
- Negative, zero, or non-integer → 400 with a structured error, NOT
silently dropped. Silent drop is exactly the bug this is fixing
— a typoed param shouldn't be lost as most-recent-100.
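For the shape without opening the diff, a minimal sketch of the parsing
branch (the package, helper, and error-writer names are illustrative, not
the real identifiers in ActivityHandler.List):

package activity

import (
    "fmt"
    "strconv"
)

// maxSinceSecs is the 30-day cap from the behaviors above.
const maxSinceSecs = 2_592_000

// parseSinceSecs sketches the behaviors: valid positive int → (secs, true, nil),
// over the cap → clamped, invalid → error (the handler maps it to a structured
// 400), omitted → (0, false, nil) so no clause and no bind arg are added.
func parseSinceSecs(raw string) (int64, bool, error) {
    if raw == "" {
        return 0, false, nil
    }
    n, err := strconv.ParseInt(raw, 10, 64)
    if err != nil || n <= 0 {
        return 0, false, fmt.Errorf("since_secs must be a positive integer, got %q", raw)
    }
    if n > maxSinceSecs {
        n = maxSinceSecs
    }
    return n, true, nil
}

// Handler side (also a sketch): bind the value, never interpolate it.
//
//   if secs, ok, err := parseSinceSecs(q.Get("since_secs")); err != nil {
//       writeStructuredError(w, http.StatusBadRequest, err) // hypothetical helper
//   } else if ok {
//       query += fmt.Sprintf(" AND created_at >= NOW() - make_interval(secs => $%d)", len(args)+1)
//       args = append(args, secs)
//   }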
Tests cover all four paths: accepted (with arg-binding assertion via
sqlmock.WithArgs), clamped at 30 days, invalid rejected (5 sub-cases),
and omitted (verifies no extra clause / arg leak via strict WithArgs
count).
RFC #2251 §V1.0 step 6 (platform-side-transition audit) also depends
on this for time-window filtering of activity_logs.
Closes #2268
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Per-workspace restart-state entries (introduced as `restartMu`
pre-#2266, renamed to `restartStates` in #2266) are
created via `LoadOrStore` in `workspace_restart.go` but never
deleted. On a long-running platform process serving many short-lived
workspaces (E2E tests, transient sandbox tenants), the sync.Map grows
monotonically — ~16 bytes per workspace ever created.
Fix: call `restartStates.Delete(wsID)` after stopAndRemove +
ClearWorkspaceKeys for each cascaded descendant and the parent. Mirrors
the existing per-ID cleanup loop. `sync.Map.Delete` is safe on absent
keys, so deleting entries for workspaces that were never restarted (no
LoadOrStore call) is a no-op.
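Shape of the cleanup, as a sketch (the handler type and teardown bodies
are stand-ins; `restartStates.Delete` is the actual new call):

package workspace

import "sync"

// restartStates is the holder named above; the handler type and the teardown
// method bodies are stand-ins for the existing code, not its real shape.
var restartStates sync.Map // workspaceID → per-workspace restart state

type WorkspaceHandler struct{}

func (h *WorkspaceHandler) stopAndRemove(wsID string)      {} // existing teardown
func (h *WorkspaceHandler) ClearWorkspaceKeys(wsID string) {} // existing teardown

func (h *WorkspaceHandler) cleanupWorkspace(wsID string) {
    h.stopAndRemove(wsID)
    h.ClearWorkspaceKeys(wsID)
    // New: drop the per-workspace restart state so the map stops growing.
    // sync.Map.Delete is a no-op for IDs that never went through LoadOrStore.
    restartStates.Delete(wsID)
}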
This is a pre-existing leak — #2266 did not introduce it, only renamed
the holder. Filing as a separate commit to keep the change minimal and
reviewable.
Closes #2269
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The pre-#2290 `force: true` flag on POST /org/import skipped the
required-env preflight, letting orgs import without their declared
required keys (e.g. ANTHROPIC_API_KEY). The ux-ab-lab incident: that
import path was used, the org shipped without ANTHROPIC_API_KEY in
global_secrets, and every workspace 401'd on the first LLM call.
Per #2290 picks (C/remove/both):
- Q1=C: template-derived required_env (no schema change — already
the existing aggregation via collectOrgEnv).
- Q2=remove: drop the bypass entirely. The seed/dev-org flow that
legitimately needs to skip becomes a separate dry-run-import path
with its own audit trail, not a permission bypass.
- Q3=block-at-import-only: provision-time drift logging is a
follow-up; for this PR, blocking at import is the gate.
Surface change:
- Force field removed from POST /org/import request body.
- 412 "suggestion" text drops the "or pass force=true" guidance.
- Legacy callers sending {"force": true} are silently tolerated
(Go's json.Unmarshal drops unknown fields), so no client-side
breakage; the bypass effect is just gone.
Audited callers in this repo:
- canvas/src/components/TemplatePalette.tsx — never sends force.
- scripts/post-rebuild-setup.sh — never sends force.
- Only external tooling sent force=true. Those callers must now set
the global secret via POST /settings/secrets before importing.
Adds TestOrgImport_ForceFieldRemoved as a structural pin: if a future
change re-adds Force to the body struct, the test fails and forces an
explicit reckoning with the #2290 rationale.
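One way the pin can look (the real request-body struct carries more
fields; `importRequest` and its field here are placeholders):

package orgimport

import (
    "reflect"
    "testing"
)

// importRequest stands in for the actual POST /org/import body struct;
// only the absence of a Force field matters for the pin.
type importRequest struct {
    TemplateID string `json:"template_id"`
}

func TestOrgImport_ForceFieldRemoved(t *testing.T) {
    if _, found := reflect.TypeOf(importRequest{}).FieldByName("Force"); found {
        t.Fatal("Force re-added to the /org/import body; revisit the #2290 rationale first")
    }
}

The legacy-caller tolerance needs no code at all: encoding/json ignores
unknown keys unless DisallowUnknownFields is set, so {"force": true}
decodes into the Force-less struct without error.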
Closes #2290
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2265 renamed the harness trace endpoint and event name; sync the
cross-repo scripts/README.md to match.
Closes #2270
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #2289.
Some workspace template images ship `/usr/local/bin/{git,gh}` wrappers
that bake `GH_TOKEN` into argv handling (preferred — auto-PR creation
authenticates without explicit token plumbing); other templates have
plain `/usr/bin/git` installed via apt with no wrapper. The hardcoded
`_GIT = "/usr/local/bin/git"` crashed every auto-push attempt on the
latter image class:
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/bin/git'
File "/app/molecule_runtime/executor_helpers.py", line 524, in _auto_push_and_pr_sync
subprocess.run(['/usr/local/bin/git', 'rev-parse', '--is-inside-work-tree'], ...)
Resolving git via `shutil.which("git")` fixes this: it walks PATH in
order, finding the `/usr/local/bin/` wrapper first when it exists and
falling back to `/usr/bin/git` otherwise.
GH_TOKEN injection still wins on wrapper-equipped images; auto-push
no longer crashes on bare-apt images.
Verified locally: `shutil.which("git")` resolves to `/usr/bin/git` on
the bug-reporter's image; `shutil.which("gh")` resolves to the
homebrew path on dev. Both paths exist + are executable on respective
hosts.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Surfaced via cross-template review of the a2a-sdk v0→v1 migration:
every adapter executor (claude-code, gemini-cli, crewai, openclaw,
autogen) builds A2A response Messages independently using
`new_text_message(text)` from the SDK, which omits `task_id` and
`context_id`. The runtime's own canonical pattern in
`workspace/a2a_executor.py:466-475` correctly threads both:
Message(
    message_id=uuid.uuid4().hex,
    role=Role.ROLE_AGENT,
    parts=_parts,
    task_id=task_id,        # ← canonical
    context_id=context_id,  # ← canonical
)
Because the adapters skip these correlation fields, the platform's a2a
proxy can't reliably tie a response back to the originating task.
This is a divergence from canonical, not necessarily a strict bug
(task_id may be optional with a default) — but it's enough of a
correlation/observability gap that the canonical pattern bothers to
thread it.
Add `new_response_message(context, text, files=None)` to
executor_helpers.py — single home for response Message construction.
Templates can migrate from `new_text_message(text)` to this helper
in stacked PRs once the runtime publishes to PyPI.
The helper:
- Reads `context.task_id`/`context.context_id` from the inbound
RequestContext, falling back to fresh UUIDs (RequestContextBuilder
always sets them in production; fallback is for unit tests).
- Sets `role=Role.ROLE_AGENT` (the v1 enum value).
- Builds text Parts via `Part(text=...)` and file Parts via
`Part(url="workspace:<path>", filename=..., media_type=...)`.
- Returns a v1 protobuf Message ready for
`event_queue.enqueue_event(...)`.
Why "files=None" with the workspace: URI scheme as the file Part
shape: matches the canonical pattern in a2a_executor.py exactly so
the platform's chat-attachment download path (executor_helpers.py
`resolve_attachment_uri`) interprets responses uniformly across all
adapters.
Tests (5, all pass with --no-cov against the live runtime image):
- test_new_response_message_text_only
- test_new_response_message_with_files
- test_new_response_message_files_only_no_text
- test_new_response_message_falls_back_when_context_ids_unset
- test_new_response_message_handles_missing_attrs
The conftest's a2a stubs needed an extension for Message + Role +
Part with kwargs preservation. Strictly additive — no existing tests
affected. (The 19 pre-existing failures in test_executor_helpers.py
are unrelated debt from the commit_memory/recall_memory rewrite,
visible on staging baseline before this change.)
Per-template migration is the follow-up: claude-code, gemini-cli,
crewai, openclaw, autogen all call `new_text_message(text)` today;
each gets a per-repo PR replacing it with
`new_response_message(context, text)`. This PR ships the helper
first so the templates have something to import.
Refs: PR #2266/#2267 (restart-race), claude-code #15 (FilePart fix),
gemini-cli #10/crewai #8/openclaw #9/autogen #8 (rename PRs).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Self-review caught a regression I introduced in #2266: if cycle() panics
(e.g. a future provisionWorkspace nil-deref or any runtime error from
the DB / Docker / encryption stacks it touches), the loop never reaches
`state.running = false`. The flag stays true forever, the early-return
guard at the top of coalesceRestart fires for every subsequent call,
and that workspace is permanently locked out of restarts until the
platform process restarts.
The pre-fix code had similar exposure (panic killed the goroutine
before defer wsMu.Unlock() ran in some Go versions), but my pending-
flag version made it worse: the guard is sticky, not ephemeral.
Fix: defer the state-clear so it always runs on exit, including panic.
Recover (and DON'T re-raise) so the panic doesn't propagate to the
goroutine boundary and crash the whole platform process — RestartByID
is always called via `go h.RestartByID(...)` from HTTP handlers, and
an unrecovered goroutine panic in Go terminates the program. Crashing
the platform for every tenant because one workspace's cycle panicked
is the wrong availability tradeoff. The panic message + full stack
trace via runtime/debug.Stack() are still logged for debuggability.
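Reduced to the part this commit changes, the gate roughly looks like the
sketch below (package and field names are approximate; the pending-flag
drain loop is elided here and shown in the coalescing commit further down):

package workspace

import (
    "log"
    "runtime/debug"
    "sync"
)

// Simplified stand-in for the per-workspace entry; the real struct also
// carries the pending flag used by the coalescing loop.
type restartState struct {
    mu      sync.Mutex
    running bool
}

var restartStates sync.Map // workspaceID → *restartState

func coalesceRestart(workspaceID string, cycle func()) {
    v, _ := restartStates.LoadOrStore(workspaceID, &restartState{})
    state := v.(*restartState)

    state.mu.Lock()
    if state.running {
        state.mu.Unlock()
        return
    }
    state.running = true
    state.mu.Unlock()

    defer func() {
        if r := recover(); r != nil {
            // Suppress rather than re-panic: this runs on a `go h.RestartByID(...)`
            // goroutine, and an unrecovered panic there kills the whole process.
            log.Printf("restart cycle panicked for %s: %v\n%s",
                workspaceID, r, debug.Stack())
        }
        state.mu.Lock()
        state.running = false // always cleared now, even when cycle() panics
        state.mu.Unlock()
    }()

    cycle()
}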
Regression test in TestCoalesceRestart_PanicInCycleClearsState:
1. First call's cycle panics. coalesceRestart's defer must swallow
the panic — assert no panic propagates out (would crash the
platform process from a goroutine in production).
2. Second call must run a fresh cycle (proves running was cleared).
All 7 tests pass with -race -count=10.
Surfaced via /code-review-and-quality self-review of #2266; the
re-raise-after-recover anti-pattern (originally argued as "don't
mask bugs") came up in the comprehensive review and was corrected
to log-with-stack-and-suppress for availability.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The naive mutex-with-TryLock pattern in RestartByID was silently dropping
the second of two close-together restart requests. SetSecret and SetModel
both fire `go restartFunc(...)` from their HTTP handlers, and both DB
writes commit before either restart goroutine reaches loadWorkspaceSecrets.
If the second goroutine arrives while the first holds the per-workspace
mutex, TryLock returns false and the second is logged-and-dropped:
Auto-restart: skipping <id> — restart already in progress
The first goroutine's loadWorkspaceSecrets ran before the second write
committed, so the new container boots without that env var. Surfaced
during the RFC #2251 V1.0 measurement as hermes returning "No LLM
provider configured" when MODEL_PROVIDER landed after the API-key write
and lost its restart to the mutex (HERMES_DEFAULT_MODEL absent →
start.sh fell back to nousresearch/hermes-4-70b → derived
provider=openrouter → no OPENROUTER_API_KEY → request-time error).
The same race hits any back-to-back secret/model save flow including
the canvas's "set MiniMax key + pick model" UX.
Fix: pending-flag / coalescing pattern. Any restart request that arrives
while one is in flight sets `pending=true` and returns. The in-flight
runner, on completion, checks the flag and runs another cycle. This
collapses N concurrent requests into at most 2 sequential cycles (the
current one + one more that picks up everyone who arrived during it),
while guaranteeing the final container always sees the latest secrets.
Concrete contract:
- 1 request, no concurrency: 1 cycle
- N concurrent requests during 1 in-flight cycle: 2 cycles total
- N sequential requests (no overlap): N cycles
- Per-workspace state — different workspaces never serialize
Coalescing is extracted into `coalesceRestart(workspaceID, cycle func())`
so the gate logic is testable without the full WorkspaceHandler / DB /
provisioner stack. RestartByID now wraps that with the production cycle
function. runRestartCycle calls provisionWorkspace SYNCHRONOUSLY (drops
the historical `go`) so the loop's pending-flag check happens AFTER the
new container is up — without that, the next cycle's Stop call would
race the previous cycle's still-spawning provision goroutine.
sendRestartContext stays async; it's a one-way notification.
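For reference, the coalescing gate roughly takes this shape (a sketch,
not the exact production code; field names and locking granularity may
differ, and the panic guard from the self-review follow-up above is omitted):

package workspace

import "sync"

// Per-workspace coalescing state; created lazily, one entry per workspace.
type restartState struct {
    mu      sync.Mutex
    running bool
    pending bool
}

var restartStates sync.Map // workspaceID → *restartState

func coalesceRestart(workspaceID string, cycle func()) {
    v, _ := restartStates.LoadOrStore(workspaceID, &restartState{})
    state := v.(*restartState)

    state.mu.Lock()
    if state.running {
        // A cycle is in flight; mark that it must run once more and return.
        state.pending = true
        state.mu.Unlock()
        return
    }
    state.running = true
    state.mu.Unlock()

    for {
        cycle() // synchronous: provision completes before the pending check

        state.mu.Lock()
        if !state.pending {
            state.running = false
            state.mu.Unlock()
            return
        }
        state.pending = false // drain: one more cycle picks up everyone who arrived
        state.mu.Unlock()
    }
}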
Tests in workspace_restart_coalesce_test.go cover the contract points
above and run race-detector clean over 10 iterations:
- Single call → 1 cycle
- 5 concurrent during in-flight → exactly 2 cycles total
- 3 sequential → 3 cycles
- Pending-during-cycle picked up (targeted bug repro)
- State cleared after drain (running flag reset)
- Per-workspace isolation (no cross-workspace serialization)
Refs: molecule-core#2256 (V1.0 gate measurement); root cause for the
"No LLM provider configured" symptom seen during hermes/MiniMax repro.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The runner was speculatively calling `/workspaces/:id/heartbeat-history` —
that endpoint doesn't exist on workspace-server. On local dev it 404'd;
on tenant builds the platform's :8080 canvas-proxy fallback intercepted
it and returned 28KB of Next.js HTML which then landed in the JSON event
log. Neither outcome was useful trace data.
`GET /workspaces/:id/activity` is the existing endpoint that reads
activity_logs. That table already records the events the RFC §V1.0
step 6 'platform-side transition' check needs (a2a_send / a2a_receive /
task_update / agent_log / error, plus duration_ms + status). Rename
the runner's fetch + emitted event accordingly.
Verified: GET /workspaces/<uuid>/activity?since_secs=60 returns 200
with `[]` against the local platform; no SaaS skip needed since the
endpoint exists in both environments.
Refs: molecule-core#2256 (V1.0 gate #1 measurement comment).
Three review-driven fixes to the runner before #2261 merges:
1. `WAIT_ONLINE_SECS / 3` truncated; an operator passing 200 actually
waited 198s. Round up so 200 → 67 polls × 3s = 201s ≥ requested.
2. The heartbeat-history endpoint isn't on tenant workspace-servers —
the platform's :8080 fallback proxies unmatched paths to the
canvas Next.js, so the SaaS run captured 28KB of HTML in the
`heartbeat_trace` event log. Skip the fetch in MODE=saas; emit an
explicit `<skipped: ...>` placeholder. Local mode behaviour
unchanged.
3. ORG_ID and ORG_SLUG had no client-side format check, so a typo'd
value got swallowed by TenantGuard's intentionally-opaque 404
(which doesn't tell the operator whether slug, UUID, or auth was
wrong). Validate UUID and slug shape up front; matching errors
are actionable.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two docs covering load-bearing patterns from today's work that
weren't previously discoverable:
1. workspace/platform_tools/README.md — explains the ToolSpec
single-source-of-truth pattern (#2240), the CLI-block alignment
gap that hand-maintained generation can't close (#2258), the
snapshot golden files + LF-pinning (#2260), and the add/rename/
remove playbook. The next reader who lands in
workspace/platform_tools/ now has the design rationale + the
safe-edit procedure colocated with the code.
2. scripts/README.md — disambiguates the three measure-coordinator-
task-bounds.sh files that now exist across two repos:
- scripts/measure-coordinator-task-bounds.sh (canonical OSS, this repo)
- scripts/measure-coordinator-task-bounds-runner.sh (Hermes/MiniMax variant, this repo)
- scripts/measure-coordinator-task-bounds.sh (production-shape, in molecule-controlplane)
Cross-references reference_harness_pair_pattern (auto-memory) for
the cross-repo design rationale. Documents the common safety
pattern (cleanup trap, DRY_RUN, non-target guard,
cleanup_*_failed events) and the heartbeat-trace caveat.
Refs: #2240, #2254, #2257, #2258, #2259, #2260; molecule-controlplane#321.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The original measure-coordinator-task-bounds.sh was hardcoded for
local-dev (workspace-server on :8080) with claude-code/langgraph
templates and OPENROUTER_API_KEY. Running it against staging requires
both auth-chain plumbing (per-tenant ADMIN_TOKEN + X-Molecule-Org-Id
TenantGuard header + tenant subdomain routing) and template/secret
flexibility (e.g. Hermes/MiniMax for Token Plan keys).
This adds:
* `measure-coordinator-task-bounds-runner.sh` — separate runner that
wraps the same workspace-server API calls but takes everything as
env-var inputs. Two MODE values:
- `local` → direct workspace-server (no auth/tenant scoping)
- `saas` → tenant subdomain + per-tenant ADMIN_TOKEN bearer +
X-Molecule-Org-Id TenantGuard header. Auto-fetches
tenant token via CP /cp/admin/orgs/<slug>/admin-token
given ORG_SLUG + CP_ADMIN_API_TOKEN, OR accepts a
pre-resolved TENANT_ADMIN_TOKEN.
* Configurable PM_TEMPLATE / CHILD_TEMPLATE / MODEL / SECRET_NAME /
SECRET_VALUE — defaults match the original (claude-code-default +
langgraph + OpenRouter). Hermes/MiniMax example documented in the
header.
* Per-poll status_change events during wait_online, so a workspace
that never reaches online surfaces its last status (provisioning,
failed, etc.) instead of a bare timeout.
* WAIT_ONLINE_SECS knob (default 180s; SaaS cold-start needs ~420s
for first hermes-image pull on a freshly-provisioned EC2 tenant).
* `${args[@]+...}` guard on the api() helper — avoids `set -u`
exploding on an empty header array (the local-dev hot-path).
The original script also gained a SECRET_VALUE block earlier in the
session — that change (separately staged) makes the secret-name
configurable without forcing every operator through the new runner.
V1.0 gate #1 (RFC #2251, Issue 4 repro) measurement results posted
as a separate comment on molecule-core#2256.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Self-review follow-up on #2258 (registry snapshot tests, just merged).
The byte-exact snapshot comparisons in test_platform_tools.py would
fail mysteriously on a Windows contributor's machine with
core.autocrlf=true: checkout would convert LF → CRLF, the test would
fail locally with no useful diagnostic, and the regen instructions
in the test-file header would produce LF files that disagree with
the working copy.
Pin workspace/tests/snapshots/*.txt to text eol=lf so this can't
happen. All three current snapshots are already LF; the attribute
ensures it stays that way.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Self-review follow-ups on #2257:
- Drop `local exit_code=$?` from cleanup(). `trap`-handler return values
are ignored, so capturing $? only misled a future reader into thinking
exit-code preservation was happening.
- Replace silenced `>/dev/null 2>&1` DELETE with `-w '%{http_code}'`
capture. ADMIN_TOKEN expiring mid-run was the realistic failure mode
here — previously we swallowed it under the silenced redirect, leaving
workspaces leaked with no signal. Now a 401/403/5xx surfaces as a
`cleanup_failed` JSON event with a remediation hint pointing at
cleanup-rogue-workspaces.sh; 404 is treated as success (the
post-condition — workspace absent — holds).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two follow-ups from the #2240 code review:
1. Snapshot tests for the rendered tool-instruction blocks. The
structural tests added in #2240 guarantee tool NAMES are present;
these new tests pin the SHAPE — bullet ordering, heading style,
footer placement — so a future contributor who reorders fields in
`_render_section` or rewrites a `when_to_use` paragraph sees the
diff in CI rather than shipping a silently-different system prompt.
Golden files live under workspace/tests/snapshots/.
2. CLI-block alignment test + corrected source-of-truth comment.
`_A2A_INSTRUCTIONS_CLI` is a separate hand-maintained surface for
ollama and other non-MCP runtimes — the registry can't auto-generate
it because the CLI subprocess interface uses different command
shapes (`peers` vs `list_peers`, etc.). A new
`_CLI_A2A_COMMAND_KEYWORDS` mapping declares the registry-tool →
CLI-keyword correspondence (or explicit `None` for tools not
exposed via subprocess). Two tests enforce coverage:
- every a2a tool in the registry is keyed in the mapping
- every non-None subcommand keyword literally appears in
`_A2A_INSTRUCTIONS_CLI`
Caught one real gap: `send_message_to_user` is in the registry but
has no CLI subcommand. Mapped to `None` with an explanatory comment.
The "no other source of truth" claim in registry.py's docstring
was wrong post-#2240 (the CLI block survived) — corrected to
describe the two surfaces explicitly and point at the alignment
tests as the gate.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three follow-ups from #2254 code review before the harness is safe to
run against staging:
1. Cleanup trap. Workspaces are now auto-deleted on EXIT/INT/TERM. A
Ctrl-C mid-run no longer leaks the PM + Researcher pair against
shared infra. KEEP_WORKSPACES=1 opts out for post-run inspection.
2. Tenant scoping + admin auth. Non-localhost PLATFORM values now
require both ADMIN_TOKEN and TENANT_ID; the script refuses to run
without them. The previous version sent unauthenticated POSTs that,
on staging, would either 401 every request or — worse — provision
into the wrong tenant. Memory
`feedback_never_run_cluster_cleanup_tests_on_live_platform` calls out
the same hazard class.
3. DRY_RUN=1 mode. Prints platform target, tenant id, auth fingerprint,
and the planned actions, then exits before any state mutation. The
intended pre-flight before running against staging.
Also tightened the OR_KEY check (the chained default silently accepted an
empty OPENROUTER_API_KEY) and added a heartbeat-trace caveat to the
interpretation guide explaining what `<endpoint_unavailable>` means
for the bound question.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds structured `rfc2251_phase=...` log lines at the deterministic phase
boundaries inside route_task_to_team and check_task_status, so an
operator running scripts/measure-coordinator-task-bounds.sh against
staging can correlate the harness's external timing trace with what
phase the coordinator was in at any given second.
The harness already exists in staging and measures end-to-end response
time + heartbeat trace. What it CAN'T do without this PR is answer
"the coordinator response took 7 minutes — was it stuck delegating, or
stuck polling children, or stuck synthesizing after all children
returned?" The phase logs answer that question.
Phases instrumented (deterministic Python boundaries, no agent prompt
involvement):
route_start → enter route_task_to_team
children_fetched → after get_children() returns
routing_decided → after build_team_routing_payload
delegate_invoked → just before delegate_task_async.ainvoke
delegate_returned → after delegate_task_async returns
check_status → every check_task_status poll (per-poll)
route_returning_decision_only → fall-through path
Each line includes elapsed_ms from route_start so per-phase durations
are extractable via:
grep rfc2251_phase= <container.log> \
| awk '{...}' to compute deltas between consecutive phases
The synthesis phase (after all children return, before agent emits
final A2A response) is NOT instrumented here because it's
agent-driven (no deterministic Python boundary). The harness operator
infers synthesis_secs = total_response_secs − max(check_status_ts).
This is reproduction-harness scaffolding; it adds zero behavior. Strip
the rfc2251_phase log lines when V1.0 ships and the phase data lands
in the structured heartbeat payload instead.
Refs:
- RFC: molecule-core#2251
- Harness: scripts/measure-coordinator-task-bounds.sh (shipped earlier)
- V1.0 gate: this is deliverable #2 of the four pre-V1.0 gates
Adds a reproduction harness for Issue 4 of the 2026-04-28 CP review,
referenced in RFC molecule-core#2251. The RFC review (issue #2251
comment) flagged that Issue 4 was hypothesized but not reproduced
before V1.0 implementation begins — this script closes that gap.
What it does:
- Provisions a coordinator (PM, claude-code-default) + 1 child
(Researcher, langgraph) via the platform API.
- Sends an A2A kickoff with a synthesis-heavy task that requires
SYNTHESIS_DEPTH (default 3) sequential delegations followed by a
600-word post-delegation synthesis.
- Times the coordinator's full A2A round-trip with millisecond
precision and emits one JSON event per phase (machine-readable).
- Pulls the coordinator's heartbeat trace post-run so the team can
see whether any platform-side state transition fired during the
long synthesis (the V1.0 RFC's MAX_TASK_EXECUTION_SECS would
surface as such a transition; absence of one in this trace
confirms the RFC's premise).
Why a measurement harness, not a pass/fail test:
Issue 4's claim is "absence of platform-side bound", which is hard
to assert in a single CI run. Outputting structured measurement
data lets the team interpret across multiple runs / staging vs
prod / different SYNTHESIS_DEPTH values rather than relying on one
reproduction snapshot.
The script's header has the full interpretation guide:
- ELAPSED < 60s → not informative (LLM was just fast)
- 60–300s → within DELEGATION_TIMEOUT, ambiguous
- >= 300s without trace transitions → BUG CONFIRMED
- curl_failed → coordinator hung past A2A_TIMEOUT or genuinely
slow (disambiguate by querying status separately)
Doesn't run in CI by default — invoked manually against staging or a
local platform with PLATFORM=... and OPENROUTER_API_KEY=... env vars.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Add a comment block at the top of auto-promote-staging.yml naming the
load-bearing one-time repo setting that the workflow depends on:
Settings → Actions → General → Workflow permissions
→ ✅ Allow GitHub Actions to create and approve pull requests
Without this toggle, every workflow_run fails with
"GitHub Actions is not permitted to create or approve pull requests
(createPullRequest)". Observed 2026-04-29 01:43 UTC blocking the
fcd87b9 promotion (PRs #2248 + #2249); manually bridged via PR #2252.
The setting is invisible to anyone reading the workflow file, but the
workflow cannot do its job without it. Documenting here so the next
time it gets toggled off (org admin change, repo migration, audit
cleanup) the failure mode points at the cause rather than another
round of "why is auto-promote broken."
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mirror the sweep-cf-orphans hardening (#2248) on publish-runtime's
TEMPLATE_DISPATCH_TOKEN gate. The previous behaviour was to print
"⚠️ skipping cascade — templates will pick up the new version on their
own next rebuild" and exit 0. That message is wrong: the 8
workspace-template repos only rebuild on this repository_dispatch
fanout. Without the dispatch they stay pinned to whatever runtime
version they last saw, and the gap is invisible until someone
notices a template several versions behind weeks later.
Behaviour after this PR:
- push (auto-trigger on workspace/runtime/** changes) → exit 1
- workflow_dispatch (manual operator) → exit 0
with a warning (operator already accepted state; let them rerun
after restoring the secret)
The token-missing path now also names the consequence concretely
("templates will NOT pick up the new version until this token is
restored") so future operators see the actionable line, not the
misleading "they'll catch up on their own" message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Replace the soft-skip-with-warning behaviour for scheduled runs of the
hourly Cloudflare orphan sweeper with an explicit failure when the six
required secrets aren't set. Manual workflow_dispatch keeps the
soft-skip path so an operator can short-circuit a deliberate rerun
without redoing the secrets dance — they accepted the state when they
clicked the button.
Why: from some-date to 2026-04-28, all six secrets were unset on the
repo. Every hourly tick printed a yellow ⚠️ warning and exited 0,
which GitHub registers as "completed/success" — the sweeper was
indistinguishable from a healthy janitor with nothing to do. Cloudflare
orphans accumulated unobserved to 152/200 (~76% of the zone quota),
and only surfaced via a manual audit. The mechanism to catch this kind
of regression is to make the workflow loud: red runs prompt
investigation, green runs are presumed healthy.
Schedule/workflow_run/push paths now print three ::error:: lines
naming the missing secrets, the fix, and a one-line reference to this
incident, then exit 1.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mirrors the fix #2234 applied to auto-sync-main-to-staging.yml in the
reverse direction. Both workflows now use the same merge-queue path
that humans use; no special-case bypass.
Why
Every tick of auto-promote-staging.yml since main's branch protection
went stricter has been failing with:
remote: error: GH006: Protected branch update failed for refs/heads/main.
remote: - Required status checks "Analyze (go)", "Analyze (javascript-typescript)",
"Analyze (python)", "Canvas (Next.js)", "Detect changes",
"E2E API Smoke Test", "Platform (Go)", "Python Lint & Test",
and "Shellcheck (E2E scripts)" were not set by the expected
GitHub apps.
remote: - Changes must be made through a pull request.
The previous version did `git merge --ff-only origin/staging &&
git push origin main` directly. That works against a permissive
branch — it doesn't work against a ruleset that requires checks
satisfied by the expected GitHub apps. Only PR merges through the
queue produce check runs from the right apps.
Result was that today's 12+ merges to staging never propagated to
main; the auto-promote ran every tick and failed every tick, while
operators had to keep opening manual `staging → main` bridges.
Fix
- Replace the direct git push step with a step that opens (or reuses)
a PR base=main head=staging and enables auto-merge. The merge queue
lands it once gates are green on the merge_group ref.
- The PR's head IS the staging branch (no per-SHA promote branch
needed) — the whole purpose is "advance main to staging's tip".
- Add `pull-requests: write` permission so the workflow can call
gh pr create + gh pr merge --auto.
- Drop the `git merge-base --is-ancestor` divergence check — the
merge queue itself enforces branch protection now, and rejects
the PR if main has diverged from staging history.
Loop safety preserved: when this PR's merge lands on main, it
triggers auto-sync-main-to-staging.yml which opens a sync PR back
to staging. That sync PR's eventual merge is by GITHUB_TOKEN (the
merge queue) which doesn't trigger downstream workflow_run events
— so auto-promote-staging.yml does NOT re-fire from its own merge
landing.
Refs: #2234 (the parallel fix for auto-sync-main-to-staging.yml),
task #142, multiple failing runs visible in
https://github.com/Molecule-AI/molecule-core/actions/workflows/auto-promote-staging.yml