Per documentation-specialist's grep agent (2026-05-07T07:30, see
internal#46): runtime-breaking ghcr.io references in shell scripts,
docker-compose, and lint_secret_pattern_drift.py all need migration.
All of these slipped past security-auditor's workflow-only audit.
Files (6):
- .github/scripts/lint_secret_pattern_drift.py:40 — workspace-runtime
pre-commit-checks.sh consumer URL: raw.githubusercontent.com →
Gitea raw URL (https://git.moleculesai.app/molecule-ai/.../raw/
branch/main/...). The lint job runs in CI and would 404 today.
- scripts/refresh-workspace-images.sh:54 — workspace-template image
pull URL: ghcr.io → ECR (153263036946.dkr.ecr.us-east-2.amazonaws.com).
- scripts/rollback-latest.sh — full rewrite of header + auth flow:
* ghcr.io/molecule-ai/{platform,platform-tenant} → ECR
* GITHUB_TOKEN with write:packages → AWS ECR auth
(aws ecr get-login-password). Per saved memory
reference_post_suspension_pipeline, prod cutover is to ECR.
* Updated header docs to match new auth flow + prereqs.
- scripts/demo-freeze.sh:13,17 — comment-only ghcr → ECR
(the script doesn't currently exec these URLs, but the comments
describe the cascade and need to match reality).
- docker-compose.yml:215-216 — canvas image: ghcr.io → ECR + updated
the auth comment to describe `aws ecr get-login-password` flow.
- tools/check-template-parity.sh:21 — inline curl install instructions:
raw.githubusercontent.com → Gitea raw URL.
Hostile self-review:
1. rollback-latest.sh's GITHUB_TOKEN→aws-cli auth swap is a behavior
change. Operators using this script now need aws CLI
authenticated for region us-east-2 with ECR pull/push perms.
Documented in the updated header. Operators without the aws CLI
will get 'aws: command not found', which is a clear failure mode
(not silent).
2. The Gitea raw URL shape (/raw/branch/main/) differs from GitHub's
raw.githubusercontent.com structure. Verified pattern by
inspecting other Gitea raw URLs in the codebase. If Gitea's URL
changes (1.23+), update via the same one-line edit.
3. Doesn't touch packer/scripts/install-base.sh, which has a similar
   ghcr.io ref per the grep agent's findings — that's bigger in scope
   (packer build pipeline) and sits closer to molecule-controlplane
   territory; filed as a parked follow-up under #46 if one isn't
   already open.
Refs: molecule-ai/internal#46, molecule-ai/internal#37,
molecule-ai/internal#38, saved memory reference_post_suspension_pipeline
Closes the silent-block failure mode that left 25 commits — including
the Memory v2 redesign and the reno-stars data-loss fix — wedged on
staging for 12+ hours behind a single missing review. The auto-promote
workflow opened the PR + armed auto-merge, but main's branch protection
required a human review and nobody noticed until a user reported
"still seeing old memory tab".
## Detection logic — `scripts/check-stale-promote-pr.sh`
Reads open PRs `base=main head=staging` and alarms on:
- `mergeStateStatus == BLOCKED`
- `reviewDecision == REVIEW_REQUIRED`
- createdAt older than `STALE_HOURS` (default 4h)
Other BLOCKED reasons (DIRTY, BEHIND, failed checks) are NOT alarmed —
those are the author's signal-to-fix. This script targets the specific
"no human reviewed yet" wedge.
Output:
- `::warning` per stale PR (visible in workflow summary + Actions UI)
- PR comment (idempotent via marker-string detection; one alarm
per PR, never re-spammed)
- Exit code = count of stale PRs (capped at 125)
Logic in a script (not inline workflow YAML) so it's:
- **Unit-testable** — tests/test-check-stale-promote-pr.sh exercises
every branch with stubbed fixture JSON + frozen clock. 23 tests
covering: empty list, single stale, just-under-threshold, wrong
reviewDecision, wrong mergeStateStatus, mixed list (only matching
PRs alarm), custom threshold via --stale-hours, exit-code-counts-
matching-PRs, --help, unknown arg → 64, missing repo → 2.
- **Operator-runnable ad-hoc** — `scripts/check-stale-promote-pr.sh`
works from any shell with `gh` + `jq`.
- **SSOT** — one detector, the workflow YAML is just schedule +
invocation surface. Future sibling workflows that need the same
check call the same script.
## Workflow — `.github/workflows/auto-promote-stale-alarm.yml`
Triggers:
- cron `27 * * * *` (hourly, off the hour to dodge the cron herd)
- workflow_dispatch with `stale_hours` + `post_comment` overrides
Concurrency: `auto-promote-stale-alarm` group, cancel-in-progress=false
(idempotent script; no benefit to cancelling a running scan).
Permissions: `contents: read` + `pull-requests: write` (post comments).
Sparse checkout — only fetches `scripts/check-stale-promote-pr.sh`.
No node_modules, no go modules, no slow setup steps. Workflow runs
in <30s on a clean repo.
## Why "alarm + comment" not "auto-approve"
Considered options in issue #2975:
1. Slack/email alert — picked.
2. Bot-account auto-approve via molecule-ops — circumvents the
human-review gate that branch protection encodes.
3. Trusted-promote bypass via CODEOWNERS — needs Org Admin config
change; out of scope for a workflow PR.
The comment-on-PR pattern achieves (1) without external dependencies
(no Slack token, no email config). Subscribers get notified via
GitHub's existing PR notification delivery; the warning shows up in
the Actions feed.
## Why this won't false-positive on legitimate slow reviews
Threshold is 4h. Most legitimate gates clear in <1h, so 4× headroom
is plenty for slow CI. The comment is idempotent (one alarm per PR,
never re-posted), so noise is capped at one comment no matter how
long the PR sits.
## Test plan
- [x] `bash scripts/test-check-stale-promote-pr.sh` — 23/23 pass
- [x] `python3 -c 'yaml.safe_load(...)'` clean
- [x] `bash -n` clean on both scripts
- [ ] Live verification: dispatch the workflow once main has caught up,
confirm it correctly reports zero stale PRs
The drift gate caught the new SSOT parser module — without registration
the wheel ships it un-rewritten and runtime imports fail. Same pattern
as inbox_uploads, a2a_tools_delegation, a2a_tools_rbac registrations.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes#2934 item 6 — the deferred follow-up from Ryan's onboarding-
friction report. Quote: "this single command would have saved me
30 of the 45 minutes."
When push delivery fails or the install half-works, the operator
today has no signal — they hand-grep the Claude Code binary or
chase the `from versions: none` red herring. Doctor renders six
checks in one screen with concrete next-step suggestions:
1. Python version — >=3.11? (the wheel's pin)
2. Wheel install — molecule-ai-workspace-runtime importable + version
   surfaced
3. PATH — the `molecule-mcp` binary resolves on PATH; if not, prints
   the user-site bin dir to add (or recommends pipx)
4. Env vars — PLATFORM_URL + WORKSPACE_ID + token (env, *_FILE, or
   .auth_token)
5. Platform reach — GET ${PLATFORM_URL}/healthz returns 2xx
6. Registry register — POST /registry/register with the resolved
   token returns 2xx (end-to-end auth check)
Each line: `[OK|WARN|FAIL] <label>: <status>` plus a `next:` hint
when not OK. ANSI colors auto-disable on non-TTY / NO_COLOR.
Exit code: 0 on all-OK or only-WARN, 1 on any FAIL — scriptable
from CI install-checks.
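The render/exit contract, as a hedged sketch (CheckResult and render()
are illustrative; the real check functions live in workspace/mcp_doctor.py):

```python
# Hypothetical rendering sketch; CheckResult and render() are illustrative,
# the real check functions live in workspace/mcp_doctor.py.
import os
import sys
from dataclasses import dataclass

@dataclass
class CheckResult:
    label: str
    level: str        # "OK" | "WARN" | "FAIL"
    status: str
    hint: str = ""    # the "next:" suggestion, printed when not OK

def render(results: list[CheckResult]) -> int:
    use_color = sys.stdout.isatty() and not os.environ.get("NO_COLOR")
    colors = {"OK": "\033[32m", "WARN": "\033[33m", "FAIL": "\033[31m"}
    for r in results:
        tag = f"{colors[r.level]}{r.level}\033[0m" if use_color else r.level
        print(f"[{tag}] {r.label}: {r.status}")
        if r.level != "OK" and r.hint:
            print(f"       next: {r.hint}")
    # Exit contract: 0 on all-OK or only-WARN, 1 on any FAIL.
    return 1 if any(r.level == "FAIL" for r in results) else 0
```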
## Files
`workspace/mcp_doctor.py` (new) — six check functions + `run()`
entry point. Uses urllib (stdlib)
so doctor works even on a partial
install where `requests` is missing.
`workspace/mcp_cli.py` Subcommand dispatch:
molecule-mcp doctor → mcp_doctor.run()
molecule-mcp --help → usage banner
molecule-mcp → server (unchanged)
`workspace/tests/test_mcp_doctor.py` (new) — 10 tests covering each
check's pass/fail/skip path
plus the end-to-end exit-code
contract on a stripped env.
`scripts/build_runtime_package.py` Adds `mcp_doctor` to
TOP_LEVEL_MODULES so the
wheel ships the new module.
## Out of scope (deferred follow-ups)
- Claude Code-specific checks (parse ~/.claude.json, verify each
MCP entry is plugin-sourced + dev-channels flag set). That's a
separate Claude-Code-shaped doctor; lives in the channel plugin.
- Automated remediation. Doctor is diagnostic — tells the operator
what's wrong + how to fix it, doesn't apply changes.
## Verification
- python -m pytest tests/test_mcp_doctor.py -v → 10/10 PASS
- python -m pytest tests/test_mcp_cli*.py → 67/67 PASS
(existing CLI suite still green; subcommand dispatch added
before env-validation, doesn't disturb the server-boot path)
- manual: `molecule-mcp doctor` on a stripped env renders 4 FAIL
+ 2 WARN + exit code 1, with each `next:` hint actionable
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Self-review of #2935 turned up two real defects:
1. Stale README issue references — the build_runtime_package.py
README template said "(issue #2934 follow-up)" twice, but the
marketplace-plugin and `doctor` items now have dedicated tracking
issues. Updated to point at #2936 and #2937 respectively.
2. Silent fallthrough on broken MOLECULE_WORKSPACE_TOKEN_FILE — when
an operator EXPLICITLY pointed TOKEN_FILE at a path that didn't
exist / wasn't readable / was blank / contained internal whitespace,
the resolver silently returned the generic "set one of these three
vars" error. That's exactly the silent failure mode #2934 flagged
("a new user has no chance"). Refactor `_read_token_from_file_env`
to return `(token, error)`; surface the SPECIFIC failure when the
operator's intent was clearly the file path. Skip the CONFIGS_DIR
fallback in that case so the operator's config bug isn't masked
by a different source happening to work.
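A minimal sketch of the refactored resolver shape; the function name is
from this commit, the message wording matches the substrings the new
tests assert, and the rest is assumed. The caller surfaces the error and
skips the CONFIGS_DIR fallback when it is non-None:

```python
# Minimal sketch; real code lives in the mcp_cli token-resolution path.
import os

def _read_token_from_file_env() -> tuple[str | None, str | None]:
    path = os.environ.get("MOLECULE_WORKSPACE_TOKEN_FILE", "").strip()
    if not path:
        return None, None          # var unset: fall through to other sources
    if not os.path.exists(path):
        return None, f"MOLECULE_WORKSPACE_TOKEN_FILE={path} does not exist"
    try:
        with open(path, encoding="utf-8") as fh:
            token = fh.read().strip()
    except OSError as exc:
        return None, f"MOLECULE_WORKSPACE_TOKEN_FILE={path} is not readable: {exc}"
    if not token:
        return None, f"MOLECULE_WORKSPACE_TOKEN_FILE={path} is empty"
    if any(ch.isspace() for ch in token):
        return None, f"MOLECULE_WORKSPACE_TOKEN_FILE={path} contains internal whitespace"
    return token, None
```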
Adds 2 renames + 2 new tests in test_mcp_cli_split.py:
- test_missing_file_returns_specific_error (asserts "does not exist")
- test_empty_file_returns_specific_error (asserts "is empty")
- test_multi_line_file_rejected (asserts "internal whitespace")
- test_token_file_error_skips_configs_dir_fallback (asserts a valid
CONFIGS_DIR/.auth_token does NOT silently rescue a broken
TOKEN_FILE)
All 81 mcp_cli + mcp_cli_multi_workspace + mcp_cli_split tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Continues the OSS-shape refactor. After iters 4a-4d (rbac, delegation,
memory, messaging) the only behavior left in ``a2a_tools.py`` was
``report_activity`` plus three thin inbox-tool wrappers and the
``_enrich_inbound_for_agent`` helper. This iter extracts the inbox
slice to ``a2a_tools_inbox.py`` so the kitchen-sink module shrinks
from 280 LOC to ~165 LOC of imports + report_activity + back-compat
re-export blocks.
Extracted symbols:
- ``_INBOX_NOT_ENABLED_MSG`` (sentinel)
- ``_enrich_inbound_for_agent`` (poll-path peer enrichment helper)
- ``tool_inbox_peek``
- ``tool_inbox_pop``
- ``tool_wait_for_message``
Re-exports (`from a2a_tools_inbox import …`) preserve the public
``a2a_tools.tool_inbox_*`` surface so existing tests + call sites
continue to resolve unchanged.
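Roughly, the re-export plus the identity pin the drift-gate tests below
lean on (module and symbol names are real, bodies illustrative):

```python
# a2a_tools.py keeps the old surface alive via plain re-export:
from a2a_tools_inbox import (  # noqa: F401 -- back-compat re-export
    tool_inbox_peek,
    tool_inbox_pop,
    tool_wait_for_message,
)

# and test_a2a_tools_inbox_split.py pins identity, not equality:
import a2a_tools
import a2a_tools_inbox

def test_inbox_peek_is_same_object():
    # `is` catches a future wrapper that would silently orphan old coverage
    assert a2a_tools.tool_inbox_peek is a2a_tools_inbox.tool_inbox_peek
```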
New tests in test_a2a_tools_inbox_split.py:
1. **Drift gate (5)** — every previously-public symbol on a2a_tools
is the EXACT same object as a2a_tools_inbox.foo (`is`, not `==`),
catches a future "wrap with logging" refactor that silently loses
existing test coverage.
2. **Import contract (1)** — a2a_tools_inbox does NOT eagerly import
a2a_tools at module load. Pins the layered architecture: the
extracted slice depends on ``inbox`` + a lazy ``a2a_client``
import, never on the kitchen-sink that re-exports it.
3. **_enrich_inbound_for_agent branches (5)** — peer_id-empty
(canvas_user) returns dict unchanged; missing peer_id key same;
a2a_client unavailable (test harness, partial install) degrades
gracefully with a bare envelope; registry hit populates
peer_name + peer_role + agent_card_url; registry miss still
surfaces agent_card_url (constructable from peer_id alone).
The full timeout-clamp / validation / JSON-shape behavior matrix for
the three wrappers stays in test_a2a_tools_inbox_wrappers.py — those
tests pass identically against both the alias and the underlying impl.
Wiring updates:
- ``scripts/build_runtime_package.py``: add ``a2a_tools_inbox`` to
``TOP_LEVEL_MODULES`` so it ships in the runtime wheel and the
drift gate doesn't fail the next publish.
- ``.github/workflows/ci.yml``: add ``a2a_tools_inbox.py`` to
``CRITICAL_FILES`` so the 75% MCP/inbox/auth per-file floor
applies — this is now where the inbox-delivery code actually
lives.
Ryan's bug report (#2934) walked through ~45 min of debugging a stock
external-runtime install. This PR fixes the four items he flagged that
have a small surface, and stubs out the larger ones for follow-up.
Fixed in this PR
================
#1 — Python floor disclosure (README in publish bundle)
Add an explicit "Requires Python ≥3.11" section that calls out the
cryptic "Could not find a version that satisfies the requirement"
failure mode; recommend `pipx install` over `pip install` so the
binary lands on PATH automatically; show the explicit `pip install
--user` alternative with the PATH caveat.
#3 — MOLECULE_WORKSPACE_TOKEN_FILE support (mcp_workspace_resolver.py)
Add a third resolution step between the inline env var and the
in-container CONFIGS_DIR fallback. Operators can write the bearer to
a 0600 file (e.g. ~/.config/molecule/token) and point
MOLECULE_WORKSPACE_TOKEN_FILE at it, keeping the secret out of
~/.zsh_history and out of plaintext in MCP-host configs like
~/.claude.json. Inline TOKEN still wins on conflict so rotation flows
are predictable. README documents the safer option as the
recommended path. 6 new tests pin every leg (file resolves, inline
wins, missing/empty file falls through, blank env unset-equivalent,
help text advertises it).
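The three-step order, as a hedged sketch (actual code is in
mcp_workspace_resolver.py; helper shapes are assumed):

```python
# Sketch of the resolution order only; details assumed.
import os

def resolve_token() -> str | None:
    inline = os.environ.get("MOLECULE_WORKSPACE_TOKEN", "").strip()
    if inline:
        return inline          # step 1: inline env wins on conflict
    path = os.environ.get("MOLECULE_WORKSPACE_TOKEN_FILE", "").strip()
    if path and os.path.isfile(path):
        token = open(path, encoding="utf-8").read().strip()
        if token:
            return token       # step 2: 0600 file keeps the secret out of history
    # missing/empty file falls through here; a later self-review hardens this
    configs = os.environ.get("CONFIGS_DIR", "/configs")
    try:
        with open(os.path.join(configs, ".auth_token"), encoding="utf-8") as fh:
            return fh.read().strip() or None   # step 3: in-container fallback
    except OSError:
        return None
```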
#4 — Push delivery 3-condition gating (README in publish bundle)
Document that real-time push on Claude Code requires (a) the server
to declare experimental.claude/channel (we do), (b) the server to be
marketplace-plugin-sourced (operators must scaffold their own until
the official marketplace lands — see #2934 follow-up), and (c) the
--dangerously-load-development-channels flag on the claude
invocation. Until any of the three is in place, delivery silently
falls back to poll mode with no diagnostic. The README now says all
of this explicitly so a new operator doesn't grep the binary for
channel_enable to figure it out.
#8 — serverInfo.name mismatch (a2a_mcp_server.py)
The server reported `serverInfo.name = "a2a-delegation"` while
operators register it as `molecule` (the name in `claude mcp add
molecule …`). Harmless on tool routing today but matters for any
future Claude Code allowlist that gates push by hardcoded server
name. Renamed to "molecule" with an inline comment explaining the
invariant.
Deferred (separate issues to track)
===================================
#2 — covered transitively by #1's pipx recommendation; no separate fix.
#5 — `moleculesai/claude-code-plugin` marketplace repo (substantial new
repo work; the README references it as a documented follow-up).
#6 — `molecule-mcp doctor` subcommand (substantial new CLI surface;
mentioned in the README's push-vs-poll section as the planned
diagnostic for silent push fallback).
#7 — `--dangerously-load-development-channels` rename — not in our
control; that's Claude Code's flag.
Tests
=====
164/164 mcp_cli + a2a_mcp_server tests pass locally
(WORKSPACE_ID=00000000-0000-0000-0000-000000000001 pytest …) including
6 new TestTokenFileEnv cases. Wheel builds successfully via
scripts/build_runtime_package.py with the new README markers verified
in the output.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Fourth slice of the a2a_tools.py split (stacked on iter 4c). Owns the
four human-and-peer messaging MCP tools + the chat-upload helper:
* _upload_chat_files — stage local paths to /chat/uploads
* tool_send_message_to_user — push canvas-chat via /notify
* tool_list_peers — discover peers across registered workspaces
* tool_get_workspace_info — JSON-encode workspace info
* tool_chat_history — fetch prior conversation rows with a peer
a2a_tools.py shrinks from 508 → 213 LOC (−295). The remaining 213
is just report_activity + back-compat re-exports. Inbox tools
(tool_inbox_peek/pop/wait_for_message) deferred to iter 4e.
Layered architecture: messaging depends on a2a_tools_rbac (iter 4a),
a2a_client, platform_auth — NOT on kitchen-sink a2a_tools. An
import-contract test pins this so future refactors that add
`from a2a_tools import …` fail in CI.
Tests:
* 28 patch sites in TestToolSendMessageToUser + TestToolListPeers +
TestToolGetWorkspaceInfo + TestChatHistory retargeted from
`a2a_tools.{httpx, get_peers_*, get_workspace_info,
_upload_chat_files, _peer_*, list_registered_workspaces}` to
`a2a_tools_messaging.…` because the call sites moved.
* test_a2a_tools_messaging.py adds 7 new tests:
- 5 alias drift gates
- 2 import-contract tests (no top-level a2a_tools dep + a2a_tools
surfaces every messaging symbol)
137 tests total in the a2a_tools suite, all green.
Refs RFC #2873.
Third slice of the a2a_tools.py split (stacked on iter 4b). Owns the
two persistent-memory MCP tools:
* tool_commit_memory — write to /workspaces/:id/memories with RBAC
+ GLOBAL-scope tier-zero enforcement
* tool_recall_memory — search /workspaces/:id/memories with RBAC
a2a_tools.py shrinks from 609 → 508 LOC (−101). Both handlers depend
ONLY on a2a_tools_rbac (iter 4a), a2a_client, and the platform's
/memories endpoint — no entanglement with delegation or messaging.
Side-effects of the layered architecture: a2a_tools_memory's import
contract is "depends on a2a_tools_rbac, never on a2a_tools" — the
kitchen-sink module is for back-compat re-exports only. A test pins
this so a future refactor that re-introduces `from a2a_tools import …`
fails in CI.
Tests:
* 49 patch sites in TestToolCommitMemory + TestToolRecallMemory
retargeted from `a2a_tools.{_check_memory_*, _is_root_workspace,
httpx.AsyncClient}` to `a2a_tools_memory.…` because the call sites
moved.
* test_a2a_tools_memory.py adds 4 new tests (alias drift gate +
import-contract + a2a_tools-side re-export).
117 tests total (77 impl + 28 rbac + 8 delegation + 4 memory), all green.
Refs RFC #2873.
Iter 4a's new module needs to be in the rewrite list so the wheel
ships its imports prefixed correctly. Caught by 'PR-built wheel +
import smoke'.
Refs RFC #2873 iter 4a.
The iter-3 split created mcp_heartbeat / mcp_inbox_pollers /
mcp_workspace_resolver but the wheel build's drift-gate check at
scripts/build_runtime_package.py:TOP_LEVEL_MODULES wasn't updated.
Without this fix the wheel ships those modules un-rewritten, so
their imports of platform_auth / configs_dir / etc. break at
runtime. Caught by the 'PR-built wheel + import smoke' check.
Refs RFC #2873 iter 3.
The drift gate in build_runtime_package.py rejects any workspace/*.py
module not listed in TOP_LEVEL_MODULES — it would ship un-rewritten
and break wheel imports. Add inbox_uploads (introduced in this PR)
to the list.
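Illustrative shape of that gate (TOP_LEVEL_MODULES is the real list in
scripts/build_runtime_package.py; the rest is a sketch):

```python
# Sketch: reject any workspace/*.py module missing from the registry.
from pathlib import Path

def check_module_drift(workspace_dir: Path, top_level_modules: set[str]) -> None:
    on_disk = {p.stem for p in workspace_dir.glob("*.py")}
    missing = on_disk - top_level_modules
    if missing:
        raise SystemExit(
            "not in TOP_LEVEL_MODULES (would ship un-rewritten): "
            + ", ".join(sorted(missing))
        )
```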
PR #2756 piped adapter.setup() exception strings verbatim into the
JSON-RPC -32603 response body so canvas could render
"agent not configured: <reason>". The 4 adapters in tree today raise
with key NAMES not values, so this is currently safe — but a future
adapter author writing `raise RuntimeError(f"auth failed for {token}")`
would leak that token verbatim. Issue #2760 flagged the risk; this PR
closes it.
workspace/secret_redactor.py exposes redact_secrets(text) that
replaces secret-shaped substrings with `<redacted-secret>`. Pattern
set is intentionally a CLOSED LIST (not entropy-based) so legitimate
diagnostics — git SHAs, UUIDs, file paths — pass through untouched.
Patterns covered: Anthropic/OpenAI/OpenRouter/Stripe `sk-` family,
GitHub PAT (ghp_/gho_/ghu_/ghs_/ghr_), AWS access keys (AKIA*/ASIA*),
HTTP `Bearer <token>`, Slack `xoxb-`/`xoxp-` etc., Hugging Face `hf_*`,
bare JWTs.
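A trimmed sketch of the closed-list approach; the shipped pattern set in
workspace/secret_redactor.py is longer and the exact regexes may differ:

```python
# Length floors keep short look-alikes (git SHAs, UUIDs) out of the match set.
import re

_PATTERNS = [
    re.compile(r"\bsk-[A-Za-z0-9_-]{20,}"),                 # sk- provider family
    re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}"),            # GitHub PATs
    re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),           # AWS access key ids
    re.compile(r"\bBearer\s+[A-Za-z0-9._~+/=-]{20,}"),      # HTTP bearer tokens
    re.compile(r"\bxox[a-z]-[A-Za-z0-9-]{10,}"),            # Slack token family
    re.compile(r"\bhf_[A-Za-z0-9]{20,}"),                   # Hugging Face
    re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),  # bare JWTs
]

def redact_secrets(text: str | None) -> str | None:
    if not text:
        return text            # None/empty pass through untouched
    for pat in _PATTERNS:
        text = pat.sub("<redacted-secret>", text)
    return text
```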
Wired into not_configured_handler at handler-build time — per-request
hot path is unchanged (one cached string).
Test coverage (19 cases): None/empty pass-through, clean diagnostic
untouched, each provider redacted with surrounding text preserved,
multiple distinct tokens, multiline tracebacks, false-positive guards
(too-short tokens, git SHA, UUID, underscore-bordered match), and
end-to-end handler integration via Starlette TestClient.
Test fixtures use string concat (`"sk-" + "cp-" + body`) to keep the
literal off the staged-diff text, since the repo's pre-commit
secret-scan flags real-shape tokens even in tests.
`secret_redactor` registered in TOP_LEVEL_MODULES (drift gate).
Closes#2760
Pairs with: PR #2756, PR #2775
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2756's contract — card route always mounted regardless of
adapter.setup() outcome — lived inline in main.py's `# pragma: no cover`
boot sequence. A future refactor that re-coupled the two would have
silently bypassed PR #2756 and shipped the original "stuck booting
forever" UX again, with no pytest catching it.
This change extracts route assembly into workspace/boot_routes.py's
build_routes(card, executor, adapter_error) and pins the contract with
6 integration tests using Starlette's TestClient:
- test_card_route_serves_200_when_adapter_ready: happy path
- test_card_route_serves_200_when_adapter_failed: misconfigured boot,
card still 200, skill stubs survive
- test_jsonrpc_returns_503_when_no_executor: full -32603 envelope with
the adapter_error in error.data
- test_jsonrpc_returns_503_with_generic_when_no_error_string: fallback
reason for the rare case main.py reaches this branch without one
- test_card_route_does_not_depend_on_executor: direct PR #2756
regression guard — both branches MUST mount the card route
- test_executor_present_does_not_mount_not_configured_handler: sanity
that a healthy workspace doesn't return -32603 to every request
Conftest stubs extended with a2a.server.routes / request_handlers
classes so the tests work under the existing a2a-mock infra (pattern
matches the AgentCard/AgentSkill stubs added for PR #2765).
main.py now calls build_routes; the inline if/else is gone. Same
production behaviour, cleaner shape, regression-proof.
Heavy a2a-sdk imports inside build_routes() are lazy (deferred to the
executor-only branch) so tests that only exercise the not-configured
path don't pull DefaultRequestHandler / InMemoryTaskStore.
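A skeleton under stated assumptions: card is treated as a plain dict, the
card path and the JSON-RPC helper are placeholders; only the signature
build_routes(card, executor, adapter_error) comes from this change:

```python
from starlette.responses import JSONResponse
from starlette.routing import Route

def _jsonrpc_route(executor) -> Route:
    # Placeholder: the real branch wires the a2a-sdk request handler; elided.
    async def handler(request):
        raise NotImplementedError
    return Route("/", handler, methods=["POST"])

def build_routes(card: dict, executor, adapter_error: str | None) -> list[Route]:
    async def serve_card(request):
        return JSONResponse(card)

    # The PR #2756 contract: the card route mounts on BOTH branches.
    routes = [Route("/.well-known/agent.json", serve_card)]

    if executor is None:
        reason = adapter_error or "adapter not configured"

        async def not_configured(request):
            return JSONResponse(
                {"error": {"code": -32603, "message": "agent not configured",
                           "data": reason}},
                status_code=503,
            )

        routes.append(Route("/", not_configured, methods=["POST"]))
    else:
        # Heavy a2a-sdk imports stay lazy on this branch so not-configured
        # tests never pull DefaultRequestHandler / InMemoryTaskStore.
        from a2a.server.request_handlers import DefaultRequestHandler  # noqa: F401
        routes.append(_jsonrpc_route(executor))
    return routes
```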
card_helpers + boot_routes registered in TOP_LEVEL_MODULES (build
drift gate would have caught the missing entry on the wheel-publish
smoke).
All 18 related tests pass (test_boot_routes.py: 6, test_card_helpers.py:
6, test_not_configured_handler.py: 6).
Closes#2761
Pairs with: PR #2756 (decouple agent-card from setup),
PR #2765 (defensive isolation of enrichment + transcript)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2756 added a try/except around adapter.setup() so a missing LLM key
doesn't crash the workspace boot. Two paths that now run AFTER setup
succeeds were not similarly isolated, leaving small but real coupling
risks for future adapter authors.
1. **Skill metadata enrichment swap (main.py:248-259).** When
adapter.setup() returns, main.py reads adapter.loaded_skills and
replaces the static stubs in agent_card.skills with rich metadata
(description, tags, examples). The list comprehension assumes each
element exposes .metadata.{id,name,description,tags,examples}. A
future adapter that returns a non-canonical shape would raise
AttributeError, propagate to the outer except, capture as
adapter_error, and silently degrade an OK boot to the
not-configured state — even though setup() actually succeeded.
Extract to card_helpers.enrich_card_skills(card, loaded_skills) →
bool. Helper swallows enrichment failures, logs the cause, returns
False, leaves the static stubs in place. setup() success path
continues unchanged. 6 unit tests cover: None input, empty list,
canonical happy path, missing .metadata attr, partial .metadata
(missing one canonical field), atomic-failure-no-partial-swap.
2. **/transcript handler (main.py:513).** Calls await
adapter.transcript_lines(...) without try/except. BaseAdapter's
default returns {"supported": false} so today's 4 adapters never
trigger this — but a future adapter override that assumes setup()
ran would surface as a 500 from Starlette's default error handler
instead of a useful 503 with the exception class + message.
Inline try/except returns 503 with the reason, matching the
not-configured JSON-RPC handler's pattern.
Both changes match the architectural principle the PR #2756 chain
established: availability (workspace reachable) is decoupled from
configuration / adapter behavior. Operators see useful errors instead
of silent degradation; future adapter authors can't accidentally
break tenant readiness with a shape mismatch.
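The enrichment helper from item 1, as a minimal sketch (the
loaded_skills shape is from the description above; workspace/card_helpers.py
is the source of truth):

```python
import logging

log = logging.getLogger(__name__)

def enrich_card_skills(card, loaded_skills) -> bool:
    try:
        if not loaded_skills:
            return False
        enriched = [
            {
                "id": s.metadata.id,
                "name": s.metadata.name,
                "description": s.metadata.description,
                "tags": s.metadata.tags,
                "examples": s.metadata.examples,
            }
            for s in loaded_skills
        ]
    except Exception:
        # Atomic failure: log and keep the static stubs; never let a shape
        # mismatch degrade a successful setup() into not-configured.
        log.exception("skill enrichment failed; keeping static card stubs")
        return False
    card.skills = enriched   # swap only after the whole list built cleanly
    return True
```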
Adds:
- workspace/card_helpers.py (~50 lines, 100% covered)
- workspace/tests/test_card_helpers.py (6 tests)
- AgentCard/AgentSkill/AgentCapabilities/AgentInterface stubs to
workspace/tests/conftest.py so future card-related tests work
under the existing a2a-mock infrastructure
- card_helpers in TOP_LEVEL_MODULES (drift gate would have caught it)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The wheel-build drift gate caught the new module added in this PR —
without registering it, the published wheel would ship `import
not_configured_handler` un-rewritten, which would `ModuleNotFoundError`
at runtime under `molecule_runtime.main`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the recurrence path of PR #2556. The data fix realigned 8→4
templates in publish-runtime.yml's TEMPLATES variable, but the
underlying drift hazard was unguarded — the next manifest change
could silently leave cascade out of sync again.
This gate fails any PR that changes manifest.json or
publish-runtime.yml in a way that makes the cascade list diverge
from manifest workspace_templates (suffix-stripped). Either
direction is caught:
missing-from-cascade templates that won't auto-rebuild on a new
wheel publish (the codex-stuck-on-stale-runtime
bug class — PR #2512 added codex to manifest,
cascade wasn't updated, codex stayed pinned to
its last-built runtime version for weeks).
extra-in-cascade cascade dispatches to deprecated templates
(the wasted-API-calls + dead-CI-noise class —
PR #2536 pruned 5 templates from manifest;
cascade kept dispatching to all 8 until
PR #2556).
Triggers narrowly: only on PRs that touch manifest.json,
publish-runtime.yml, or the script itself. Fast (single grep+sed+comm
pipeline, no Go build).
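The comparison, shown in Python for clarity (the shipped gate is the
grep+sed+comm shell pipeline; the manifest key is from this description,
the suffix rule is an assumption):

```python
import json
import re
import sys

def check_parity(manifest_path: str, cascade_templates: set[str]) -> None:
    manifest = json.load(open(manifest_path))
    expected = {re.sub(r"-template$", "", name)      # suffix-strip (assumed rule)
                for name in manifest["workspace_templates"]}
    for name in sorted(expected - cascade_templates):
        print(f"MISSING: {name}")   # won't auto-rebuild on the next wheel publish
    for name in sorted(cascade_templates - expected):
        print(f"EXTRA: {name}")     # cascade dispatches to a deprecated template
    if expected != cascade_templates:
        sys.exit(1)
```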
Surfaced during the RFC #388 prior-art audit; folded in as the
structural follow-up that data fix #2556 promised.
Self-tested both failure modes locally before commit:
- Drop codex from cascade → script fails with "MISSING: codex"
- Add langgraph to cascade → script fails with "EXTRA: langgraph"
Refs: https://github.com/Molecule-AI/molecule-controlplane/issues/388
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CP's deprovision flow calls Secrets.DeleteSecret() (provisioner/ec2.go:806)
but only when the deprovision runs to completion. Crashed provisions and
incomplete teardowns leak the per-tenant `molecule/tenant/<org_id>/bootstrap`
secret. At ~$0.40/secret/month, ~45 leaked secrets surfaced as ~$19/month
on the AWS cost dashboard.
The tenant_resources audit table (mig 024) tracks four kinds today —
CloudflareTunnel, CloudflareDNS, EC2Instance, SecurityGroup — and the
existing reconciler doesn't catch Secrets Manager orphans. The proper fix
(KindSecretsManagerSecret + recorder hook + reconciler enumerator) is filed
as a follow-up controlplane issue. This sweeper is the immediate stopgap.
Parallel-shape to sweep-cf-tunnels.sh:
- Hourly schedule offset (:30, between sweep-cf-orphans :15 and
sweep-cf-tunnels :45) so the three janitors don't burst CP admin
at the same minute.
- 24h grace window — never deletes a secret younger than the
provisioning roundtrip, so an in-flight provision can't be racemurdered.
- MAX_DELETE_PCT=50 default (mirrors sweep-cf-orphans for durable
resources; tenant secrets should track 1:1 with live tenants).
- Same schedule-vs-dispatch hardening as the other janitors:
schedule → hard-fail on missing secrets, dispatch → soft-skip.
- 8-way xargs parallelism, dry-run by default, --execute to delete.
Requires a dedicated AWS_JANITOR_* IAM principal — the prod molecule-cp
principal lacks secretsmanager:ListSecrets (it only has scoped
Get/Create/Update/Delete). The workflow's verify-secrets step will hard-fail
on the first scheduled run until those secrets are configured, surfacing
the missing setup loudly rather than silently no-op'ing.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The wheel-build drift gate caught it correctly: any new top-level
module under workspace/ must be listed in TOP_LEVEL_MODULES so its
`from event_log import …` statements get rewritten to
`from molecule_runtime.event_log import …` at package time.
Without this entry, the published wheel ships event_log.py un-rewritten
and crashes at runtime with ModuleNotFoundError on first heartbeat.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The harness needs Authorization + X-Molecule-Org-Id (per-tenant, NOT
CP_ADMIN_API_TOKEN) when targeting *.moleculesai.app subdomains.
The existing single-Origin-header form silently failed with 404 against
staging tenants since the SaaS edge WAF rewrites unauthenticated
/workspaces calls to Next.js (per
reference_saas_waf_origin_header.md).
Switch to a headers array so multiple -H flags compose cleanly with
curl arg-quoting, and document the env var contract at the top of
the script.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds two scripts:
scripts/test-all-runtimes-a2a-e2e.sh
Provisions one workspace per runtime (claude-code, hermes, codex,
openclaw), sets provider keys, waits online, sends two A2A messages
per workspace. First message validates round-trip; second message
validates session continuity. Cleans up via trap on EXIT.
scripts/test-hermes-plugin-e2e.sh
Hermes-only variant focused on the plugin /a2a/inbound path.
Proof-point: session continuity between turns (the plugin path's
deliverable; old chat-completions path lost context per turn).
Both honor SKIP_<runtime> env vars for incremental testing and tolerate
the SaaS edge WAF Origin header requirement (per
reference_saas_waf_origin_header.md).
Run:
  PLATFORM=https://demo-tenant.staging.moleculesai.app \
./scripts/test-all-runtimes-a2a-e2e.sh
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The hourly Sweep stale Cloudflare Tunnels job got cancelled mid-cleanup
on 2026-05-02 (run 25248788312, killed at 5min after deleting 424/672
stale tunnels). A second manual dispatch finished the remaining 254
fine, so the immediate backlog cleared, but two underlying bugs would
re-trip on the next big cleanup.
Bug 1: serial delete loop. The execute branch was a `while read; do
curl -X DELETE; done` pipeline at ~0.7s/tunnel — fine for the
steady-state cleanup of a handful, but a 600+ backlog needs ~7-8min.
This commit fans out to $SWEEP_CONCURRENCY (default 8) workers via
`xargs -P 8 -L 1 -I {} bash -c '...' _ {} < "$DELETE_PLAN"`. With 8x
parallelism the same 600+ list drains in ~60s. Notes:
- We use stdin (`<`) not GNU's `xargs -a FILE` so the script stays
portable to BSD xargs (matters for local-runner testing on macOS).
- We pass ONLY the tunnel id on argv. xargs tokenizes on whitespace
by default; tab-separating id+name on argv risks mangling. The
name is kept in a side-channel id->name map ($NAME_MAP) and looked
up by the worker only on failure, for FAIL_LOG readability.
- Workers print exactly `OK` or `FAIL` on stdout; tally with
`grep -c '^OK$' / '^FAIL$'`.
- On non-zero FAILED, log the first 20 lines of $FAIL_LOG as
"Failure detail (first 20):" — same diagnostic surface as before
but consolidated so we don't spam logs on a flaky CF API.
Bug 2: workflow's 5-min cap was set as a hangs-detector but turned out
to be a real-job-too-slow detector. Raised to 30 min — generous
headroom for the ~60s steady-state run while still surfacing genuine
hangs (and in line with the sweep-cf-orphans companion job).
Bug 3 (drive-by): the existing trap was `trap 'rm -rf "$PAGES_DIR"'
EXIT`, which would have been silently overwritten by any later trap
registration. Replaced with a single `cleanup()` function that wipes
PAGES_DIR + all four new tempfiles (DELETE_PLAN, NAME_MAP, FAIL_LOG,
RESULT_LOG), called once via `trap cleanup EXIT`.
Verification:
- bash -n scripts/ops/sweep-cf-tunnels.sh: clean
- shellcheck -S warning scripts/ops/sweep-cf-tunnels.sh: clean
- python3 yaml.safe_load on the workflow: clean
- Synthetic 30-line delete plan with every 7th id sentinel'd to
return {"success":false}: TEST PASS, DELETED=26 FAILED=4, FAIL_LOG
side-channel name lookup verified.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The page-merge loop passed the entire accumulating tunnel JSON to
python3 -c via argv on every iteration. On a busy account (verified
2026-05-02: 672 tunnels, 14 pages on Hongmingwangrabbit account) this
exceeds the GH Ubuntu runner's combined argv+envp limit (~128 KB) and
dies with `python3: Argument list too long` at exit 126 — the workflow
has been silently failing this way since the very first run that hit a
real account, masked earlier by a missing-CF_ACCOUNT_ID secret check.
Buffer each page response to a file under a temp dir, merge from disk
at the end. Also bumps the page cap from 20 to 40 (1000 → 2000 tunnel
ceiling) so the existing soft-cap warning has headroom; the disk-merge
shape is O(n) in tunnel count rather than the previous O(n^2) so the
larger ceiling is cheap.
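The disk-merge shape, sketched in Python (page file naming assumed; the
Cloudflare v4 envelope's `result` array is real):

```python
import json
from pathlib import Path

def merge_pages(pages_dir: str) -> list[dict]:
    tunnels: list[dict] = []
    for page in sorted(Path(pages_dir).glob("page_*.json")):
        tunnels.extend(json.loads(page.read_text())["result"])
    return tunnels   # O(n) in tunnel count; nothing rides argv
```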
Verified locally against the live account (672 tunnels): script now
runs cleanly to the existing MAX_DELETE_PCT safety gate, which trips
at 99% > 90% as designed and surfaces the actual orphan backlog for
operator-driven cleanup.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wheel-build smoke gate detected `configs_dir` missing from
scripts/build_runtime_package.py:TOP_LEVEL_MODULES. Without it the
build would ship `import configs_dir` un-rewritten and every
external-runtime install would die on `ModuleNotFoundError` at first
import.
Two callers used `import configs_dir as _configs_dir` to belt-and-
suspenders against an imagined name collision, but the rewriter
rejects `import X as Y` because the rewrite would produce
`import molecule_runtime.X as X as Y` (invalid syntax). No actual
collision exists (only docstring/comment references). Switched to
plain `import configs_dir`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Demo-day preparation bundle for the funding demo (~2026-05-06). Adds:
- scripts/demo-freeze.sh — captures current ghcr.io
workspace-template-* :latest digests for all 8 runtimes, then
disables both cascade vectors that could re-tag :latest mid-demo:
publish-runtime.yml in molecule-core (PATH 1 — staging push to
workspace/** auto-bumps the wheel and fans out to 8 templates) and
publish-image.yml in each of the 8 template repos (PATH 2 — direct
template repo merge re-tags :latest). Defaults to dry-run; requires
--execute to apply. Writes both digest + workflow receipts to
scripts/demo-freeze-snapshots/.
- scripts/demo-thaw.sh — re-enables every workflow demo-freeze.sh
disabled, keyed off the receipt timestamp. Defaults to executing
(the inverse safety polarity from freeze, where the destructive
default is dry-run). --dry-run prints without applying.
- scripts/demo-day-runbook.md — operator runbook indexing the six
rollback levers (platform image rollback, template image rollback,
tenant redeploy, workspace delete, Railway rollback, Vercel
rollback) plus pre-warm timing and post-demo cleanup. Also covers
read-only diagnostics for "is this working?" moments and the
CP_ADMIN_API_TOKEN rotation step that must follow demo (the token
gets copy-pasted into shells during incident response).
- scripts/demo-freeze-snapshots/.gitignore — generated freeze
receipts are operational state, not source. Tracked .gitkeep so
the directory exists when the script writes to it.
Both scripts dry-run-tested locally. Did not exercise --execute since
that would actually disable production workflows mid-development.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The existing wheel-publish smoke (`wheel_smoke.py`) only IMPORTS
`molecule_runtime.main` at module scope. Lazy imports buried inside
`async def execute(...)` bodies (e.g. `from a2a.types import FilePart`)
NEVER evaluate at static-import time — they crash at first message
delivery in production.
The 2026-04-2x v0→v1 a2a-sdk migration shipped 5 such regressions in
templates that all looked fine at module-load smoke. This change adds
`smoke_mode.py` plus a `MOLECULE_SMOKE_MODE=1` short-circuit in
`main.py`: after `adapter.create_executor(...)`, the boot path invokes
`executor.execute(stub_ctx, stub_queue)` once with a 5s timeout
(`MOLECULE_SMOKE_TIMEOUT_SECS`). Healthy import tree → execution
proceeds far enough to hit a network boundary and times out (exit 0).
Broken lazy import → `ImportError` / `ModuleNotFoundError` from inside
the executor body (exit 1). Other downstream errors (auth, validation)
pass — those are caught by adapter-level tests, not this gate.
Stub `(RequestContext, EventQueue)` is built from the real a2a-sdk so
SendMessageRequest/RequestContext constructor changes also surface as
import-tree failures (the regression class also includes "SDK
refactored mid-publish"). The stub-build itself is wrapped — if it
raises, that's a smoke fail too.
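Condensed sketch of the short-circuit's exit contract (env var names are
from this commit; control flow simplified):

```python
import asyncio
import os

async def run_boot_smoke(executor, stub_ctx, stub_queue) -> int:
    timeout = float(os.environ.get("MOLECULE_SMOKE_TIMEOUT_SECS", "5"))
    try:
        await asyncio.wait_for(executor.execute(stub_ctx, stub_queue), timeout)
    except asyncio.TimeoutError:
        return 0   # healthy import tree: ran far enough to hit a network boundary
    except ImportError:        # includes ModuleNotFoundError
        return 1   # broken lazy import inside the executor body
    except Exception:
        return 0   # auth/validation failures are another gate's job
    return 0
```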
Phase 2 (separate PR, molecule-ci) wires this into
publish-template-image.yml so the publish gate runs the boot smoke
against every template image before pushing the tag.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two small follow-ups to the PR #2433 → #2436 → #2439 incident chain.
1) `import inbox # noqa: F401` in workspace/a2a_mcp_server.py was
misleading — `inbox` IS used (at the bridge wiring inside main()).
F401 means "imported but unused", which would mask a real future
F401 if the usage is removed. Drop the noqa, keep the explanatory
block comment about the rewriter's `import X` → `import mr.X as X`
expansion (and the `import X as Y` → `import mr.X as X as Y` trap
that the comment exists to prevent re-introducing).
2) scripts/test_build_runtime_package.py — 17 unit tests covering
`rewrite_imports()` and `build_import_rewriter()` in
scripts/build_runtime_package.py. Until now the function had zero
coverage despite the entire wheel build depending on it. Tests
pin: bare-import aliasing, dotted-import preservation, indented
imports, from-imports (simple + dotted + multi-symbol + block),
the `import X as Y` rejection added in PR #2436 (with comment-
stripping + indented + comma-not-alias edge cases), allowlist
anchoring (`a2a` ≠ `a2a_tools`), and end-to-end reproduction
of the PR #2433 failing pattern + the #2436 fix pattern.
3) Wire scripts/test_*.py into CI by adding a second discover pass
to test-ops-scripts.yml. Top-level scripts/ tests live alongside
their target file (parallels the scripts/ops/ test layout); the
existing scripts/ops/ pass keeps running because scripts/ops/
has no __init__.py so a single discover from scripts/ root
doesn't recurse. Two passes is simpler than retrofitting
namespace packages. Path filter widened from `scripts/ops/**`
to `scripts/**` so PRs touching the build script trigger the
new tests.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2433 (notifications/claude/channel) shipped 'import inbox as
_inbox_module' inside a2a_mcp_server.py:main(). The build script's
import rewriter expands plain 'import inbox' to
'import molecule_runtime.inbox as inbox', so the original source
became 'import molecule_runtime.inbox as inbox as _inbox_module',
which is invalid Python.
Caught at the publish-runtime + PR-built-wheel-smoke gate (the
SyntaxError trace is in run 25200422679). The wheel didn't ship to
PyPI because publish-runtime's smoke-import step refused to install
it, but staging is currently sitting on a broken-build commit until
this fix-forward lands.
Changes:
- a2a_mcp_server.py: lift `import inbox` to top of file (rewriter
produces clean `import molecule_runtime.inbox as inbox`), call
inbox.set_notification_callback directly in main()
- build_runtime_package.py: rewrite_imports() now raises ValueError
when it sees 'import X as Y' for any X in the workspace allowlist,
instead of silently producing a syntax-error wheel. Operator gets
a clear actionable error at build time pointing at the offending
line + suggested rewrites ('from X import …' or plain 'import X').
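A per-line sketch of the rewrite-or-reject rule (the real rewrite_imports()
works over whole files and its regexes differ; ALLOWLIST here is an
illustrative subset, and from-imports are handled separately):

```python
import re

ALLOWLIST = {"inbox", "platform_auth", "configs_dir"}   # illustrative subset

def rewrite_line(line: str) -> str:
    m = re.match(r"^(\s*)import\s+(\w+)(\s+as\s+\w+)?\s*(#.*)?$", line)
    if not m or m.group(2) not in ALLOWLIST:
        return line            # anchored on the whole name: a2a != a2a_tools
    indent, mod, alias, comment = m.groups()
    if alias:
        # 'import X as Y' would expand to the invalid 'import mr.X as X as Y'
        raise ValueError(f"'import {mod}{alias}': use plain 'import {mod}' "
                         f"or 'from {mod} import ...'")
    rewritten = f"{indent}import molecule_runtime.{mod} as {mod}"
    return rewritten + (f"  {comment}" if comment else "")
```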
The build-time gate (this PR's rewriter check) catches the regression
class earlier than the smoke-time gate (PR #2433's failure). Adding
'PR-built wheel + import smoke' to staging branch protection's
required checks is filed separately so this class doesn't merge again.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds a notification seam to the universal molecule-mcp wheel so push-
notification-capable MCP hosts (Claude Code today; any compliant
client tomorrow) get inbound A2A messages as conversation interrupts
instead of having to poll wait_for_message / inbox_peek.
Wire-up:
- inbox.py: module-level _NOTIFICATION_CALLBACK + set_notification_callback()
Fires from InboxState.record() AFTER lock release, with same dict
shape inbox_peek returns. Best-effort — a raising callback never
prevents the message from landing in the queue.
- a2a_mcp_server.py: _build_channel_notification() pure helper +
bridge wiring in main() that schedules notifications via
asyncio.run_coroutine_threadsafe (poller is a daemon thread, MCP
loop is asyncio).
- Method name 'notifications/claude/channel' matches the contract
documented in molecule-mcp-claude-channel/server.ts:509.
- wheel_smoke.py: pin set_notification_callback as a published name,
same regression class as the 0.1.16 main_sync incident.
Pollers (wait_for_message / inbox_peek) keep working unchanged for
runtimes without notification support.
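Condensed sketch of the seam; the real callback fires from
InboxState.record(), flattened here to module level for brevity:

```python
import threading
from typing import Callable, Optional

_NOTIFICATION_CALLBACK: Optional[Callable[[dict], None]] = None
_LOCK = threading.Lock()
_QUEUE: list[dict] = []

def set_notification_callback(cb: Optional[Callable[[dict], None]]) -> None:
    global _NOTIFICATION_CALLBACK
    _NOTIFICATION_CALLBACK = cb

def record(message: dict) -> None:
    with _LOCK:
        _QUEUE.append(message)       # the message lands no matter what
    cb = _NOTIFICATION_CALLBACK      # fire AFTER lock release
    if cb is not None:
        try:
            cb(message)              # same dict shape inbox_peek returns
        except Exception:
            pass                     # best-effort: a raising cb never loses mail
```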
Tests: 6 new in test_inbox.py (callback fires once on record, dedupe
short-circuits before fire, raising cb doesn't break inbox, set/clear
semantics), 5 new in test_a2a_mcp_server.py (method name pin, content
mapping, meta routing, no-id JSON-RPC notification spec, missing-
field tolerance). All 59 combined tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The universal MCP server (a2a_mcp_server.py) was outbound-only — agents
in standalone runtimes (Claude Code, hermes, codex, etc.) could
delegate, list peers, and write memories, but never observed the
canvas-user or peer-agent messages addressed to them. This blocked
"constantly responding" loops without forcing operators back onto a
runtime-specific channel plugin.
This PR closes the inbound gap with a poller-fed in-memory queue and
three new MCP tools:
- wait_for_message(timeout_secs?) — block until next message arrives
- inbox_peek(limit?) — list pending messages (non-destructive)
- inbox_pop(activity_id) — drop a handled message
A daemon thread polls /workspaces/:id/activity?type=a2a_receive every
5s, fills the queue from the cursor (since_id), and persists the cursor
to ${CONFIGS_DIR}/.mcp_inbox_cursor so a restart doesn't replay backlog.
On 410 (cursor pruned) we fall back to since_secs=600 for a bounded
recovery window. Activity-row → InboxMessage extraction mirrors the
molecule-mcp-claude-channel plugin's extractText (envelope shapes #1-3
+ summary fallback).
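A loose sketch of the poller loop under that contract (response shape and
error handling are assumptions):

```python
import json
import os
import time
import urllib.error
import urllib.request
from pathlib import Path

def poll_loop(platform_url, workspace_id, headers, enqueue):
    cursor = Path(os.environ.get("CONFIGS_DIR", "/configs")) / ".mcp_inbox_cursor"
    since_id = cursor.read_text().strip() if cursor.exists() else ""
    while True:
        query = (f"type=a2a_receive&since_id={since_id}" if since_id
                 else "type=a2a_receive&since_secs=600")  # bounded recovery window
        req = urllib.request.Request(
            f"{platform_url}/workspaces/{workspace_id}/activity?{query}",
            headers=headers)
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                rows = json.load(resp)
        except urllib.error.HTTPError as exc:
            if exc.code == 410:        # cursor pruned server-side
                since_id = ""
                continue
            rows = []
        for row in rows:
            enqueue(row)               # activity-row -> InboxMessage, elided
            since_id = str(row["id"])
        if since_id:
            cursor.write_text(since_id)   # restart doesn't replay backlog
        time.sleep(5)
```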
mcp_cli.main starts the poller alongside the existing register +
heartbeat threads. In-container runtimes (which have push delivery via
canvas WebSocket) skip activation, so inbox tools return an
informational "(inbox not enabled)" message instead of double-delivery.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Ship the baseline universal MCP path that any external runtime (Claude
Code, hermes, codex, anything that speaks MCP stdio) can use, before
optimizing per-runtime channels. Today the workspace MCP server only
spins up inside the container; external operators have no way to call
the 8 platform tools (delegate_task, list_peers, send_message_to_user,
commit_memory, etc.) from outside.
Three additive changes:
1. **`platform_auth.get_token()` env-var fallback** — adds
`MOLECULE_WORKSPACE_TOKEN` as a fallback when no
`${CONFIGS_DIR}/.auth_token` file exists. File-first preserves
in-container behavior unchanged. External operators (no /configs
volume) now have a way to supply the token without faking the
filesystem layout.
2. **`molecule-mcp` console script** — adds a new entry point in the
published `molecule-ai-workspace-runtime` PyPI wheel. Operators run
`pip install molecule-ai-workspace-runtime`, set 3 env vars
(WORKSPACE_ID, PLATFORM_URL, MOLECULE_WORKSPACE_TOKEN), and register
the binary in their agent's MCP config. `mcp_cli.main` is a thin
validator wrapper — it checks env BEFORE importing the heavy
`a2a_mcp_server` module so a misconfigured first-run gets a friendly
3-line error instead of a 20-line module-level RuntimeError
traceback.
3. **Wheel smoke gate** — extends `scripts/wheel_smoke.py` to assert
`cli_main` and `mcp_cli.main` are importable. Same regression class
as the 0.1.16 main_sync incident: a silent rename or unrewritten
import here would break every external operator on the next wheel
publish (memory: feedback_runtime_publish_pipeline_gates.md).
Test coverage:
- `tests/test_platform_auth.py` — 8 new tests for the env-var fallback:
file-priority, env-fallback, whitespace handling, cache, header
construction, empty-env-as-unset.
- `tests/test_mcp_cli.py` — 8 new tests for the validator: each
required var separately, file-or-env satisfies token requirement,
whitespace-only env treated as missing, help mentions canvas Tokens
tab.
- Full `workspace/tests/` suite green: 1346 passed, 1 skipped.
- Local end-to-end: built wheel, installed in venv, ran `molecule-mcp`
with no env → friendly error; with env → MCP server starts.
Why now / why this shape: user redirect was "support the baseline
first so all runtimes can use, then optimize". A claude-only MCP
channel leaves hermes/codex/third-party operators broken on
runtime=external. This PR ships the runtime-agnostic baseline; per-
runtime polish (claude-channel push delivery, hermes-native
bindings) is a follow-up PR. PR #2412 fixed the partner bug where
canvas Restart silently revoked the operator's token — the two
together unblock the external-runtime story end-to-end.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Iterates a list of tenant slugs (default canary set on production,
operator-supplied on staging), curls each tenant's /buildinfo plus
canvas's /api/buildinfo, compares to origin/main's HEAD SHA, prints a
table with one of {current, stale, unreachable} per surface. Returns
non-zero if any surface is stale, so it can be wired into a periodic
alert later.
Why this exists: every "is the fix live?" question used to be
answered with a one-off curl + git rev-parse + manual diff. This
script does that uniformly across every public surface (workspace
tenants + canvas) and is parseable. The redeploy verifier (#2398)
covers the deploy moment; this covers any-time-after.
Reads EXPECTED_SHA from `gh api repos/Molecule-AI/molecule-core/
commits/main` so it always reflects the actual upstream tip, not
local working-copy state. Falls back to local origin/main with a
WARN if `gh` isn't logged in — debugging is still useful even if
the comparison may lag.
Depends on:
- #2409 (TenantGuard /buildinfo allowlist) — without it every
tenant looks "unreachable" because the route 404s before the
handler. Already merged on staging; will hit production after
the next staging→main fast-forward + redeploy.
- #2407 (canvas /api/buildinfo) — already on main + Vercel.
Usage:
./scripts/ops/check-prod-versions.sh # production canary set
TENANT_SLUGS="a b c" ./scripts/ops/check-prod-versions.sh # custom set
ENV=staging TENANT_SLUGS="..." ./scripts/ops/check-prod-versions.sh
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
publish-runtime.yml had a broad smoke (AgentCard call-shape, well-known
mount alignment, new_text_message) inline as a heredoc. runtime-prbuild-
compat.yml had a narrow inline smoke (just `from main import main_sync`).
Result: a PR could introduce SDK shape regressions that pass at PR time
and only fail at publish time, post-merge.
Extract the broad smoke into scripts/wheel_smoke.py and invoke it from
both workflows. PR-time gate now matches publish-time gate — same script,
same assertions. Eliminates the drift hazard of two heredocs that have
to be kept in lockstep manually.
Verified locally:
* Built wheel from workspace/ source, installed in venv, ran smoke → pass
* Simulated AgentCard kwarg-rename regression → smoke catches it as
`ValueError: Protocol message AgentCard has no "supported_interfaces"
field` (the exact failure mode of #2179 / supported_protocols incident)
Path filter for runtime-prbuild-compat extended to include
scripts/wheel_smoke.py so smoke-only edits get PR-validated. publish-
runtime path filter intentionally NOT extended — smoke-only edits should
not auto-trigger a PyPI version bump.
Subset of #131 (the broader "invoke main() against stub config" goal
remains pending — main() needs a config dir + stub platform server).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI failure: the Ops scripts (unittest) job runs `python -m unittest
discover` which doesn't have pytest installed. test_check_migration_
collisions.py imported pytest unconditionally, failing module import:
ImportError: Failed to import test module: test_check_migration_collisions
Traceback (most recent call last):
File ".../test_check_migration_collisions.py", line 12, in <module>
import pytest
ModuleNotFoundError: No module named 'pytest'
The tests use no pytest-specific features (just bare assert + plain
class). Sibling test_sweep_cf_decide.py in the same dir already uses
unittest.TestCase. Convert this one to match: drop the pytest import,
make TestMigrationFileRe inherit from unittest.TestCase.
unittest.TestLoader.discover() requires TestCase subclasses for
auto-discovery, so the fix is two lines (drop import, add base).
Bare assert statements work fine inside TestCase methods.
Verified: `python3 -m unittest scripts.ops.test_check_migration_collisions -v`
runs all 9 tests, all pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two PRs targeting staging can each add a migration with the same
numeric prefix (e.g. 044_*.up.sql). Each passes CI independently.
They collide at merge time. Worst case: second migration silently
doesn't apply and prod schema drifts from what the code expects.
Caught manually 2026-04-30 during PR #2276 rebase: 044_runtime_image_pins
collided with 044_platform_inbound_secret from RFC #2312. This workflow
makes that detection automatic at PR-open time.
How it works:
scripts/ops/check_migration_collisions.py runs on every PR that
touches workspace-server/migrations/**. For each new/modified
migration filename, extracts the numeric prefix and checks:
1. Does the base branch already have a DIFFERENT migration file with
the same prefix? (PR branched off an old base, base advanced and
another PR landed the same number — needs rebase.)
2. Is another OPEN PR (not this one) also adding a migration with
the same prefix? (Race-window collision — both pass CI separately,
would collide at merge time.)
Either case → exit 1 with a clear ::error:: message naming the
conflicting PR(s) so the author knows what to renumber.
Implementation notes:
- Uses git ls-tree (not working-tree walk) so it works against any
base ref without checkout.
- Uses gh pr diff --name-only per open PR, bounded by `gh pr list
--limit 100`. ~30s worst case for a busy repo, <5s normally.
- --diff-filter=AM picks up Added or Modified — renaming a migration
in place is also flagged (intentional; renaming migrations isn't
safe).
- Same filename in both PR and base = no collision (PR is editing
in-place, fine).
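The filename classifier at the core of both checks, sketched (pattern
assumed from the filenames shown, e.g. 044_runtime_image_pins.up.sql):

```python
import re

MIGRATION_FILE_RE = re.compile(r"^(\d+)_\w+\.(up|down)\.sql$")

def prefix_of(filename: str) -> str | None:
    m = MIGRATION_FILE_RE.match(filename)
    return m.group(1) if m else None

def collides(new_file: str, existing: list[str]) -> list[str]:
    prefix = prefix_of(new_file)
    if prefix is None:
        return []
    return [f for f in existing
            if f != new_file and prefix_of(f) == prefix]  # same prefix, other file
```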
Tests:
scripts/ops/test_check_migration_collisions.py — 9 cases on the
regex classifier (the load-bearing piece). End-to-end git/gh path
is exercised by running the workflow against real PRs.
Hard-gates Tier 1 item 1 (#2341). Cheapest, cleanest gate. Catches
one specific class of merge-time foot-gun automatically.
Refs hard-gates discussion 2026-04-30. Tier 1 of 4 (others tracked
in #2342, #2343, #2344).
Issue: scripts/dev-start.sh assumed `go` was on PATH; on a fresh dev
box without Go installed, line 111 (`go run ./cmd/server`) failed
with `go: not found` and the script bailed before printing the
readiness banner. The script's own prerequisite list (line 13-21)
said "Go 1.25+" but there was no signpost between "open the doc" and
"command not found."
Fix: detect `go` via `command -v`. If present, keep the existing
`go run` path (fast iteration, attaches to local log). If not,
fall back to `docker compose up -d --build platform` which uses the
published platform container — slower first run but the script
still works without forcing the dev to install Go just to read logs.
Either path leaves /health on :8080 so the rest of the script's
wait loop is unchanged.
If both paths fail, the error message names the install URL
(https://go.dev/dl/) and the fallback diagnostic (`/tmp/molecule-platform.log`)
so the dev has a single, actionable next step.
Verified: `sh -n` syntax check passes.
Closes#2329 item 2.
CP's tenant-delete cascade removes the DNS record (with sweep-cf-orphans
as a backstop) but does NOT delete the underlying Cloudflare Tunnel.
Each E2E provision creates one Tunnel named `tenant-<slug>`; without
cleanup these accumulate indefinitely on the account, consuming the
tunnel quota and cluttering the dashboard.
Observed 2026-04-30: dozens of `tenant-e2e-canvas-*` tunnels in Down
state with zero replicas, weeks past their tenant's deletion. Same
class of bug as the DNS-records leak that drove sweep-cf-orphans
(controlplane#239).
Parallel-shape to sweep-cf-orphans:
- Same dry-run-by-default + --execute pattern
- Same MAX_DELETE_PCT safety gate (default 90% — higher than DNS
sweep's 50% because tenant-shaped tunnels are orphans by design)
- Same schedule/dispatch hardening (hard-fail on missing secrets
when scheduled, soft-skip when dispatched)
- Cron offset to :45 to avoid CF API bursts colliding with the DNS
sweep at :15
Decision rules (in order):
1. Name doesn't match `tenant-<slug>` → keep (unknown — never sweep
tunnels that might belong to platform infra).
2. Tunnel has active connections (status=healthy or non-empty
connections array) → keep (defense-in-depth: don't kill a live
tunnel even if CP forgot the org).
3. Slug ∈ {prod_slugs ∪ staging_slugs} → keep.
4. Otherwise → delete (orphan).
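The same rules as a Python sketch (the script is shell; tunnel fields
follow the Cloudflare API loosely):

```python
import re

TENANT_RE = re.compile(r"^tenant-(?P<slug>[a-z0-9-]+)$")

def decide(tunnel: dict, live_slugs: set[str]) -> str:
    m = TENANT_RE.match(tunnel.get("name", ""))
    if not m:
        return "keep"      # unknown shape: never sweep possible platform infra
    if tunnel.get("status") == "healthy" or tunnel.get("connections"):
        return "keep"      # defense-in-depth: never kill a live tunnel
    if m.group("slug") in live_slugs:
        return "keep"      # tenant still exists on prod or staging
    return "delete"        # orphan
```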
Verified by:
- shell syntax check (bash -n)
- YAML lint
- Decide-logic offline smoke (7 cases, all pass)
- End-to-end dry-run smoke with stubbed CP + CF APIs
Required secrets (added to existing org-secrets):
CF_API_TOKEN must include account:cloudflare_tunnel:edit
scope (separate from zone:dns:edit used by
sweep-cf-orphans — same token if scope is
broad, or a new token if narrowly scoped).
CF_ACCOUNT_ID account that owns the tunnels (visible in
dash.cloudflare.com URL path).
CP_PROD_ADMIN_TOKEN reused from sweep-cf-orphans.
CP_STAGING_ADMIN_TOKEN reused from sweep-cf-orphans.
Note: CP-side root cause (tenant-delete should cascade to tunnel
delete) is in molecule-controlplane and worth fixing separately. This
janitor is the operational backstop in the meantime — same pattern
applied to DNS records when the same root cause was unaddressed.
Mirrors PR-C's Upload migration: replaces the docker-cp tar-stream
extraction with a streaming HTTP GET to the workspace's own
/internal/file/read endpoint. Closes the SaaS gap for downloads —
without this PR, GET /workspaces/:id/chat/download still returns 503
on Railway-hosted SaaS even after A+B+C+F land.
Stacks: PR-A #2313 → PR-B #2314 → PR-C #2315 → PR-F #2319 → this PR.
Why a single broad /internal/file/read instead of /internal/chat/download:
Today's chat_files.go::Download already accepts paths under any of the
four allowed roots {/configs, /workspace, /home, /plugins} — it's not
strictly chat. Future PRs (template export, etc.) will reuse this
endpoint via the same forward pattern; reusing avoids three near-
identical handlers (one per domain) with duplicated path-safety logic.
Path safety is duplicated on platform + workspace sides — defence in
depth via two parallel checks, not "trust the workspace."
Changes:
* workspace/internal_file_read.py — Starlette handler. Validates path
(must be absolute, under allowed roots, no traversal, canonicalises
  cleanly; see the sketch after this list). lstat (not stat) so a
  symlink at the path doesn't redirect the read. Streams via
  FileResponse (no buffering). Mirrors Go's
  contentDispositionAttachment for the Content-Disposition header.
* workspace/main.py — registers GET /internal/file/read alongside the
POST /internal/chat/uploads/ingest from PR-B.
* scripts/build_runtime_package.py — adds internal_file_read to
TOP_LEVEL_MODULES so the publish-runtime cascade rewrites its
imports correctly. Also includes the PR-B additions
(internal_chat_uploads, platform_inbound_auth) since this branch
was rooted before PR-B's drift-gate fix; merge-clean alphabetic
additions.
* workspace-server/internal/handlers/chat_files.go — Download
rewritten as streaming HTTP GET forward. Resolves workspace URL +
platform_inbound_secret (same shape as Upload), builds GET request
with path query param, propagates response headers (Content-Type /
Content-Length / Content-Disposition) + body. Drops archive/tar
+ mime imports (no longer needed). Drops Docker-exec branch entirely
— Download is now uniform across self-hosted Docker and SaaS EC2.
* workspace-server/internal/handlers/chat_files_test.go — replaces
TestChatDownload_DockerUnavailable (stale post-rewrite) with 4
new tests:
- TestChatDownload_WorkspaceNotInDB → 404 on missing row
- TestChatDownload_NoInboundSecret → 503 on NULL column
(with RFC #2312 detail in body)
- TestChatDownload_ForwardsToWorkspace_HappyPath → forward shape
(auth header, GET method, /internal/file/read path) + headers
propagated + body byte-for-byte
- TestChatDownload_404FromWorkspacePropagated → 404 from
workspace propagates (NOT remapped to 500)
Existing TestChatDownload_InvalidPath path-safety tests preserved.
* workspace/tests/test_internal_file_read.py — 21 tests covering
_validate_path matrix (absolute, allowed roots, traversal, double-
slash, exact-match-on-root), 401 on missing/wrong/no-secret-file
bearer, 400 on missing path/outside-root/traversal, 404 on missing
file, happy-path streaming with correct Content-Type +
Content-Disposition, special-char escaping in Content-Disposition,
symlink-redirect-rejection (lstat-not-stat protection).
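The path-validation core, sketched (the shipped handler layers bearer auth
and streaming on top; exact checks may differ):

```python
import os

ALLOWED_ROOTS = ("/configs", "/workspace", "/home", "/plugins")

def _validate_path(path: str) -> str | None:
    if not path.startswith("/"):
        return None                             # must be absolute
    canonical = os.path.normpath(path)
    if canonical != path and canonical + "/" != path:
        return None                             # rejects traversal + double-slash
    if not any(canonical == r or canonical.startswith(r + "/")
               for r in ALLOWED_ROOTS):
        return None                             # outside the four allowed roots
    if os.path.islink(canonical):               # lstat-backed check: a symlink
        return None                             # at the path must not redirect
    return canonical
```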
Test results:
* go test ./internal/handlers/ ./internal/wsauth/ — green
* pytest workspace/tests/ — 1292 passed (was 1272 before PR-D)
Refs #2312 (parent RFC), #2308 (chat upload+download 503 incident).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #2265 renamed the harness trace endpoint and event name; sync the
cross-repo scripts/README.md to match.
Closes#2270
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>