From e8b5f409bedbdf01d40ac38885459981bdcf3bea Mon Sep 17 00:00:00 2001
From: "molecule-ai[bot]" <276602405+molecule-ai[bot]@users.noreply.github.com>
Date: Fri, 24 Apr 2026 01:58:31 +0000
Subject: [PATCH] test(handlers): add 5 TestKI005 terminal guard regression tests (#1938)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* chore: sync staging to main — 1188 commits, 5 conflicts resolved (#1743)

* fix(docs): update architecture + API reference paths for workspace-server rename

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix: update workspace script comments for workspace-template → workspace rename

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix: ChatTab comment path for workspace-server rename

Co-Authored-By: Claude Opus 4.6 (1M context)

* test: add BatchActionBar unit tests (7 tests)

Covers: render threshold, count badge, action buttons, clear selection, ConfirmDialog trigger, ARIA toolbar role.

Co-Authored-By: Claude Opus 4.6 (1M context)

* chore: update publish workflow name + document staging-first flow

The default branch is now staging for both molecule-core and molecule-controlplane. PRs target staging; the CEO merges staging → main to promote to production.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(ci): update working-directory for workspace-server/ and workspace/ renames

- platform-build: working-directory platform → workspace-server
- golangci-lint: working-directory platform → workspace-server
- python-lint: working-directory workspace-template → workspace
- e2e-api: working-directory platform → workspace-server
- canvas-deploy-reminder: fix duplicate if: key (merged into a single condition)

Co-Authored-By: Claude Opus 4.6 (1M context)

* chore: add mol_pk_ and cfut_ to pre-commit secret scanner

Partner API keys (mol_pk_*) and Cloudflare tokens (cfut_*) are now caught by the pre-commit hook alongside sk-ant-, ghp_, AKIA.

Co-Authored-By: Claude Opus 4.6 (1M context)

* chore(canvas): enable Turbopack for dev server — faster HMR

next dev --turbopack for significantly faster dev-server startup and hot module replacement. The build script is unchanged (Turbopack for next build is still experimental).

Co-Authored-By: Claude Opus 4.6 (1M context)

* feat(db): schema_migrations tracking — migrations only run once

Adds a schema_migrations table that records which migration files have been applied. On boot, only new migrations execute — previously applied ones are skipped. This eliminates:

- Re-running all 33 migrations on every restart
- Risk of non-idempotent DDL failing on restart
- Unnecessary log noise from re-applying unchanged schema

First boot auto-populates the tracking table with all existing migrations. Subsequent boots only apply new ones.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(scheduler): strip CRLF from cron prompts on insert/update (closes #958)

Windows CRLF in org-template prompt text caused empty agent responses and tripped the phantom-producer detection. Strips \r at the handler level before DB persist, plus a one-time migration to clean existing rows.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(security): strip current_task from public GET /workspaces/:id (closes #955)

current_task exposes live agent instructions to any caller with a valid workspace UUID. Also strips last_sample_error and workspace_dir from the public endpoint. These fields remain available through authenticated workspace-specific endpoints.

Co-Authored-By: Claude Opus 4.6 (1M context)
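For illustration, a minimal sketch of the strip described above (the type and helper names here are placeholders, not the actual molecule-core code):

    package handlers // illustrative sketch; assumed names throughout

    // The public GET marshals a reduced view type: current_task,
    // last_sample_error and workspace_dir simply have no counterpart
    // in the response struct, so they can never serialize by accident.
    type Workspace struct {
        ID, Name, Status, CurrentTask, LastSampleError, WorkspaceDir string
        ActiveTasks                                                  int
    }

    type publicWorkspace struct {
        ID          string `json:"id"`
        Name        string `json:"name"`
        Status      string `json:"status"`
        ActiveTasks int    `json:"active_tasks"` // still public: the busy signal
    }

    func toPublic(w Workspace) publicWorkspace {
        return publicWorkspace{w.ID, w.Name, w.Status, w.ActiveTasks}
    }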
* chore(canvas): initialize shadcn/ui — components.json + cn utility

Sets up the shadcn/ui CLI so new components can be added with `npx shadcn add <component>`. Uses new-york style, zinc base color, no CSS variables (matches the existing Tailwind-only approach). Adds clsx + tailwind-merge for the cn() utility.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(security): GLOBAL memory delimiter spoofing + pin MCP npm version

SAFE-T1201 (#807): Escape the [MEMORY prefix in GLOBAL memory content on write to prevent delimiter-spoofing prompt injection. Content is stored as "[_MEMORY " so it renders as text, not structure, when wrapped with the real delimiter on read.

SAFE-T1102 (#805): Pin @molecule-ai/mcp-server@1.0.0 in .mcp.json.example. Prevents supply-chain attacks via unpinned npx -y.

Co-Authored-By: Claude Opus 4.6 (1M context)

* test: schema_migrations tracking — 4 cases (first boot, re-boot, mixed, down.sql filter)

Co-Authored-By: Claude Opus 4.6 (1M context)

* test: verify current_task + last_sample_error + workspace_dir stripped from public GET

Co-Authored-By: Claude Opus 4.6 (1M context)

* test: GLOBAL memory delimiter spoofing escape + LOCAL scope untouched

- TestCommitMemory_GlobalScope_DelimiterSpoofingEscaped: verifies the [MEMORY prefix is escaped to [_MEMORY before DB insert (SAFE-T1201, #807)
- TestCommitMemory_LocalScope_NoDelimiterEscape: LOCAL scope stored verbatim

Co-Authored-By: Claude Opus 4.6 (1M context)

* feat(security): Phase 35.1 — SG lockdown script for tenant EC2 instances

Restricts tenant EC2 port 8080 ingress to Cloudflare IP ranges only, blocking direct-IP access. Supports two modes:

1. Lock to CF IPs (Worker deployment): 14 IPv4 CIDR rules
2. Close ingress entirely (Tunnel deployment): removes 0.0.0.0/0 only

Usage:
  bash scripts/lockdown-tenant-sg.sh --sg-id sg-xxxxx
  bash scripts/lockdown-tenant-sg.sh --sg-id sg-xxxxx --close-ingress
  bash scripts/lockdown-tenant-sg.sh --sg-id sg-xxxxx --dry-run

Co-Authored-By: Claude Opus 4.6 (1M context)

* ci: update GitHub Actions to current stable versions (closes #780)

- golangci/golangci-lint-action@v4 → v9
- docker/setup-qemu-action@v3 → v4
- docker/setup-buildx-action@v3 → v4
- docker/build-push-action@v5 → v6

Co-Authored-By: Claude Opus 4.6 (1M context)

* docs(opencode): RFC 2119 — 'should not' → 'must not' for SAFE-T1201 warning (closes #861)

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(canvas): degraded badge WCAG AA contrast — amber-400 → amber-300 (closes #885)

amber-400 on zinc-900 is 5.4:1 (AA pass). amber-300 is 6.9:1 (AA+AAA pass) and matches the rest of the amber usage in WorkspaceNode (currentTask, error detail, badge chip).

Co-Authored-By: Claude Opus 4.6 (1M context)

* feat(platform): 409 guard on /hibernate when active_tasks > 0 (closes #822)

Phase 35.1 / #799 security condition C3 — prevents an operator from accidentally killing a mid-task agent. Behavior:

- active_tasks == 0 → proceed as before
- active_tasks > 0 && ?force=true → log [WARN] + proceed
- active_tasks > 0 && no force → 409 with {error, active_tasks}

2 new tests: TestHibernateHandler_ActiveTasks_Returns409, TestHibernateHandler_ActiveTasks_ForceTrue_Returns200.

Co-Authored-By: Claude Opus 4.6 (1M context)

* feat(platform): track last_outbound_at for silent-workspace detection (closes #817)

Sub-issue of #795 (phantom-busy post-mortem). Adds a last_outbound_at TIMESTAMPTZ column to workspaces, bumped asynchronously on every successful outbound A2A call from a real workspace (canvas + system callers are skipped). Exposed in the GET /workspaces/:id response as "last_outbound_at". PM/Dev Lead orchestrators can now detect workspaces that have gone silent despite being online (> 2h silent + active cron = phantom-busy warning).

Co-Authored-By: Claude Opus 4.6 (1M context)
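For illustration, the bump might look like this (the helper name and SQL are assumptions, not the shipped code):

    package a2a // illustrative sketch

    import (
        "context"
        "database/sql"
        "log"
        "time"
    )

    // bumpLastOutbound runs after each successful outbound A2A call.
    // Fire-and-forget so the hot path never blocks on bookkeeping.
    func bumpLastOutbound(db *sql.DB, workspaceID string) {
        go func() {
            ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
            defer cancel()
            if _, err := db.ExecContext(ctx,
                `UPDATE workspaces SET last_outbound_at = now() WHERE id = $1`,
                workspaceID); err != nil {
                log.Printf("[WARN] last_outbound_at bump failed for %s: %v", workspaceID, err)
            }
        }()
    }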
* feat(workspace): snapshot secret scrubber (closes #823)

Sub-issue of #799, security condition C4. Standalone module in workspace/lib/snapshot_scrub.py with three public functions:

- scrub_content(str) → str: regex-based redaction of secret patterns
- is_sandbox_content(str) → bool: detect run_code tool output markers
- scrub_snapshot(dict) → dict: walk memories, scrub each, drop sandbox entries

Patterns covered: sk-ant-/sk-proj-, ghp_/ghs_/github_pat_, AKIA, cfut_, mol_pk_, ctx7_, Bearer, env-var assignments, base64 blobs ≥33 chars. 21 unit tests, 100% coverage on new code.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(security): cap webhook + config PATCH bodies (H3/H4)

Two HIGH-severity DoS surfaces: both handlers read the entire HTTP body with io.ReadAll(r.Body) and no upper bound, so a caller streaming a multi-gigabyte request could exhaust memory on the tenant instance before we even validated the JSON.

H3 (Discord webhook): wrap Body in io.LimitReader with a 1 MiB cap. Discord Interactions payloads are well under 10 KiB in practice.

H4 (workspace config PATCH): wrap Body in http.MaxBytesReader with a 256 KiB cap. Real configs are <10 KiB; jsonb handles the cap comfortably.

Returns 413 Request Entity Too Large on overflow.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(security): C4 — close AdminAuth fail-open race on hosted-SaaS fresh install

Pre-launch review blocker. AdminAuth's Tier-1 fail-open fired whenever the workspace_auth_tokens table was empty — including the window between a hosted tenant EC2 booting and the first workspace being created. In that window, every admin-gated route (POST /org/import, POST /workspaces, POST /bundles/import, etc.) was reachable without a bearer, letting an attacker pre-empt the first real user by importing a hostile workspace into a freshly provisioned instance.

Fix: fail-open is now ONLY applied when ADMIN_TOKEN is unset (self-hosted dev with zero auth configured). Hosted SaaS always sets ADMIN_TOKEN at provision time, so the branch never fires in prod and requests with no bearer get 401 even before the first token is minted. Tier-2 / Tier-3 paths unchanged.

The old TestAdminAuth_684_FailOpen_AdminTokenSet_NoGlobalTokens test was codifying exactly this bug (asserting 200 on fresh install with ADMIN_TOKEN set). Renamed and flipped to TestAdminAuth_C4_AdminTokenSet_FreshInstall_FailsClosed, asserting 401.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(security): scrub workspace-server token + upstream error logs

Two findings from the pre-launch log-scrub audit:

1. handlers/workspace_provision.go:548 logged `token[:8]` — the exact H1 pattern that panicked on short keys. Even with a length guard, leaking 8 chars of an auth token into centralized logs shortens the search space for anyone who gets log-read access. Now logs only `len(token)` as a liveness signal.

2. provisioner/cp_provisioner.go:101 fell back to logging the raw control-plane response body when the structured {"error":"..."} field was absent. If the CP ever echoed request headers (Authorization) or a portion of user-data back in an error path, the bearer token would end up in our tenant-instance logs. Now logs the byte count only; the structured error remains in place for the happy path.

Also caps the read at 64 KiB via io.LimitReader to prevent log-flood DoS from a compromised upstream.

Co-Authored-By: Claude Opus 4.7 (1M context)
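The two caps in stdlib terms, as a sketch (function names are illustrative; the caps and mechanisms are the ones named above):

    package handlers // illustrative sketch

    import (
        "io"
        "net/http"
    )

    const (
        webhookBodyCap = 1 << 20   // 1 MiB (H3): Discord payloads are <10 KiB
        configBodyCap  = 256 << 10 // 256 KiB (H4): real configs are <10 KiB
    )

    // H3: io.LimitReader silently stops at the cap; the handler then
    // fails JSON validation on a truncated body.
    func readWebhookBody(r *http.Request) ([]byte, error) {
        return io.ReadAll(io.LimitReader(r.Body, webhookBodyCap))
    }

    // H4: http.MaxBytesReader errors past the cap, which the handler
    // maps to 413 Request Entity Too Large.
    func readConfigPatch(w http.ResponseWriter, r *http.Request) ([]byte, error) {
        return io.ReadAll(http.MaxBytesReader(w, r.Body, configBodyCap))
    }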
* fix(security): tenant CPProvisioner attaches CP bearer on all calls

Completes the C1 integration (PR #50 on molecule-controlplane). The CP now requires Authorization: Bearer on all three /cp/workspaces/* endpoints; without this change the tenant-side Start/Stop/IsRunning calls would all 401 (or 404 when the CP's routes refused to mount) and every workspace provision from a SaaS tenant would silently fail.

Reads MOLECULE_CP_SHARED_SECRET, falling back to PROVISION_SHARED_SECRET so operators can use one env-var name on both sides of the wire. An empty value is a no-op: self-hosted deployments with no CP, or with a CP that doesn't gate /cp/workspaces/*, keep working as before.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(canvas): add 15s fetch timeout on API calls

The pre-launch audit flagged api.ts as missing a timeout on every fetch. A slow or hung CP response would leave the UI spinning indefinitely with no way for the user to abort — effectively a client-side DoS. 15s is long enough for real CP queries (the slowest observed is the Stripe portal redirect at ~3s) and short enough that a stalled backend surfaces as a clear error with a retry affordance.

Uses AbortSignal.timeout (widely supported since 2023) so the abort propagates through React Query / SWR consumers cleanly.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(e2e): stop asserting current_task on public workspace GET (#966)

PR #966 intentionally stripped current_task, last_sample_error, and workspace_dir from the public GET /workspaces/:id response to avoid leaking task bodies to anyone with a workspace bearer. The E2E smoke test hadn't caught up — it was still asserting "current_task":"..." on the single-workspace GET, which made every post-#966 CI run fail with '60 passed, 2 failed'.

Swap the per-workspace asserts to check active_tasks (still exposed, the canonical busy signal) and keep the list-endpoint check that proves admin-auth'd callers still see current_task end-to-end.

Co-Authored-By: Claude Opus 4.7 (1M context)

* docs: 2026-04-19 SaaS prod migration notes

Captures the 10-PR staging→main cutover: what shipped, the three new Railway prod env vars (PROVISION_SHARED_SECRET / EC2_VPC_ID / CP_BASE_URL), and the sharp edge for existing tenants — their containers pre-date PR #53, so they still need MOLECULE_CP_SHARED_SECRET added manually (or a re-provision) before the new CPProvisioner's outbound bearer works. Also includes a post-deploy verification checklist and rollback plan.

Co-Authored-By: Claude Opus 4.7 (1M context)

* feat(ws-server): pull env from CP on startup

Paired with molecule-controlplane PR #55 (GET /cp/tenants/config). Lets existing tenants heal themselves when we rotate or add a CP-side env var (e.g. MOLECULE_CP_SHARED_SECRET landing earlier today) without any ssh or re-provision.

Flow: main() calls refreshEnvFromCP() before any other os.Getenv read. The helper reads MOLECULE_ORG_ID + ADMIN_TOKEN from the baked-in user-data env, GETs {MOLECULE_CP_URL}/cp/tenants/config with those credentials, and applies the returned string map via os.Setenv so downstream code (CPProvisioner, etc.) sees the fresh values.

Best-effort semantics:

- self-hosted / no MOLECULE_ORG_ID → no-op (return nil)
- CP unreachable / non-200 → log + return error (main keeps booting)
- oversized values (>4 KiB each) rejected to avoid env pollution
- body read capped at 64 KiB

Once this image hits GHCR, the 5-minute tenant auto-updater picks it up, the container restarts, the refresh runs, and every tenant has MOLECULE_CP_SHARED_SECRET within ~5 minutes — no operator toil.

Also fixes workspace-server/.gitignore so `server` no longer matches the cmd/server package dir — it was only meant to ignore the compiled binary, but the pattern was too broad. Anchored to `/server`.

Co-Authored-By: Claude Opus 4.7 (1M context)
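A condensed sketch of that flow (the org-ID header name is an assumption; the endpoint, env vars, and caps are the ones from the commit):

    package main // illustrative sketch

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func refreshEnvFromCP() error {
        orgID := os.Getenv("MOLECULE_ORG_ID")
        if orgID == "" {
            return nil // self-hosted: no-op
        }
        req, err := http.NewRequest("GET", os.Getenv("MOLECULE_CP_URL")+"/cp/tenants/config", nil)
        if err != nil {
            return err
        }
        req.Header.Set("Authorization", "Bearer "+os.Getenv("ADMIN_TOKEN"))
        req.Header.Set("X-Molecule-Org-ID", orgID) // header name assumed
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err // main() logs and keeps booting
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("cp config: status %d", resp.StatusCode)
        }
        var env map[string]string
        // body read capped at 64 KiB
        if err := json.NewDecoder(io.LimitReader(resp.Body, 64<<10)).Decode(&env); err != nil {
            return err
        }
        for k, v := range env {
            if len(v) > 4<<10 {
                continue // oversized values rejected to avoid env pollution
            }
            os.Setenv(k, v)
        }
        return nil
    }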
* feat(canary): smoke harness + GHA verification workflow (Phase 2)

Post-deploy verification for staging tenant images. Runs against the canary fleet after each publish-workspace-server-image build — catches auto-update breakage (à la today's E2E current_task drift) before it propagates to the prod tenant fleet that auto-pulls :latest every 5 min.

scripts/canary-smoke.sh iterates a space-separated list of canary base URLs (paired with their ADMIN_TOKENs) and checks:

- /admin/liveness reachable with admin bearer (tenant boot OK)
- /workspaces list responds (wsAuth + DB path OK)
- /memories/commit + /memories/search round-trip (encryption + scrubber)
- /events admin read (AdminAuth C4 path)
- /admin/liveness without bearer returns 401 (C4 fail-closed regression)

.github/workflows/canary-verify.yml runs after publish succeeds:

- 6-min sleep (the tenant auto-updater pulls every 5 min)
- bash scripts/canary-smoke.sh with secrets pulled from repo settings
- on failure: writes a Step Summary flagging that :latest should be rolled back to the prior known-good digest

A Phase 3 follow-up will split the publish workflow so only :staging- ships initially, and canary-verify's green gate is what promotes :staging- → :latest. This commit lands the test gate alone so we have something running against tenants immediately.

Secrets to set in GitHub repo settings before this workflow can run:

- CANARY_TENANT_URLS (space-separated list)
- CANARY_ADMIN_TOKENS (same order as URLs)
- CANARY_CP_SHARED_SECRET (matches staging CP PROVISION_SHARED_SECRET)

Co-Authored-By: Claude Opus 4.7 (1M context)

* feat(canary): gate :latest tag promotion on canary verify green (Phase 3)

Completes the canary release train. Before this, publish-workspace-server-image.yml pushed both :staging- and :latest on every main merge — meaning the prod tenant fleet auto-pulled every image immediately, before any post-deploy smoke test. A broken image (think: this morning's E2E current_task drift, but shipped at 3am instead of caught in CI) would have fanned out to every running tenant within 5 min.

Now:

- the publish workflow pushes :staging- ONLY
- canary tenants are configured to track :staging-; they pick up the new image on their next auto-update cycle
- canary-verify.yml runs the smoke suite (Phase 2) after the sleep
- on green: a new promote-to-latest job uses crane to remotely retag :staging- → :latest for both the platform and tenant images
- prod tenants auto-update to the newly retagged :latest within their usual 5-min window
- on red: :latest stays frozen on the prior good digest; prod is untouched

crane is pulled onto the runner (~4 MB, GitHub release) rather than doing a docker-daemon retag, so the workflow doesn't need a privileged runner.

Rollback: if canary passed but something surfaces post-promotion, the operator runs "crane tag ghcr.io/molecule-ai/platform: latest" manually. A follow-up can wrap that in a Phase 4 admin endpoint / script.

Co-Authored-By: Claude Opus 4.7 (1M context)
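The workflow shells out to the crane CLI; for illustration, the same validate-retag-verify dance via go-containerregistry's crane package (a sketch, not what the job actually runs):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/go-containerregistry/pkg/crane"
    )

    // promote retags :staging-<sha> to :latest remotely (no layer
    // pull/push) and verifies the digest actually moved.
    func promote(repo, stagingTag string) error {
        src := fmt.Sprintf("%s:%s", repo, stagingTag)
        want, err := crane.Digest(src)
        if err != nil {
            return fmt.Errorf("src tag missing: %w", err)
        }
        if err := crane.Tag(src, "latest"); err != nil {
            return err
        }
        got, err := crane.Digest(repo + ":latest")
        if err != nil {
            return err
        }
        if got != want {
            return fmt.Errorf(":latest digest mismatch after retag: %s != %s", got, want)
        }
        return nil
    }

    func main() {
        if err := promote("ghcr.io/molecule-ai/platform", "staging-4c1d56e"); err != nil {
            log.Fatal(err)
        }
    }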
* feat(canary): rollback-latest script + release-pipeline doc (Phase 4)

Closes the canary loop with the escape hatch and a single place to read about the whole flow.

scripts/rollback-latest.sh uses crane to retag :latest ← :staging- for BOTH the platform and tenant images. Pre-checks that the target tag exists and verifies the :latest digest after the move, so a bad ops typo doesn't silently promote the wrong thing. Prod tenants auto-update to the rolled-back digest within their 5-min cycle. Exit codes: 0 = both retagged, 1 = registry/tag error, 2 = usage error.

docs/architecture/canary-release.md is the one-page map of the pipeline: how PR → main → staging- → canary smoke → :latest promotion works end-to-end, how to add a canary tenant, how to roll back, and what this gate explicitly does NOT catch (prod-only data, config drift, cross-tenant bugs).

No code changes in the CP or workspace-server — this PR is shell + docs only, so it's safe to land independently of the other Phase {1,1.5,2,3} PRs still in review.

Co-Authored-By: Claude Opus 4.7 (1M context)

* test(ws-server): cover CPProvisioner — auth, env fallback, error paths

A post-merge audit flagged cp_provisioner.go as the only new file from the canary/C1 work without test coverage. Fills the gap:

- NewCPProvisioner_RequiresOrgID — self-hosted without MOLECULE_ORG_ID refuses to construct (avoids silent phone-home to the prod CP).
- NewCPProvisioner_FallsBackToProvisionSharedSecret — the operator ergonomics of using one env-var name on both sides of the wire.
- AuthHeader noop + happy path — the bearer is only set when the secret is set.
- Start_HappyPath — end-to-end POST to a stubbed CP, bearer forwarded, instance_id parsed out of the response.
- Start_Non201ReturnsStructuredError — when the CP returns a structured {"error":"…"}, that message surfaces to the caller.
- Start_NoStructuredErrorFallsBackToSize — regression gate for the anti-log-leak change from PR #980: the raw upstream body must NOT appear in the error, only the byte count.

Co-Authored-By: Claude Opus 4.7 (1M context)

* perf(scheduler): collapse empty-run bump to single RETURNING query

The phantom-producer detector (#795) was doing UPDATE + SELECT in two roundtrips — first incrementing consecutive_empty_runs, then re-reading to check the stale threshold. Switch to UPDATE ... RETURNING so the post-increment value comes back in one query.

Called once per schedule per cron tick. At 100 tenants × dozens of schedules per tenant, the halved DB traffic on the empty-response path is measurable, not just cosmetic.

Also now properly logs if the bump itself fails (previously it silently swallowed the ExecContext error and still ran the SELECT, which would confuse debugging).

Co-Authored-By: Claude Opus 4.7 (1M context)
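The shape of that collapse, sketched (table/column names match the commit; the helper name is illustrative):

    package scheduler // illustrative sketch

    import (
        "context"
        "database/sql"
    )

    // bumpEmptyRuns increments and reads back in one roundtrip via
    // RETURNING, replacing the old UPDATE-then-SELECT pair.
    func bumpEmptyRuns(ctx context.Context, db *sql.DB, scheduleID string) (int, error) {
        var n int
        err := db.QueryRowContext(ctx,
            `UPDATE schedules
                SET consecutive_empty_runs = consecutive_empty_runs + 1
              WHERE id = $1
              RETURNING consecutive_empty_runs`,
            scheduleID).Scan(&n)
        return n, err // caller logs err instead of swallowing it
    }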
* feat(canvas): /orgs landing page for post-signup users

The CP's Callback handler redirects every new WorkOS session to APP_URL/orgs, but canvas had no such route — new users hit the canvas Home component, which tried to call /workspaces on a tenant that doesn't exist yet, and saw a confusing error.

This PR plugs that gap with a dedicated landing page that:

- bounces anonymous visitors back to /cp/auth/login
- shows zero-org users a slug-picker (POST /cp/orgs, refresh)
- shows status + CTA for each existing org:
  * awaiting_payment → amber "Complete payment" → /pricing?org=…
  * running → emerald "Open" → https://{slug}.moleculesai.app
  * failed → "Contact support" → mailto
  * provisioning → read-only "provisioning…"
- surfaces errors inline with a Retry button

Deliberately server-light: one GET /cp/orgs, no WebSocket, no canvas store hydration. The goal is to move the user from signup to either Stripe Checkout or their tenant URL with one click each.

Closes the last UX gap between the BILLING_REQUIRED gate landing on the CP and real users being able to complete a signup today.

Co-Authored-By: Claude Opus 4.7 (1M context)

* feat(canvas): post-checkout UX — Stripe success lands on /orgs with banner

Three small polish items that together close the signup-to-running-tenant flow for real users:

1. The Stripe success_url now points at /orgs?checkout=success instead of the current page (was /pricing). The old behavior left people staring at plan cards with no indication payment went through — the new behavior drops them right onto their org list, where they can watch the status flip.

2. /orgs shows a green "Payment confirmed, workspace spinning up" banner when it sees ?checkout=success, then clears the query param via replaceState so a reload doesn't show it again.

3. /orgs now polls every 5s while any org is awaiting_payment or provisioning. Users see the Stripe webhook's effect live — no manual refresh needed — and once every org settles, the polling stops so idle tabs don't hammer /cp/orgs.

Paired with PR #992 (the /orgs page itself), this makes the end-to-end flow on BILLING_REQUIRED=true deployments feel right: /pricing → Stripe → /orgs?checkout=success → banner → live poll → "Open" button when org.status transitions to running.

Co-Authored-By: Claude Opus 4.7 (1M context)

* test(canvas): bump billing test for /orgs success_url

* fix(ci): clone sibling plugin repo so publish-workspace-server-image builds

Publish has been failing since the 2026-04-18 open-source restructure (#964's merge) because workspace-server/Dockerfile still COPYs ./molecule-ai-plugin-github-app-auth/ but the restructure moved that code out to its own repo. Every main merge since has produced a "failed to compute cache key: /molecule-ai-plugin-github-app-auth: not found" error — prod images haven't moved.

Fix: add an actions/checkout step that fetches the plugin repo into the build context before docker build runs. Private-repo safe: uses the PLUGIN_REPO_PAT secret (fine-grained PAT with Contents:Read on Molecule-AI/molecule-ai-plugin-github-app-auth). Falls back to the default GITHUB_TOKEN if the plugin repo is public.

Ops: set the repo secret PLUGIN_REPO_PAT before the next main merge, or publish will fail with a 404 on the checkout step.

Also gitignores the cloned dir so local dev builds don't accidentally commit it.

Co-Authored-By: Claude Opus 4.7 (1M context)

* ci(promote-latest): workflow_dispatch to retag :staging- → :latest

Escape hatch for the initial rollout window (the canary fleet isn't yet provisioned, so canary-verify.yml's automatic promotion doesn't fire) AND for manual rollback scenarios.

Uses the default GITHUB_TOKEN, which carries write:packages on repo-owned GHCR images, so no new secrets are needed. crane handles the remote retag without pulling or pushing layers. Validates the src tag exists before retagging + verifies the :latest digest post-retag so a typo can't silently promote the wrong image.

Trigger from Actions → promote-latest → Run workflow → enter the short sha (e.g. "4c1d56e").

Co-Authored-By: Claude Opus 4.7 (1M context)
* ci(promote-latest): run on self-hosted mac mini (GH-hosted quota blocked)

* ci(promote-latest): suppress brew cleanup that hits perm-denied on shared runner

* feat(canvas): Phase 5 — credit balance pill + low-balance banner

Adds the UI surface for the credit system to /orgs:

- CreditsPill next to each org row. Tone shifts from zinc → amber at 10% of plan credits remaining, to red at zero.
- LowCreditsBanner appears under the pill for running orgs when the balance crosses thresholds: overage_used > 0 → "overage active"; balance <= 0 → "out of credits, upgrade"; trial tail → "trial almost out".
- Pure helpers extracted to lib/credits.ts so formatCredits, pillTone, and bannerKind are unit-tested without jsdom.

The backend List query now returns credits_balance / plan_monthly_credits / overage_used_credits / overage_cap_credits so no second round-trip is needed.

Co-Authored-By: Claude Opus 4.7 (1M context)

* feat(canvas): ToS gate modal + us-east-2 data residency notice

Wraps /orgs in a TermsGate that polls /cp/auth/terms-status on mount and overlays a blocking modal when the current terms version hasn't been accepted yet. "I agree" POSTs /cp/auth/accept-terms and dismisses the modal; the backend records IP + UA as GDPR Art. 7 proof-of-consent.

Also adds a short data residency notice under the page header: workspaces run in AWS us-east-2 (Ohio, US). An EU region selector is a future lift once the infra is provisioned there.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(scheduler): defer cron fires when workspace busy instead of skipping (#969)

Previously, the scheduler skipped cron fires entirely when a workspace had active_tasks > 0 (#115). This caused permanent cron misses for workspaces kept perpetually busy by the 5-min Orchestrator pulse — work crons (pick-up-work, PR review) were skipped on every fire because the agent was always processing a delegation. Measured impact on Dev Lead: 17 context-deadline-exceeded timeouts in 2 hours, ~30% of inter-agent messages silently dropped.

Fix: when the workspace is busy, poll every 10s for up to 2 minutes waiting for idle. If idle within the window, fire normally. If still busy after 2 min, fall back to the original skip behavior.

This is a minimal, safe change:

- No new goroutines or channels
- Same fire path once idle
- Bounded wait (2 min max, won't block the scheduler pool)
- Falls back to skip if the workspace never becomes idle

Co-Authored-By: Claude Opus 4.6 (1M context)
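A sketch of the bounded wait (names are illustrative; the 10s/2-min constants are the ones above, and like the real change it adds no goroutines):

    package scheduler // illustrative sketch

    import (
        "context"
        "time"
    )

    // waitForIdle polls every 10s for up to 2 minutes. true means the
    // workspace went idle within the window (fire normally); false
    // means fall back to the original skip behavior.
    func waitForIdle(ctx context.Context, isBusy func() bool) bool {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if !isBusy() {
                return true
            }
            select {
            case <-ctx.Done():
                return false
            case <-time.After(10 * time.Second):
            }
        }
        return false
    }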
* fix(mcp): scrub secrets in commit_memory MCP tool path (#838 sibling)

PR #881 closed SAFE-T1201 (#838) on the HTTP path by wiring redactSecrets() into MemoriesHandler.Commit — but the sibling code path on the MCP bridge (MCPHandler.toolCommitMemory) was left with only the TODO comment. Agents calling commit_memory via the MCP tool bridge are the PRIMARY attack vector for #838 (a confused / prompt-injected agent pipes raw tool-response text containing plain-text credentials into agent_memories, leaking into shared TEAM scope). The HTTP path is only exercised by canvas UI posts, so the MCP gap was the hotter one.

Change: workspace-server/internal/handlers/mcp.go:725

-  TODO(#838): run _redactSecrets(content) before insert — plain-text
-  API keys from tool responses must not land in the memories table.
+  SAFE-T1201 (#838): scrub known credential patterns before persistence…
+  content, _ = redactSecrets(workspaceID, content)

Reuses redactSecrets (same package) so there's no duplicated pattern list — a future-added pattern in memories.go automatically covers the MCP path too.

Tests added in mcp_test.go:

- TestMCPHandler_CommitMemory_SecretInContent_IsRedactedBeforeInsert — exercises three patterns (env-var assignment, Bearer token, sk-…) and uses sqlmock's WithArgs to bind the exact REDACTED form, so a regression (removing the redactSecrets call) fails with arg-mismatch rather than silently persisting the secret.
- TestMCPHandler_CommitMemory_CleanContent_PassesThrough — regression guard; benign content must NOT be altered by the redactor.

NOTE: unable to run `go test -race ./...` locally (this container has no Go toolchain). The change is mechanical reuse of an already-shipped function in the same package; CI must validate. The sqlmock patterns mirror the existing TestMCPHandler_CommitMemory_LocalScope_Success test exactly.

Co-Authored-By: Claude Opus 4.7
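The WithArgs binding technique, reduced to a self-contained sketch (the stub helpers and SQL here stand in for the real memories.go code):

    package handlers // illustrative sketch

    import (
        "database/sql"
        "regexp"
        "testing"

        "github.com/DATA-DOG/go-sqlmock"
    )

    var skPattern = regexp.MustCompile(`sk-ant-[A-Za-z0-9-]+`)

    // stub: the real pattern list lives in memories.go
    func redactSecrets(content string) string {
        return skPattern.ReplaceAllString(content, "[REDACTED]")
    }

    func commitMemory(db *sql.DB, wsID, content string) error {
        content = redactSecrets(content) // the call under test
        _, err := db.Exec(`INSERT INTO agent_memories (workspace_id, content) VALUES ($1, $2)`, wsID, content)
        return err
    }

    func TestCommitMemory_SecretRedactedBeforeInsert(t *testing.T) {
        db, mock, err := sqlmock.New()
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()

        // Binding the exact redacted form means a regression that drops
        // the redactSecrets call fails with arg-mismatch instead of
        // silently persisting the secret.
        mock.ExpectExec(`INSERT INTO agent_memories`).
            WithArgs("ws-1", "token is [REDACTED]").
            WillReturnResult(sqlmock.NewResult(1, 1))

        if err := commitMemory(db, "ws-1", "token is sk-ant-abc123"); err != nil {
            t.Fatal(err)
        }
        if err := mock.ExpectationsWereMet(); err != nil {
            t.Errorf("secret reached the DB unredacted: %v", err)
        }
    }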
* fix(ci): move canary-verify to self-hosted runner

GitHub-hosted ubuntu-latest runs on this repo hit "recent account payments have failed or your spending limit needs to be increased" — the same root cause as the publish + CodeQL + molecule-app workflow moves earlier this quarter. canary-verify was the last one still on ubuntu-latest.

Switches both jobs to [self-hosted, macos, arm64]. The crane install switched from the Linux tarball to brew (matches promote-latest.yml's install pattern + avoids /usr/local/bin write perms on the shared mac mini).

Co-Authored-By: Claude Opus 4.7 (1M context)

* test(canvas): pin AbortSignal timeout regression + cover /orgs landing page

Two independent test additions that harden the surface freshly landed on staging via PRs #982 (canvas fetch timeout), #992 (/orgs landing), #994 (post-checkout redirect to /orgs).

canvas/src/lib/__tests__/api.test.ts (+74 lines, 7 new tests)

- GET/POST/PATCH/PUT/DELETE each pass an AbortSignal to fetch
- TimeoutError (DOMException name=TimeoutError) propagates to the caller
- Each request installs its own signal — no shared module-level controller that would allow one slow request to cancel an unrelated fast one

This is the hardening nit I flagged in my APPROVE-w/-nit review of fix/canvas-api-fetch-timeout. Landing as a follow-up now that #982 is in staging.

canvas/src/app/__tests__/orgs-page.test.tsx (+251 lines, new file, 10 tests)

- Auth guard: signed-out → redirectToLogin and no /cp/orgs fetch
- Error state: failed /cp/orgs → error message + Retry button
- Empty list: CreateOrgForm renders
- CTA by status: running → "Open" link targets {slug}.moleculesai.app; awaiting_payment → "Complete payment" → /pricing?org=; failed → "Contact support" mailto
- Post-checkout: ?checkout=success renders CheckoutBanner AND history.replaceState scrubs the query param
- Fetch contract: /cp/orgs called with credentials:include + AbortSignal

Local baseline on origin/staging tip 845ac47:
- canvas vitest: 50 files / 778 tests, all green
- canvas build: clean, /orgs route present (2.83 kB / 105 kB first-load)

Co-Authored-By: Claude Opus 4.7

* test(canvas): cover /orgs 5s polling on in-flight orgs

The test docstring promised polling coverage but I'd only wired the describe-block header, not the actual tests. Closing that gap — vitest fake timers drive three cases:

- `provisioning` org → a 2nd fetch fires after a 5.1s advance
- all `running` → no 2nd fetch even after a 10s advance
- `awaiting_payment` org, unmount before the timer fires → no post-unmount fetch (cleanup correctly clears the pollTimer)

The unmount case is the meaningful one: without it, a fast nav-away leaves the 5s interval chasing the CP forever. page.tsx L97-99 does clear the timer; the test pins the contract.

Local baseline on origin/staging tip 845ac47 + this branch:
- canvas vitest: 50 files / 781 tests, all green (+3 vs prior commit)
- canvas build: clean

Co-Authored-By: Claude Opus 4.7

* ci(codeql): cover main + staging via workflow

GitHub's UI-configured "Code quality" scan only fires on the default branch (staging), which leaves every staging→main promotion PR unscanned. The "On push and pull requests to" field in the UI has no dropdown; multi-branch scanning on private repos without GHAS isn't available there.

A workflow file gives us the control we can't get in the UI: triggers on push + pull_request for both branches. Runs on the same self-hosted mac mini via [self-hosted, macos, arm64].

upload: never — GHAS isn't enabled on this repo, so the SARIF upload API 403s. Keep results locally, filter to error+warning severity, fail the PR check on findings, publish the SARIF as a workflow artifact. Flipping upload: never → always after GHAS is enabled (if ever) is a one-line change.

Picks up the review-flagged improvements from the earlier closed PR:

- jq install step (brew, no assumption it's present)
- severity filter (error+warning only, drops noisy note-level)
- set -euo pipefail
- SARIF glob (the file name doesn't match the matrix language id)

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(bundle/exporter): add rows.Err() after child workspace enumeration

Silent data loss on mid-cursor DB errors — partial sub-workspace bundles were returned instead of surfacing the iteration error. Adds a rows.Err() check after the SELECT id FROM workspaces query in Export(), mirroring the pattern already used in scheduler.go and handlers with similar recursion patterns.

Closes: R1 MISSING-ROWS-ERR findings (bundle/exporter.go)

Co-Authored-By: Claude Opus 4.7
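The pattern, for reference (query and column names are illustrative; the rows.Err() placement is the point):

    package bundle // illustrative sketch

    import (
        "context"
        "database/sql"
    )

    func childWorkspaceIDs(ctx context.Context, db *sql.DB, parentID string) ([]string, error) {
        rows, err := db.QueryContext(ctx,
            `SELECT id FROM workspaces WHERE parent_id = $1`, parentID) // parent_id column assumed
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var ids []string
        for rows.Next() {
            var id string
            if err := rows.Scan(&id); err != nil {
                return nil, err
            }
            ids = append(ids, id)
        }
        // The fix: without this, a mid-cursor error ends the loop
        // silently and a partial list is returned as if complete.
        return ids, rows.Err()
    }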
* fix(a11y): WorkspaceNode font floor, contrast, focus rings (Cycle 10)

C1: skills badge spans text-[7px] → text-[10px]; "+N more" overflow text-[7px] text-zinc-500 → text-[10px] text-zinc-400

C2: Team section label text-[7px] text-zinc-600 → text-[10px] text-zinc-400

H4: status label text-[9px] → text-[10px]; active-tasks count text-[9px] text-amber-300/80 → text-[10px] text-amber-300 (remove the opacity modifier per the design-system contrast rule); current-task text text-[9px] text-amber-300/70 → text-[10px] text-amber-300

L1: add focus-visible:ring-2 focus-visible:ring-blue-500/70 to the Restart button (independently Tab-focusable inside the role="button" wrapper) and to the Extract-from-team button in TeamMemberChip; TeamMemberChip's role="button" div already has the focus ring (COVERED, no change)

762/762 tests pass · build clean

Co-Authored-By: Claude Sonnet 4.6

* fix(ci): replace sleep 360 with health-check poll in canary-verify (#1013)

The canary-verify workflow blocked the self-hosted runner for a fixed 6 minutes regardless of whether the canaries had already updated. This wastes the runner slot when canaries update in 2-3 minutes.

Fix: poll each canary's /health endpoint every 30s for up to 7 min. Exit early when all canaries report the expected SHA. Falls back to proceeding after timeout — the smoke suite validates regardless. Typical time saving: ~3-4 minutes per canary-verify run.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(gate-1): remove unused fireEvent import (#1011)

Mechanical lint fix. github-code-quality[bot] flagged an unused import on line 18 — fireEvent is imported but never referenced in the test file. Removing it clears the code-quality gate without changing any test behaviour.

Co-Authored-By: Claude Opus 4.7

* feat: event-driven cron triggers + auto-push hook for agent productivity

Three changes to boost agent throughput:

1. Event-driven cron triggers (webhooks.go): GitHub issues/opened events fire all "pick-up-work" schedules immediately. PR review/submitted events fire "PR review" and "security review" schedules. Uses next_run_at=now() so the scheduler picks them up on the next tick.

2. Auto-push hook (executor_helpers.py): after every task completion, agents automatically push unpushed commits and open a PR targeting staging. Guards: only on non-protected branches with unpushed work. Uses the /usr/local/bin/git and /usr/local/bin/gh wrappers with a baked-in GH_TOKEN. Never crashes the agent — all errors are logged and execution continues.

3. Integration (claude_sdk_executor.py): auto_push_hook() is called in the _execute_locked finally block after commit_memory.

Closes the productivity gap where agents wrote code but never pushed, and where work crons only fired on timers instead of reacting to events.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix: disable schedules when workspace is deleted (#1027)

When a workspace is deleted (status set to 'removed'), its schedules remained enabled, causing the scheduler to keep firing cron jobs for non-existent containers. Adds a cascade disable query alongside the existing token revocation and canvas layout cleanup.

Co-Authored-By: Claude Opus 4.6 (1M context)
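A sketch of that cascade disable (column names assumed; the real query runs alongside the token revocation and layout cleanup):

    package handlers // illustrative sketch

    import (
        "context"
        "database/sql"
    )

    // Called from the delete path so the scheduler stops firing crons
    // at a container that no longer exists.
    func disableSchedules(ctx context.Context, tx *sql.Tx, workspaceID string) error {
        _, err := tx.ExecContext(ctx,
            `UPDATE schedules SET enabled = false WHERE workspace_id = $1`,
            workspaceID)
        return err
    }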
* fix: stop hardcoding CLAUDE_CODE_OAUTH_TOKEN in required_env (#1028)

The provisioner was unconditionally writing CLAUDE_CODE_OAUTH_TOKEN into config.yaml's required_env for all claude-code workspaces. When the baked token expired, preflight rejected every workspace — even those with a valid token injected via the secrets API at runtime.

Changes:
- workspace_provision.go: remove the hardcoded required_env for the claude-code and codex runtimes; tokens are injected at container start via secrets
- workspace_provision_test.go: flip the assertion to reject the hardcoded token

Co-Authored-By: Claude Opus 4.6 (1M context)

* test: add cascade schedule disable tests for #1027

- TestWorkspaceDelete_DisablesSchedules — leaf workspace delete disables its schedules
- TestWorkspaceDelete_CascadeDisablesDescendantSchedules — parent+child+grandchild cascade
- TestWorkspaceDelete_ScheduleDisableOnlyTargetsDeletedWorkspace — negative test

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix: multiple platform handler bug fixes

- secrets.go: log RowsAffected errors instead of silently discarding them
- a2a_proxy.go: add a 60s safety timeout to the a2aClient HTTP client
- terminal.go: fix defer ordering — always close the WebSocket conn on error; only defer resp.Close() after a successful exec attach
- webhooks.go: add a shortSHA() helper to safely handle an empty HeadSHA

Co-Authored-By: Claude Opus 4.7

* feat(runtime): inject HMA memory instructions at platform level (#1047)

Every agent now gets hierarchical memory instructions in their system prompt automatically — no template configuration needed. The instructions cover commit_memory (LOCAL/TEAM/GLOBAL scopes), recall_memory, and when to use each proactively. Follows the same pattern as the A2A instructions: defined in executor_helpers.py, injected by _build_system_prompt() in the claude_sdk_executor.

Co-Authored-By: Claude Opus 4.6 (1M context)

* feat: seed initial memories from org template and create payload (#1050)

Adds a MemorySeed model and initial_memories support at three levels:

- POST /workspaces payload: seed memories on workspace creation
- org.yaml workspace config: per-workspace initial_memories with defaults fallback
- org.yaml global_memories: org-wide GLOBAL scope memories seeded on the first root workspace during import

Co-Authored-By: Claude Opus 4.6 (1M context)
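What the seed shape might look like as a sketch (only "initial_memories" and the three scopes come from the commit; the field names are guesses):

    package models // illustrative sketch, hypothetical field names

    // MemorySeed as it could appear in the POST /workspaces payload and
    // in org.yaml's initial_memories / global_memories blocks.
    type MemorySeed struct {
        Scope   string `json:"scope" yaml:"scope"` // LOCAL | TEAM | GLOBAL
        Content string `json:"content" yaml:"content"`
    }

    type CreateWorkspaceRequest struct {
        Name            string       `json:"name"`
        InitialMemories []MemorySeed `json:"initial_memories,omitempty"`
    }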
* feat(template): restructure molecule-dev org template to 39-agent hierarchy

A comprehensive rewrite of the Molecule AI dev team org template:

- Rename agents to the {team}-{role} convention (e.g., core-be, cp-lead, app-qa)
- Add 5 new team leads: Core Platform Lead, Controlplane Lead, App & Docs Lead, Infra Lead, SDK Lead
- Add new roles: Release Manager, Integration Tester, Technical Writer, Infra-SRE, Infra-Runtime-BE, SDK-Dev, Plugin-Dev
- Delete triage-operator and triage-operator-2 (leads own triage now)
- Set the default model to MiniMax-M2.7, tier 3, idle_interval_seconds 900
- Update org.yaml category_routing to the new agent names
- Add orchestrator-pulse schedules for all leads (*/5 cron)
- Add pick-up-work schedules for engineers (*/15 cron)
- Add qa-review schedules for QA agents (*/15 cron)
- Add security-scan schedules for security agents (*/30 cron)
- Add release-cycle and e2e-test schedules for the Release Manager and Integration Tester
- Update marketing agents with web-search MCP and media-generation capabilities
- All schedule prompts reference Molecule-AI/internal for PLAN.md and known-issues.md
- Un-ignore org-templates/molecule-dev/ in .gitignore for version tracking

Co-Authored-By: Claude Opus 4.6 (1M context)

* Fix test assertions to account for HMA instructions in system prompt

Mock get_hma_instructions in the exact-match tests so they don't break when HMA content is appended. Add a dedicated test for HMA inclusion.

Co-Authored-By: Claude Opus 4.6 (1M context)

* chore: gitignore org-templates/ and plugins/ entirely

These directories are cloned from their standalone repos (molecule-ai-org-template-*, molecule-ai-plugin-*) and should never be committed to molecule-core directly. Removes the !/org-templates/molecule-dev/ exception that allowed PR #1056 to land template files in the wrong repo.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(workspace-server): send X-Molecule-Admin-Token on CP calls

controlplane #118 + #130 made /cp/workspaces/* require a per-tenant admin_token header in addition to the platform-wide shared secret. Without it, every workspace provision / deprovision / status call now 401s. ADMIN_TOKEN is already injected into the tenant container by the controlplane's Secrets Manager bootstrap, so this is purely a header-plumbing change — no new config required on the tenant side.

## Change

- CPProvisioner carries adminToken alongside sharedSecret
- A new authHeaders method sets BOTH auth headers on every outbound request (the old authHeader is deleted — a single call site was misleading once the semantics changed)
- Empty values on either header are no-ops, so self-hosted / dev deployments without a real CP still work

## Tests

Renamed + expanded cp_provisioner_test cases:

- TestAuthHeaders_NoopWhenBothEmpty — self-hosted path
- TestAuthHeaders_SetsBothWhenBothProvided — prod happy path
- TestAuthHeaders_OnlyAdminTokenWhenSecretEmpty — transition window

Full workspace-server suite green.

## Rollout

The next tenant provision will ship an image with this commit merged. Existing tenants (none in prod right now — hongming was the only one and was purged earlier today) will auto-update via the 5-min image-pull cron.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix: GitHub token refresh — add WorkspaceAuth path for credential helper (#1068)

PR #729 tightened AdminAuth to require ADMIN_TOKEN, breaking the workspace credential helper, which called /admin/github-installation-token with a workspace bearer token. Tokens expired after 60 min with no refresh.

Fix: add /workspaces/:id/github-installation-token under WorkspaceAuth so any authenticated workspace can refresh its GitHub token. Keep the admin path as a backward-compatible alias. Update molecule-git-token-helper.sh to use the workspace-scoped path when WORKSPACE_ID is set.

Co-Authored-By: Claude Opus 4.6 (1M context)

* test(workspace-server): cover Stop/IsRunning/Close + auth-header + transport errors

Closes a review gap: pre-PR coverage on CPProvisioner was 37%. After this commit every exported method is exercised:

- NewCPProvisioner 100%
- authHeaders 100%
- Start 91.7% (remainder: the json.Marshal error path, unreachable with a fixed-type request struct)
- Stop 100% (new — header + path + error)
- IsRunning 100% (new — 4-state matrix + auth)
- Close 100% (new — contract no-op)

New cases assert that both auth headers (shared secret + admin_token) land on every outbound request, that transport failures surface clear errors on Start/Stop, and that IsRunning doesn't misreport on transport failure.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(workspace-server): IsRunning surfaces non-2xx + JSON errors

A pre-existing silent-failure path: IsRunning decoded CP responses regardless of HTTP status, so a CP 500 → empty body → State="" → returned (false, nil). The sweeper couldn't distinguish "workspace stopped" from "CP broken" and would leave a dead row in place.

## Fix

- Non-2xx → wrapped error that does NOT echo the body (CP 5xx bodies may contain echoed headers; leaking them into logs would expose the bearer)
- JSON decode error → wrapped error
- Transport error → now wrapped with a "cp provisioner: status:" prefix for easier log grepping

## Tests

+7 cases (5-status table + malformed JSON + existing transport). IsRunning coverage 100%; overall cp_provisioner at 98%.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(cp_provisioner): IsRunning returns (true, err) on transient failures

My #1071 made IsRunning return (false, err) on all error paths, but that breaks a2a_proxy, which depends on the Docker provisioner's (true, err) contract. Without this fix, any brief CP outage causes a2a_proxy to mark workspaces offline and trigger restart cascades across every tenant.

The contract now matches Docker.IsRunning:

- transport error → (true, err) — alive, degraded signal
- non-2xx response → (true, err) — alive, degraded signal
- JSON decode error → (true, err) — alive, degraded signal
- 2xx state != running → (false, nil)
- 2xx state == running → (true, nil)

healthsweep.go is also happy with this — it skips on err regardless.

Adds TestIsRunning_ContractCompat_A2AProxy as a regression guard that asserts each error path explicitly against the a2a_proxy expectations.

Co-Authored-By: Claude Opus 4.7 (1M context)
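The contract, sketched end-to-end (auth headers and the constructor are omitted; field names are illustrative, and the 64 KiB cap comes from the follow-up commit below):

    package provisioner // illustrative sketch

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    type CPProvisioner struct {
        baseURL string
        client  *http.Client
    }

    // Every error path reports (true, err): alive, degraded signal,
    // matching Docker.IsRunning so a brief CP outage can't cascade
    // restarts through a2a_proxy.
    func (p *CPProvisioner) IsRunning(id string) (bool, error) {
        resp, err := p.client.Get(p.baseURL + "/cp/workspaces/" + id)
        if err != nil {
            return true, fmt.Errorf("cp provisioner: status: %w", err) // transport
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode > 299 {
            return true, fmt.Errorf("cp provisioner: status: http %d", resp.StatusCode)
        }
        var body struct {
            State string `json:"state"`
        }
        if err := json.NewDecoder(io.LimitReader(resp.Body, 64<<10)).Decode(&body); err != nil {
            return true, fmt.Errorf("cp provisioner: status: %w", err) // decode
        }
        return body.State == "running", nil // the only (false, nil) path
    }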
* fix(cp_provisioner): cap IsRunning body read at 64 KiB

IsRunning used an unbounded json.NewDecoder(resp.Body).Decode on CP status responses. Start already caps its body read at 64 KiB (cp_provisioner.go:137) to defend against a misconfigured or compromised CP streaming a huge body and exhausting memory. IsRunning is called reactively per-request from a2a_proxy and periodically from healthsweep, so it's a hotter path than Start and arguably deserves the same defense more.

Adds TestIsRunning_BoundedBodyRead, which serves a body padded past the cap and asserts the decode still succeeds on the JSON prefix.

Follow-up to code-review Nit-2 on #1073.

Co-Authored-By: Claude Opus 4.7 (1M context)

* feat(canvas): /waitlist page with contact form

Adds the user-facing half of the beta gate: a page at /waitlist that the CP auth callback redirects users to when their email isn't on the allowlist. Collects email + optional name + use-case and POSTs to /cp/waitlist/request (backend landed in controlplane #150).

## Behavior

- No auto-pre-fill of the email from the URL query (CP's #145 dropped the ?email= param for the privacy reason; this test guards against a future regression on the client side).
- Client-side validates the email shape for instant feedback; the backend re-validates.
- Three UI states after submit:
  success → "your request is in" banner, form hidden
  dedup → softer "already on file" banner when the backend returns dedup=true (same 200, no 409, to avoid enumeration)
  error → inline banner with the backend message or a network fallback

## Tests

9 tests in __tests__/waitlist-page.test.tsx covering:

- default render + a11y (role=button, role=status, role=alert)
- URL-pre-fill privacy regression guard
- HTML5 + JS validation (empty, malformed)
- successful POST with a trimmed body
- dedup branch
- non-2xx with + without an error field
- network rejection

Follow-up to the beta-gate rollout on controlplane #145 / #150.

Co-Authored-By: Claude Opus 4.7 (1M context)

* chore(canvas): remove dead /waitlist page (lives in molecule-app)

#1080 added /waitlist to canvas, but canvas isn't served at app.moleculesai.app — it backs the tenant subdomains (acme.moleculesai.app etc.). The real /waitlist lives in the separate molecule-app repo, which is what the CP auth callback redirects to. molecule-app#12 has the real page + contact form wiring to /cp/waitlist/request. This canvas copy was never reachable and would only diverge.

Co-Authored-By: Claude Opus 4.7 (1M context)

* fix(org-import): limit concurrent Docker provisioning to 3 (#1084)

The org import fired all workspace provisioning goroutines concurrently, overwhelming Docker when creating 39+ containers. Containers timed out, leaving workspaces stuck in 'provisioning' with no schedules or hooks.

Fix:

- Add a provisionConcurrency=3 semaphore limiting concurrent Docker ops
- Increase workspaceCreatePacingMs from 50ms to 2000ms between siblings
- Pass the semaphore through the createWorkspaceTree recursion

With 39 workspaces at 3 concurrent + 2s pacing, import takes ~30s instead of timing out. Each workspace gets its full template: schedules, hooks, settings, hierarchy.

Co-Authored-By: Claude Opus 4.6 (1M context)
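The semaphore shape, for reference (a buffered channel; the two constants are from the commit, everything else is illustrative):

    package handlers // illustrative sketch

    import (
        "sync"
        "time"
    )

    const (
        provisionConcurrency    = 3
        workspaceCreatePacingMs = 2000
    )

    func provisionAll(ids []string, provision func(string)) {
        sem := make(chan struct{}, provisionConcurrency)
        var wg sync.WaitGroup
        for _, id := range ids {
            wg.Add(1)
            sem <- struct{}{} // blocks while 3 provisions are already in flight
            go func(id string) {
                defer wg.Done()
                defer func() { <-sem }()
                provision(id)
            }(id)
            time.Sleep(workspaceCreatePacingMs * time.Millisecond) // pacing between siblings
        }
        wg.Wait()
    }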
* fix: add ?purge=true hard-delete to DELETE /workspaces/:id (#1087)

Soft-delete (status='removed') leaves orphan DB rows and FK data forever. When ?purge=true is passed, after container cleanup the handler cascade-deletes all leaf FK tables and hard-removes the workspace row.

Co-Authored-By: Claude Opus 4.6 (1M context)

* chore: remove org-templates/molecule-dev from git tracking

This directory belongs in the dedicated repo Molecule-AI/molecule-ai-org-template-molecule-dev. It should be cloned locally for platform mounting, never committed to molecule-core. The .gitignore already blocks it.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(canvas): add NEXT_PUBLIC_ADMIN_TOKEN + CSP_DEV_MODE to docker-compose

Canvas needs the AdminAuth token to fetch /workspaces (gated since PR #729) and CSP_DEV_MODE to allow cross-port fetches in local Docker. These were added earlier but lost on nuke+rebuild because they weren't committed to staging.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(canvas): CSP_DEV_MODE + admin token for local Docker (#1052 follow-up)

Three changes that keep getting lost on nuke+rebuild:

1. middleware.ts: read the CSP_DEV_MODE env var to relax CSP in local Docker
2. api.ts: send the NEXT_PUBLIC_ADMIN_TOKEN header (AdminAuth on /workspaces)
3. Dockerfile: accept NEXT_PUBLIC_ADMIN_TOKEN as a build arg

All three are required for the canvas to work in local Docker, where canvas (port 3000) fetches from platform (port 8080) cross-origin.

Co-Authored-By: Claude Opus 4.6 (1M context)

* fix(canvas): make root layout dynamic so CSP nonce reaches Next scripts

Tenant page loads were failing with repeated CSP violations:

  Executing inline script violates ... script-src 'self' 'nonce-M2M4YTVh...' 'strict-dynamic'.

... because Next.js's bootstrap inline scripts were emitted without a nonce attribute. The middleware was generating per-request nonces correctly and sending them via `x-nonce` — but the layout was fully static, so Next.js cached the HTML once and served that cached bundle (no nonces baked in) for every request.

Fix: call `await headers()` in the root layout. That opts the tree into dynamic rendering AND signals Next.js to propagate the x-nonce value to its own generated