Commit Graph

3453 Commits

Author SHA1 Message Date
github-actions[bot]
5973c9bd63
Merge pull request #2356 from Molecule-AI/staging
staging → main: auto-promote b5bde03
2026-04-29 23:07:29 -07:00
Hongming Wang
b5bde0399a
Merge pull request #2354 from Molecule-AI/auto/issue-2339-pr3-activity-cursor
feat(activity): since_id cursor on GET /activity (#2339 PR 3)
2026-04-30 05:55:05 +00:00
Hongming Wang
a81b0e1e3d feat(activity): since_id cursor on GET /activity (#2339 PR 3)
Telegram getUpdates / Slack RTM shape: poll-mode workspaces pass the id
of the last activity_logs row they consumed; the server returns rows
strictly after that row in chronological (ASC) order. Existing callers
that don't pass since_id keep DESC + most-recent-N — backwards-compatible.

Cursor lookup is scoped by workspace_id so a caller cannot enumerate or
peek at another workspace's events by passing a UUID belonging to a
different workspace. Cross-workspace and pruned cursors both return
410 Gone — no information leak (caller cannot distinguish "row never
existed" from "row exists but you can't see it").

since_id + since_secs both apply (AND). When since_id is set the order
flips to ASC because polling consumers need recorded order; the
recent-feed shape (no since_id) keeps DESC.
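The filter/ordering rules above can be sketched as follows. This is an illustrative shape only — buildActivityQuery, the placeholder numbering, and the make_interval form are assumptions, not the real handler's code; the real cursor lookup resolving since_id to its created_at is also omitted.

```go
package main

import "fmt"

// buildActivityQuery sketches the rules: since_id and since_secs AND
// together, and a set cursor flips the ordering from DESC to ASC.
func buildActivityQuery(hasSinceID, hasSinceSecs bool) string {
	q := "SELECT * FROM activity_logs WHERE workspace_id = $1"
	arg := 2
	if hasSinceID {
		// $arg is the cursor row's created_at, already resolved via a
		// workspace-scoped lookup (the 410 path fires before this point)
		q += fmt.Sprintf(" AND created_at > $%d", arg)
		arg++
	}
	if hasSinceSecs {
		q += fmt.Sprintf(" AND created_at > now() - make_interval(secs => $%d)", arg)
	}
	if hasSinceID {
		return q + " ORDER BY created_at ASC" // recorded order for pollers
	}
	return q + " ORDER BY created_at DESC" // recent-feed shape
}

func main() {
	fmt.Println(buildActivityQuery(true, true))
}
```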

Tests:
- TestActivityHandler_SinceID_ReturnsNewerASC — cursor lookup → main
  query with cursorTime + ASC ordering.
- TestActivityHandler_SinceID_CursorNotFound_410 — pruned/unknown cursor.
- TestActivityHandler_SinceID_CrossWorkspaceCursor_410 — UUID belongs to
  another workspace, scoped lookup hides it (same 410 path, no leak).
- TestActivityHandler_SinceID_CombinedWithSinceSecs — placeholder index
  arithmetic with both filters.

Stacked on #2353 (PR 2: poll-mode short-circuit).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 22:51:52 -07:00
github-actions[bot]
c140ad28ae
Merge pull request #2352 from Molecule-AI/staging
staging → main: auto-promote 0b83faa
2026-04-29 22:48:04 -07:00
Hongming Wang
706a388806
Merge pull request #2353 from Molecule-AI/auto/issue-2339-pr2-poll-shortcircuit-v2
feat(a2a): poll-mode short-circuit in ProxyA2A (#2339 PR 2)
2026-04-30 05:29:03 +00:00
Hongming Wang
91a1d5377d feat(a2a): poll-mode short-circuit in ProxyA2A (#2339 PR 2)
Skip SSRF/dispatch and queue to activity_logs for delivery_mode=poll
workspaces. The polling agent (e.g. molecule-mcp-claude-channel on an
operator's laptop) consumes via GET /activity?since_id= in PR 3 — no
public URL needed.

Order: budget -> normalize -> lookupDeliveryMode short-circuit ->
resolveAgentURL. Normalizing before the short-circuit keeps the
JSON-RPC method name on the activity_logs row so the polling agent
can dispatch correctly.

Fail-closed-to-push: any DB error reading delivery_mode defaults to
push (loud + recoverable) rather than poll (silent drop).

Tests:
- TestProxyA2A_PollMode_ShortCircuits_NoSSRF_NoDispatch — core invariant:
  no resolveAgentURL, no Do(), records to activity_logs, returns 200
  {status:"queued",delivery_mode:"poll",method:"message/send"}.
- TestProxyA2A_PushMode_NoShortCircuit — push path unaffected; the agent
  server actually receives the request.
- TestProxyA2A_PollMode_FailsClosedToPush — DB error on mode lookup
  must NOT silently queue; falls through to the push path.

Stacked on #2348 (PR 1: schema + register flow).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 22:22:28 -07:00
Hongming Wang
3da2392f95
Merge pull request #2348 from Molecule-AI/auto/issue-2339-pr1-delivery-mode
feat(workspaces): delivery_mode column + poll-mode register flow (#2339 PR 1)
2026-04-30 05:18:03 +00:00
Hongming Wang
ec6e47cbe3
Merge pull request #2351 from Molecule-AI/auto/issue-2344-architecture-lint
test(arch): codify 4 module boundaries as architecture tests (#2344)
2026-04-30 05:16:09 +00:00
Hongming Wang
68f18424f5 test(arch): codify 4 module boundaries as architecture tests (#2344)
Hard gate #4: codified module boundaries as Go tests, so a new
contributor (or AI agent) can't silently land an import that crosses
a layer.

Boundaries enforced (one architecture_test.go per package):

- wsauth has no internal/* deps — auth leaf, must be unit-testable in
  isolation
- models has no internal/* deps — pure-types leaf, reverse dep would
  create cycles since most packages depend on models
- db has no internal/* deps — DB layer below business logic, must be
  testable with sqlmock without spinning up handlers/provisioner
- provisioner does not import handlers or router — unidirectional
  layering: handlers wires provisioner into HTTP routes; the reverse
  is a cycle

Each test parses .go files in its package via go/parser (no x/tools
dep needed) and asserts forbidden import paths don't appear. Failure
messages name the rule, the offending file, and explain WHY the
boundary exists so the diff reviewer learns the rule.
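The go/parser approach reduces to a check like this sketch (forbiddenImports and the example import paths are illustrative, not the repo's exact test code; the real tests also attach the WHY message):

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
	"strings"
)

// forbiddenImports parses one Go source in ImportsOnly mode (cheap, no
// x/tools dependency) and returns import paths matching a forbidden prefix.
func forbiddenImports(src, forbiddenPrefix string) ([]string, error) {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "arch_check.go", src, parser.ImportsOnly)
	if err != nil {
		return nil, err
	}
	var bad []string
	for _, imp := range f.Imports {
		path := strings.Trim(imp.Path.Value, `"`) // Path.Value keeps the quotes
		if strings.HasPrefix(path, forbiddenPrefix) {
			bad = append(bad, path)
		}
	}
	return bad, nil
}

func main() {
	src := "package wsauth\nimport \"example.com/platform/internal/db\"\n"
	bad, _ := forbiddenImports(src, "example.com/platform/internal/")
	fmt.Println(bad) // the real test would t.Errorf naming file, rule, and rationale
}
```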

Note: the original issue's first two proposed boundaries
(provisioner-no-DB, handlers-no-docker) don't match the codebase
today — provisioner already imports db (PR #2276 runtime-image
lookup) and handlers hold a *docker.Client directly (terminal,
plugins, bundle, templates). I picked the four boundaries that
actually hold; the first two are aspirational and would need a
refactor before they could be codified.

Hand-tested by injecting a deliberate wsauth -> orgtoken violation:
the gate fires red with the rule message before merge.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 22:12:58 -07:00
Hongming Wang
664bbd8899
Merge pull request #2350 from Molecule-AI/auto/issue-2342-continuous-synth-e2e
ci: continuous synthetic E2E against staging (#2342)
2026-04-30 05:08:35 +00:00
Hongming Wang
0b83faa33c
Merge pull request #2349 from Molecule-AI/auto/issue-2345-a2a-v02-compat-clean
fix(a2a): v0.2 → v0.3 compat shim at proxy edge (#2345)
2026-04-30 05:05:04 +00:00
Hongming Wang
db5d11ffca ci: continuous synthetic E2E against staging (#2342)
Hard gate Tier 2 item 2 of 4. Cron-driven full-lifecycle E2E that
catches regressions visible only at runtime — schema drift,
deployment-pipeline gaps, vendor outages, env-var rotations,
DNS / CF / Railway side-effects.

Empirical motivation from today:
  - #2345 (A2A v0.2 silent drop) — passed unit tests, broke at JSON-RPC
    parse layer between sender + receiver. Visible only when a sender
    exercises the full path. Now fixed by PR #2349, but a continuous
    E2E would have surfaced it within 20 min of the regression.
  - RFC #2312 chat upload — landed on the staging branch but never
    reached staging tenants because publish-workspace-server-image was
    main-only. Caught by manual dogfooding hours after deploy. Same pattern.

Both classes are invisible to PR-time CI. The continuous gate fires
every 20 min against a real staging tenant and surfaces regressions
within minutes.

Cadence: cron `0,20,40 * * * *` (3x/hour). Offsets the existing
sweep-cf-orphans (:15) and sweep-cf-tunnels (:45) so the three ops
don't burst CF/AWS APIs at the same minute. Concurrency group
prevents overlapping runs if one hangs.
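The cadence and serialization described above correspond to a trigger block of roughly this shape (the concurrency group name here is illustrative):

```yaml
on:
  schedule:
    - cron: "0,20,40 * * * *"   # :15 and :45 stay free for the CF sweeps
  workflow_dispatch:

concurrency:
  group: continuous-synth-e2e   # illustrative name
  cancel-in-progress: false     # queue behind a hung run, don't kill it
```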

Cost: ~$0.50-1/day GHA + pennies of staging tenant lifecycle.

Reuses existing tests/e2e/test_staging_full_saas.sh — no new harness
to maintain. Bounded at 10 min wall-clock (vs 15 min default) so
stuck runs fail fast rather than holding up the next firing.
Defaults to E2E_RUNTIME=langgraph (fastest cold start; the regression
classes this gate catches don't need hermes-specific paths). Operators
can dispatch with runtime=hermes when they want SDK-native coverage.

Schedule-vs-dispatch hardening: hard-fail on missing
CP_STAGING_ADMIN_API_TOKEN for cron firing (silent-skip would mask
real outages); soft-skip for operator dispatch.

Refs:
  - #2342 hard-gates Tier 2 item 2
  - #2345 (A2A v0.2 fix that this gate would have caught earlier)
  - #2335 / #2337 (deployment-pipeline gaps that this gate also catches)
2026-04-29 22:04:57 -07:00
Hongming Wang
140fc5fb10 fix(a2a): v0.2 → v0.3 compat shim at proxy edge (#2345)
Closes #2345.

## Symptom

Design Director silently dropped A2A briefs whose sender used the
v0.2 message format (`params.message.content` string) instead of v0.3
(`params.message.parts` part-list). The downstream a2a-sdk's v0.3
Pydantic validator rejected with "params.message.parts — Field
required" but the rejection only landed in tenant-side logs; the
sender saw HTTP 200/202 and assumed delivery.

UX Researcher therefore never received the kickoff. Multi-agent
pipeline silently idle.

## Fix

Convert at the proxy edge in normalizeA2APayload. Two cases handled,
one explicitly rejected:

  v0.2 string content   → wrap as [{kind: text, text: <content>}]
                          (the canonical v0.2 case from the dogfooding
                          report)
  v0.2 list content     → preserve list as parts (some older clients
                          put a list under `content`; treat as "client
                          meant parts, used wrong field name")
  v0.3 parts present    → no-op (hot path for normal traffic)
  Neither present       → return HTTP 400 with structured JSON-RPC
                          error pointing at the missing field
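The conversion table above can be sketched in Go with generic JSON maps — an illustrative shape only; the real normalizeA2APayload also builds the structured JSON-RPC 400 body:

```go
package main

import "fmt"

// normalizeMessage applies the compat table: v0.3 parts pass through,
// v0.2 string/list content is converted, anything else returns ok=false
// (the proxy then answers with a structured 400).
func normalizeMessage(msg map[string]any) (map[string]any, bool) {
	if _, ok := msg["parts"]; ok {
		return msg, true // v0.3 already — hot path, no-op
	}
	switch content := msg["content"].(type) {
	case string: // v0.2 string content → single text part
		msg["parts"] = []any{map[string]any{"kind": "text", "text": content}}
	case []any: // v0.2 list content → client meant parts, wrong field name
		msg["parts"] = content
	default:
		return msg, false // neither content nor parts (or unsupported type)
	}
	delete(msg, "content")
	return msg, true
}

func main() {
	m, ok := normalizeMessage(map[string]any{"content": "kickoff brief"})
	fmt.Println(ok, m["parts"])
}
```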

Why at the proxy edge: every workspace gets the compat for free
without each one bumping a2a-sdk separately. The SDK's own compat
adapter is strict about `parts` and rejects v0.2 senders.

Why reject loud on missing-both: pre-fix the SDK's Pydantic
rejection was post-handler-dispatch and invisible to the original
sender. Now misshapen payloads return a structured 400 to the actual
caller — kills the entire silent-drop class for this payload-shape
category.

## Tests

7 new cases on normalizeA2APayload (#2345) + 1 fixture update on the
existing _MissingMethodReturnsEmpty test:

  TestNormalizeA2APayload_ConvertsV02StringContentToParts
  TestNormalizeA2APayload_ConvertsV02ListContentToParts
  TestNormalizeA2APayload_PreservesV03Parts (hot path)
  TestNormalizeA2APayload_RejectsMessageWithNeitherContentNorParts
  TestNormalizeA2APayload_RejectsContentWithUnsupportedType
  TestNormalizeA2APayload_NoMessageNoCheck (e.g. tasks/list bypasses)

All 11 normalizeA2APayload tests pass + full handler suite (no
regressions).

## Refs

Hard-gates discussion: this is exactly the class of failure
(silent-drop on schema mismatch) that #2342 (continuous synthetic
E2E) would catch automatically. Tier 2 RFC item from #2345 (caller
gets structured JSON-RPC error on parse failure) is delivered above
via the loud-reject path.
2026-04-29 22:01:41 -07:00
github-actions[bot]
e284168c47
Merge pull request #2347 from Molecule-AI/staging
staging → main: auto-promote 21ed74c
2026-04-29 21:53:13 -07:00
Hongming Wang
d5b00d6ac1 feat(workspaces): delivery_mode column + poll-mode register flow (#2339 PR 1)
Adds workspaces.delivery_mode ('push', the default, or 'poll') and lets the register
handler accept poll-mode workspaces with no URL. This is the foundation
for the unified poll/push delivery design in #2339 — Telegram-getUpdates
shape for external runtimes that have no public URL.

What this PR does:

  - Migration 045: NOT NULL TEXT column, default 'push', CHECK constraint
    on the two valid values.
  - models.Workspace + RegisterPayload + CreateWorkspacePayload gain a
    DeliveryMode field. RegisterPayload.URL drops the `binding:"required"`
    tag — the handler now enforces it conditionally on the resolved mode.
  - Register handler: validates explicit delivery_mode if set; resolves
    effective mode (payload value, else stored row value, else push) AFTER
    the C18 token check; validates URL only when effective mode is push;
    persists delivery_mode in the upsert; returns it in the response;
    skips URL caching when payload.URL is empty.
  - CreateWorkspace handler: persists delivery_mode (defaults to push) in
    the same INSERT, validates it before any side effects.
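The resolution order (payload value, else stored row value, else push) and the conditional URL check sketch as follows — function names are illustrative, and the C18 token check that precedes resolution is omitted:

```go
package main

import (
	"errors"
	"fmt"
)

// resolveDeliveryMode: explicit payload value wins, else the stored
// row's value, else the schema default "push".
func resolveDeliveryMode(payloadMode, storedMode string) string {
	if payloadMode != "" {
		return payloadMode
	}
	if storedMode != "" {
		return storedMode
	}
	return "push"
}

// validateRegisterURL: URL is required only when the effective mode is
// push — poll-mode workspaces legitimately have no public URL.
func validateRegisterURL(mode, url string) error {
	if mode == "push" && url == "" {
		return errors.New("url is required for push-mode workspaces")
	}
	return nil
}

func main() {
	mode := resolveDeliveryMode("", "poll") // payload silent, row says poll
	fmt.Println(mode, validateRegisterURL(mode, ""))
}
```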

What this PR does NOT do (intentional, follow-up PRs):

  - PR 2: short-circuit ProxyA2A for poll-mode workspaces (skip SSRF +
    dispatch, log a2a_receive activity, return 200).
  - PR 3: since_id cursor on GET /activity for lossless polling.
  - Plugin v0.2 in molecule-mcp-claude-channel: cursor persistence + a
    register helper that creates poll-mode workspaces.

Backwards compatibility: every existing workspace stays push-mode (schema
default) with identical behavior. New tests:
TestRegister_PollMode_AcceptsEmptyURL,
TestRegister_PushMode_RejectsEmptyURL,
TestRegister_InvalidDeliveryMode,
TestRegister_PollMode_PreservesExistingValue. All existing register +
create tests updated to expect the new delivery_mode column in the
INSERT args.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 21:47:14 -07:00
Hongming Wang
21ed74c76a
Merge pull request #2340 from Molecule-AI/auto/issue-2332-capabilities-preamble
feat(prompt): Platform Capabilities preamble at top of system prompt (#2332 item 1)
2026-04-30 04:33:31 +00:00
Hongming Wang
4299475746 feat(prompt): Platform Capabilities preamble at top of system prompt
Closes #2332 item 1 (workspace awareness — agents don't surface
platform-native tools up front).

The dogfooding session surfaced that agents weren't using A2A
delegation, persistent memory, or send_message_to_user. The tools
were registered AND documented in the system prompt — but only in
sections #8 (Inter-Agent Communication) and #9 (Hierarchical Memory),
which agents read AFTER they've already started reasoning about a
plan from earlier sections.

This adds a tight inventory at section #1.5 (immediately after
Platform Instructions, before role-specific prompt files) — every
tool name + its short description in a bulleted block. Detailed
when_to_use docs in sections #8/#9 stay; this preamble is the
elevator pitch ("you have these"), the later sections are the
manual ("here's when and how").

Generated from `platform_tools.registry` ToolSpecs — every tool's
`name` + `short` flow through automatically, no manual sync. A new
`get_capabilities_preamble(mcp: bool)` helper in executor_helpers
mirrors the existing get_a2a_instructions / get_hma_instructions
pattern.

CLI-runtime agents (mcp=False) get an empty preamble — they see
_A2A_INSTRUCTIONS_CLI's hand-written subcommand vocabulary further
down, and the registry's MCP tool names would conflict.

Tests:
  - test_capabilities_preamble_appears_in_mcp_prompt: header present
  - test_capabilities_preamble_lists_every_registry_tool: every
    a2a + memory tool from the registry shows up (drift is caught at
    test time — adding a new tool to the registry surfaces here automatically)
  - test_capabilities_preamble_precedes_prompt_files: ordering
    invariant (toolkit before role docs)
  - test_capabilities_preamble_skipped_for_cli_runtime: empty when
    mcp=False

All 40 prompt + platform_tools tests pass.
2026-04-29 21:31:13 -07:00
github-actions[bot]
c62d2ac50c
Merge pull request #2264 from Molecule-AI/staging
staging → main: auto-promote 9d159f0
2026-04-30 04:29:11 +00:00
Hongming Wang
856ff89973
Merge pull request #2338 from Molecule-AI/auto/redeploy-main-concurrency-parity
ci: add concurrency block to redeploy-tenants-on-main for parity
2026-04-30 04:16:53 +00:00
Hongming Wang
360361a0ce ci: add concurrency block to redeploy-tenants-on-main for parity
Parity with #2337's redeploy-tenants-on-staging.yml. Both prod and
staging redeploys now have explicit serialization:

  group: redeploy-tenants-on-main          (per-workflow, global)
  group: redeploy-tenants-on-staging       (per-workflow, global)

cancel-in-progress: false on both — aborting a half-rolled-out fleet
would leave tenants stuck on whatever image they happened to be on
when cancelled. Better to finish the in-flight rollout before starting
the next one.
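The two groups above come from a block of this shape in each workflow file (a sketch from the message's description; the actual file contents are not shown here):

```yaml
concurrency:
  group: redeploy-tenants-on-main   # redeploy-tenants-on-staging in the sibling
  cancel-in-progress: false         # finish the in-flight rollout first
```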

Pre-fix this workflow relied on GitHub's implicit workflow_run queueing,
which is "probably fine" but not defensible — explicit > implicit for
load-bearing pipeline behavior. Picked up as a #2337 review nit
(architecture finding 1: concurrency asymmetry between the two
redeploy workflows).

No behavior change in the common case. The change matters only when
two main pushes land within seconds AND the first redeploy is still
mid-rollout — currently rare; will become more common once #2335
(staging-trigger publish) feeds main more frequently via auto-promote.
2026-04-29 21:14:41 -07:00
Hongming Wang
b8246d54a7
Merge pull request #2337 from Molecule-AI/auto/cicd-followups-concurrency-staging-redeploy
ci: serialize publish + auto-redeploy staging tenants (#2336 follow-ups)
2026-04-30 04:14:00 +00:00
Hongming Wang
b7291e006b ci: serialize publish + auto-redeploy staging tenants
Two follow-ups from #2335 review (tracked in #2336):

1. Add `concurrency:` block to publish-workspace-server-image.yml so
   two rapid staging pushes don't race the same :staging-latest retag.
   Group is per-branch (`${{ github.ref }}`) so staging and main can
   build in parallel — they produce different :staging-<sha> tags and
   last-write-wins on :staging-latest is acceptable across branches.
   `cancel-in-progress: false` keeps in-flight builds — partially-pushed
   images would break canary-fleet pin consistency.

2. Add redeploy-tenants-on-staging.yml. After #2335, every staging push
   produces a fresh :staging-latest, but existing tenants only pick it
   up on next reprovision. This workflow mirrors
   redeploy-tenants-on-main but for staging:
   - workflow_run-gated to branches: [staging]
   - target_tag default 'staging-latest' (vs 'latest' for prod)
   - CP_URL default https://staging-api.moleculesai.app
   - CP_STAGING_ADMIN_API_TOKEN repo secret (operator must set)
   - canary_slug empty by default — staging is itself the canary; no
     sub-canary needed inside it. Soak still applies if operator
     specifies a tenant for blast-radius control.

   Schedule-vs-dispatch hardening matches sweep-cf-orphans and
   sweep-cf-tunnels: hard-fail on auto-trigger when the secret is
   missing so misconfig doesn't silently leave staging tenants on
   stale code; soft-skip on operator dispatch.

Operator action required after merge:
  Add the CP_STAGING_ADMIN_API_TOKEN repo secret. Pull the value from
  staging-CP's CP_ADMIN_API_TOKEN env in Railway controlplane / staging
  environment. Until set, the auto-trigger will fail the workflow run
  (visible as red CI), surfacing the misconfiguration. Workflow runs
  only on staging publish-workspace-server-image success, so no extra
  load while it sits unconfigured.

Verification:
- YAML lint clean on both workflows.
- Reviewed redeploy-tenants-on-main as template; differences are scoped
  to staging-specific values (URL, tag, secret name) plus the
  harden-on-missing-secret pattern.

Refs #2335, #2336.
2026-04-29 21:11:45 -07:00
Hongming Wang
f21cb54ae4
Merge pull request #2335 from Molecule-AI/auto/publish-images-on-staging
ci: build workspace-server image on staging push (root-cause fix for stale staging tenants)
2026-04-30 04:03:18 +00:00
Hongming Wang
2e1cef324b ci: trigger publish-workspace-server-image on staging push too
Root cause: this workflow only triggered on `branches: [main]`, but
staging-CP pins TENANT_IMAGE=:staging-latest (verified via Railway).
:staging-latest was only retagged on main push, so:

  staging-branch code → never built → never reaches staging tenants
  staging-CP serves   → "yesterday's main" indefinitely

When staging→main was wedged (path-filter parity bug, canvas teardown
race — both fixed earlier today), :staging-latest stopped updating
entirely. RFC #2312 (chat upload HTTP-forward) landed on staging but
freshly-provisioned staging tenants kept failing chat upload because
they pulled pre-RFC-#2312 image. Verified by tearing down a fresh
tenant and observing the legacy "workspace container not running"
error from the docker-exec code path that RFC #2312 deleted.

Pre-2026-04-24 there was a related-but-different incident: TENANT_IMAGE
was a static :staging-<sha> pin that drifted 10 days behind. This new
incident is "the dynamic pin still drifts when its update workflow
doesn't fire."

Fix: add `staging` to the branches trigger. Tag policy is unchanged
(:staging-<sha> + :staging-latest on every push). canary-verify.yml
still runs on main push (workflow_run-gated to `branches: [main]`),
preserving the canary-verified :latest promotion for prod tenants.
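The fix itself reduces to one line in the workflow's push trigger (sketch of the described change, not the full file):

```yaml
on:
  push:
    branches: [main, staging]   # was [main]; staging added by this fix
```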

Steady state after this:
  - staging push → :staging-latest = staging-branch code → staging-CP
  - main push    → :staging-<sha> for canary, :staging-latest retag
                   (post-promote main code), and after canary green
                   → :latest for prod tenants

What this does NOT change:
  - canary-verify.yml flow (still main-only)
  - redeploy-tenants-on-main.yml (still rolls prod fleet on main push)
  - publish-canvas-image.yml (self-hosted standalone canvas; orthogonal)
  - The :latest tag (canary-verified main, unchanged)

What this does fix:
  - RFC #2312-class fixes that land on staging now actually reach
    staging tenants without waiting for staging→main promote.
  - The dogfooding observation "staging tenants seem to be running
    yesterday's code" disappears as a class.

Drive-by: also fixed the typo in the path-filter list (was
`publish-platform-image.yml`, the actual file is
`publish-workspace-server-image.yml`).
2026-04-29 21:00:56 -07:00
Hongming Wang
86d9cb8b55
Merge pull request #2334 from Molecule-AI/auto/chat-files-comment-update
docs(chat_files): update header — Download is HTTP-forward, not docker-cp
2026-04-30 03:32:06 +00:00
Hongming Wang
82f73b1fa3 docs(chat_files): update header — Download is HTTP-forward, not docker-cp
The header comment claimed:
  "file upload (HTTP-forward) + download (Docker-exec)"
and:
  "Download still uses the v1 docker-cp path; migrating it lives in
   the next PR in this stack"

Both wrong now. RFC #2312 PR-D landed the Download HTTP-forward path:
chat_files.go:336 builds an http.NewRequestWithContext to
${wsURL}/internal/file/read?path=<abs>, with the response streamed
back to the caller. The workspace-side Starlette handler is at
workspace/internal_file_read.py, mounted at workspace/main.py:440.

Update the header to reflect actual code: both upload + download are
HTTP-forward, share the same per-workspace platform_inbound_secret
auth, and work uniformly on local Docker and SaaS EC2.

Pure docs change — no behavior, no build/test impact.
2026-04-29 20:28:58 -07:00
Hongming Wang
75885d6017
Merge pull request #2333 from Molecule-AI/auto/a2a-queue-status-tier1
feat(a2a): per-queue-id status endpoint + per-message TTL (RFC #2331 Tier 1)
2026-04-30 03:24:16 +00:00
Hongming Wang
b6d223cd0a feat(a2a): per-queue-id status endpoint + per-message TTL (RFC #2331 Tier 1)
Closes the observability gap surfaced in #2329 item 5: callers received
queue_id in the 202 enqueue response but had no public lookup. The only
existing observability path was check_task_status (delegation-flavored
A2A only — joins via request_body->>'delegation_id'). Cross-workspace
peer-direct A2A had no observability after enqueue.

This PR ships RFC #2331's Tier 1: minimum viable observability + caller-
specified TTL. No schema migration — expires_at column already exists
(migration 042); only DequeueNext was honoring it, with no caller path
to populate it.

Two changes:

1. extractExpiresInSeconds(body) — new helper mirroring
   extractIdempotencyKey/extractDelegationIDFromBody. Pulls
   params.expires_in_seconds from the JSON-RPC body. Zero (the unset
   default) preserves today's infinite-TTL semantics. EnqueueA2A grew
   an expiresAt *time.Time parameter; the proxy callsite computes
   *time.Time from the extracted seconds and threads it through to
   the INSERT.

2. GET /workspaces/:id/a2a/queue/:queue_id — new public handler.
   Auth: caller's workspace token must match queue.caller_id OR
   queue.workspace_id, OR be an org-level token. 404 (not 403) on
   auth failure to avoid leaking queue_id existence. Response
   includes status/attempts/last_error/timestamps/expires_at; embeds
   response_body via LEFT JOIN against activity_logs when status=
   completed for delegation-flavored items.

What this does NOT change:
  - Drain semantics (heartbeat-driven dispatch).
  - Native-session bypass (claude-agent-sdk, hermes still skip queue).
  - Schema (column already exists).
  - MCP tools (delegate_task_async / check_task_status keep their
    contract; this is a parallel queue-id surface).

Tests:
  - 7 cases on extractExpiresInSeconds covering absent/positive/
    zero/negative/invalid-JSON/wrong-type/empty-params.
  - go vet + go build clean.
  - Full handlers test suite passes (no regressions from the
    EnqueueA2A signature change — only one production caller).

Tier 2 (cross-workspace stitch + webhook callback) and Tier 3
(controllerized lifecycle) deferred per RFC #2331.
2026-04-29 20:21:17 -07:00
Hongming Wang
d5d8de946f
Merge pull request #2330 from Molecule-AI/auto/dev-start-go-detection
fix(dev-start): detect missing Go, fall back to docker-compose platform
2026-04-30 03:06:53 +00:00
Hongming Wang
88da3d523b fix(dev-start): detect missing Go and fall back to docker-compose platform
Issue: scripts/dev-start.sh assumed `go` was on PATH; on a fresh dev
box without Go installed, line 111 (`go run ./cmd/server`) failed
with `go: not found` and the script bailed before printing the
readiness banner. The script's own prerequisite list (lines 13-21)
said "Go 1.25+" but there was no signpost between "open the doc" and
"command not found."

Fix: detect `go` via `command -v`. If present, keep the existing
`go run` path (fast iteration, attaches to local log). If not,
fall back to `docker compose up -d --build platform` which uses the
published platform container — slower first run but the script
still works without forcing the dev to install Go just to read logs.
Either path leaves /health on :8080 so the rest of the script's
wait loop is unchanged.

If both paths fail, the error message names the install URL
(https://go.dev/dl/) and the fallback diagnostic (`/tmp/molecule-platform.log`)
so the dev has a single, actionable next step.

Verified: `sh -n` syntax check passes.

Closes #2329 item 2.
2026-04-29 20:04:37 -07:00
Hongming Wang
16796431b9
Merge pull request #2328 from Molecule-AI/auto/sweep-cf-tunnel-orphans
feat(ops): sweep-cf-tunnels janitor — orphan tunnels accumulate forever
2026-04-30 02:45:16 +00:00
Hongming Wang
3a6d2f179d feat(ops): add sweep-cf-tunnels janitor — orphan Cloudflare Tunnels accumulate
CP's tenant-delete cascade removes the DNS record (with sweep-cf-orphans
as a backstop) but does NOT delete the underlying Cloudflare Tunnel.
Each E2E provision creates one Tunnel named `tenant-<slug>`; without
cleanup these accumulate indefinitely on the account, consuming the
tunnel quota and cluttering the dashboard.

Observed 2026-04-30: dozens of `tenant-e2e-canvas-*` tunnels in Down
state with zero replicas, weeks past their tenant's deletion. Same
class of bug as the DNS-records leak that drove sweep-cf-orphans
(controlplane#239).

Parallel-shape to sweep-cf-orphans:
  - Same dry-run-by-default + --execute pattern
  - Same MAX_DELETE_PCT safety gate (default 90% — higher than DNS
    sweep's 50% because tenant-shaped tunnels are orphans by design)
  - Same schedule/dispatch hardening (hard-fail on missing secrets
    when scheduled, soft-skip when dispatched)
  - Cron offset to :45 to avoid CF API bursts colliding with the DNS
    sweep at :15

Decision rules (in order):
  1. Name doesn't match `tenant-<slug>` → keep (unknown — never sweep
     tunnels that might belong to platform infra).
  2. Tunnel has active connections (status=healthy or non-empty
     connections array) → keep (defense-in-depth: don't kill a live
     tunnel even if CP forgot the org).
  3. Slug ∈ {prod_slugs ∪ staging_slugs} → keep.
  4. Otherwise → delete (orphan).
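The four rules, in order, sketch as a single decision function. Inputs are simplified for illustration; the real janitor derives the name, connection status, and known-slug sets from the CF and CP APIs:

```go
package main

import (
	"fmt"
	"strings"
)

// decide applies the sweep's decision rules in order and returns
// "keep" or "delete" for one tunnel.
func decide(name string, hasActiveConns bool, knownSlugs map[string]bool) string {
	slug, isTenant := strings.CutPrefix(name, "tenant-")
	if !isTenant || slug == "" {
		return "keep" // rule 1: not tenant-shaped — might be platform infra
	}
	if hasActiveConns {
		return "keep" // rule 2: never kill a live tunnel
	}
	if knownSlugs[slug] {
		return "keep" // rule 3: slug still exists in prod or staging
	}
	return "delete" // rule 4: orphan
}

func main() {
	fmt.Println(decide("tenant-e2e-canvas-old", false, map[string]bool{"acme": true}))
}
```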

Verified by:
  - shell syntax check (bash -n)
  - YAML lint
  - Decide-logic offline smoke (7 cases, all pass)
  - End-to-end dry-run smoke with stubbed CP + CF APIs

Required secrets (added to existing org-secrets):
  CF_API_TOKEN          must include account:cloudflare_tunnel:edit
                        scope (separate from zone:dns:edit used by
                        sweep-cf-orphans — same token if scope is
                        broad, or a new token if narrowly scoped).
  CF_ACCOUNT_ID         account that owns the tunnels (visible in
                        dash.cloudflare.com URL path).
  CP_PROD_ADMIN_TOKEN   reused from sweep-cf-orphans.
  CP_STAGING_ADMIN_TOKEN reused from sweep-cf-orphans.

Note: CP-side root cause (tenant-delete should cascade to tunnel
delete) is in molecule-controlplane and worth fixing separately. This
janitor is the operational backstop in the meantime — the same pattern
was applied to DNS records while their root cause went unaddressed.
2026-04-29 19:42:47 -07:00
Hongming Wang
a603dc449f
Merge pull request #2327 from Molecule-AI/auto/fix-canvas-e2e-teardown-race
fix(e2e-canvas): kill teardown race that poisons concurrent runs (unblocks staging→main #2264)
2026-04-30 02:26:44 +00:00
Hongming Wang
15b98c4916 fix(e2e-canvas): kill teardown race that poisons concurrent runs
Setup wrote .playwright-staging-state.json at the END (step 7), only
after org create + provision-wait + TLS + workspace create + workspace-
online all succeeded. If setup crashed at steps 1-6, the org existed in
CP but the state file did not, so Playwright's globalTeardown bailed
out ("nothing to tear down") and the workflow safety-net pattern-swept
every e2e-canvas-<today>-* org to compensate. That sweep deleted
concurrent runs' live tenants — including their CF DNS records —
causing victims' next fetch to die with `getaddrinfo ENOTFOUND`.

Race observed 2026-04-30 on PR #2264 staging→main: three real-test
runs killed each other mid-test, blocking 68 commits of staging→main
promotion.

Fix: write the state file as setup's first action, right after slug
generation, before any CP call. Now:

  - Crash before slug gen        → no state file, no orphan to clean
  - Crash during steps 1-6       → state file has slug; teardown deletes
                                   it (DELETE 404s if org never created)
  - Setup completes              → state file has full state; teardown
                                   deletes the slug

The workflow safety-net no longer pattern-sweeps; it reads the state
file and deletes only the recorded slug. Concurrent canvas-E2E runs no
longer poison each other.

Verified by:
  - tsc --noEmit on staging-setup.ts + staging-teardown.ts
  - YAML lint on e2e-staging-canvas.yml
  - Code review: state file write moved to line 113 (post-makeSlug,
    pre-CP) with the original line-249 write retained as a "promote
    to full state" overwrite at the end
2026-04-29 19:23:56 -07:00
Hongming Wang
e3588d4934
Merge pull request #2326 from Molecule-AI/auto/issue-2169-railway-pin-audit-cron
ci: daily Railway pin-audit cron + issue-on-failure (#2169)
2026-04-30 00:46:12 +00:00
Hongming Wang
0b1d4f294b
Merge pull request #2304 from Molecule-AI/docs/molecule-channel-plugin-pointer
docs: surface molecule-mcp-claude-channel plugin in external-workspace flow + CONTRIBUTING
2026-04-30 00:45:51 +00:00
Hongming Wang
c8205b009a ci: daily Railway pin-audit cron + issue-on-failure (#2169)
Acceptance criterion 3 of #2001 ("CI check that fails if TENANT_IMAGE
contains a SHA-shaped suffix") was deferred from PR #2168 because
querying Railway from a GitHub Actions runner needs RAILWAY_TOKEN
plumbed as a repo secret. The detection script + regression test in
#2168 cover detection; this is the automation-cadence layer.

Daily 13:00 UTC schedule (06:00 PT) + workflow_dispatch. Daily is the
right cadence for variables-tier config — Railway env var changes are
deliberate operator actions, low-frequency. Hourly would risk Railway
API rate-limit surprises.

Issue-on-failure pattern mirrors e2e-staging-sanity.yml — drift opens
a `railway-drift` priority-high issue (or comments on the open one),
and a subsequent clean run auto-closes it with a "drift resolved"
comment. No human-in-the-loop needed for the close.

Schedule-vs-dispatch secret hardening per
feedback_schedule_vs_dispatch_secrets_hardening:
- Schedule trigger HARD-FAILS on missing RAILWAY_AUDIT_TOKEN
  (silent-success was the failure mode that bit us before)
- workflow_dispatch SOFT-SKIPS so an operator can dry-run the
  workflow shape during initial token provisioning
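The gate's decision table, sketched as a pure function (step shape and variable names assumed, not the actual workflow step):

```shell
# Hypothetical sketch of the schedule-vs-dispatch token gate.
#   any event + token present      -> run the audit
#   schedule + missing token       -> hard fail (silent-success was the
#                                     prior failure mode)
#   dispatch + missing token       -> soft skip (operator dry-run during
#                                     initial provisioning)
token_gate() {
  event_name="$1"; token="$2"
  if [ -n "$token" ]; then
    echo run
  elif [ "$event_name" = "schedule" ]; then
    echo fail
  else
    echo skip
  fi
}
```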

Operator action required before this gate is live:
- Provision a Railway API token, read-only `variables` scope on the
  molecule-platform project (id 7ccc8c68-61f4-42ab-9be5-586eeee11768)
- Store as repo secret RAILWAY_AUDIT_TOKEN
- Rotate per the standard 90-day schedule

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 17:43:01 -07:00
Hongming Wang
d3b2e9e61d
Merge pull request #2325 from Molecule-AI/auto/fix-path-filter-check-name-parity-2326
ci: fix path-filter check-name parity in e2e-api + e2e-staging-canvas (unblocks staging→main #2264)
2026-04-30 00:32:08 +00:00
Hongming Wang
c79cf1cfa9 ci: collapse two-jobs-sharing-name path-filter pattern in e2e-api/e2e-staging-canvas
Branch protection treats matching-name check runs as a SET — any SKIPPED
member fails the required-check eval, even with SUCCESS siblings. The
two-jobs-sharing-name pattern (no-op + real-job) emits one SKIPPED + one
SUCCESS check run per workflow run; with multiple runs at the same SHA
(detect-changes triggers + auto-promote re-runs) the SET fills with
SKIPPED entries that block branch protection.

Verified live on PR #2264 (staging→main auto-promote): mergeStateStatus
stayed BLOCKED for 18+ hours despite APPROVED + MERGEABLE + all gates
green at the workflow level. `gh pr merge` returned "base branch policy
prohibits the merge"; `enqueuePullRequest` returned "No merge queue
found for branch 'main'". The check-runs API showed `E2E API Smoke
Test` and `Canvas tabs E2E` each had 2 SKIPPED + 2 SUCCESS at head SHA
66142c1e.

Fix: collapse no-op + real-job into ONE job with no job-level `if:`,
gating real work via per-step `if: needs.detect-changes.outputs.X ==
'true'`. The job always runs and emits exactly one SUCCESS check run
under the required-check name regardless of paths-filter outcome —
branch-protection-clean.

Same pattern as ci.yml's earlier conversion of Canvas/Platform/Python/
Shellcheck (PR #2322). Completes the parity fix that should have been
applied to all four path-filtered required checks at once.
2026-04-29 17:29:44 -07:00
Hongming Wang
66142c1eab
Merge pull request #2319 from Molecule-AI/auto/issue-2312-pr-f-saas-secret-delivery
feat(saas): deliver platform_inbound_secret via /registry/register (RFC #2312, PR-F)
2026-04-29 23:53:03 +00:00
Hongming Wang
5d34abd5b5 Merge remote-tracking branch 'origin/staging' into auto/issue-2312-pr-f-saas-secret-delivery
# Conflicts:
#	scripts/build_runtime_package.py
2026-04-29 16:46:23 -07:00
Hongming Wang
5806feadcc
Merge pull request #2314 from Molecule-AI/auto/issue-2312-pr-b-workspace-ingest
feat(workspace): /internal/chat/uploads/ingest endpoint (RFC #2312, PR-B)
2026-04-29 23:40:19 +00:00
Hongming Wang
01696281b3
Merge pull request #2324 from Molecule-AI/auto/issue-2244-latest-ancestry-check
ci: ancestry-check on auto-promote :latest (#2244)
2026-04-29 23:26:34 +00:00
Hongming Wang
57b0b4b4d1
Merge pull request #2323 from Molecule-AI/auto/issue-2306-bearer-derive-callerid
fix(a2a_proxy): derive callerID from bearer when X-Workspace-ID absent (#2306)
2026-04-29 23:26:33 +00:00
Hongming Wang
f7b9feb34f ci: ancestry-check on auto-promote :latest (#2244)
Two rapid main pushes whose E2Es complete out-of-order can promote
:latest backwards: SHA-A merges, SHA-B merges, SHA-B's E2E completes
first → :latest = staging-B → SHA-A's E2E completes → :latest = staging-A.
Now :latest is older than main's tip and stays wrong until the next
main push lands. The orphan-reconciler "next run corrects it" pattern
doesn't apply because there's no auto-corrective re-promote.

Detection: read the current :latest's `org.opencontainers.image.revision`
label (set by publish-workspace-server-image.yml at build time) and ask
the GitHub compare API how the candidate SHA relates to current. Branch
on `.status`:

  ahead     → retag (target newer)
  identical → retag is a no-op
  behind    → HARD FAIL (this is the race we're catching)
  diverged  → HARD FAIL (force-push or unusual history)
  error     → fail; manual dispatch can override
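The branch logic reduces to a small case statement over the compare API's `.status` field. A sketch under the assumption that the workflow shells out to something like `gh api repos/OWNER/REPO/compare/CURRENT...CANDIDATE --jq .status`; names here are illustrative, not the actual step:

```shell
# Hypothetical decision table over the GitHub compare API's `.status`
# for compare/<current-latest-revision>...<candidate-sha>.
ancestry_decision() {
  case "$1" in
    ahead)     echo retag ;;       # candidate is newer: promote
    identical) echo retag-noop ;;  # same commit: retag is harmless
    behind)    echo hard-fail ;;   # the out-of-order race being caught
    diverged)  echo hard-fail ;;   # force-push or unusual history
    *)         echo hard-fail ;;   # API error: fail; dispatch overrides
  esac
}
```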

Hard-fail rather than soft-skip per the approved design — silent-bypass
is the class we're moving away from per
feedback_schedule_vs_dispatch_secrets_hardening. Workflow goes red,
oncall sees it, operator decides whether to retry, force-promote, or
investigate. Manual dispatch skips the check (operator override),
matching the gate-step's existing semantics.

Backward-compat: when current :latest carries no revision label
(legacy image), skip-with-warning. All :latest images on main are
post-label as of 2026-04-29, so this branch becomes dead within 90 days
— TODO note in the step explains the cleanup.

No tests — the race is hypothetical at our scale (<1 occurrence/year
expected for a fleet of ≤20 paying tenants), and the only way to
exercise the new branches is to construct production-shape image
state. The hard-fail path lands behind the existing E2E gate-check, so
a regression in this step would surface as a failed promote (visible),
not a silent advance (invisible).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 16:18:42 -07:00
Hongming Wang
83e3fe436f Merge remote-tracking branch 'origin/staging' into auto/issue-2312-pr-b-workspace-ingest 2026-04-29 16:18:01 -07:00
Hongming Wang
ecb6ad3c2a
Merge pull request #2322 from Molecule-AI/auto/issue-canvas-noop-fix-v2
ci: collapse Canvas (Next.js) to single job with conditional steps (supersedes #2321)
2026-04-29 23:12:37 +00:00
Hongming Wang
142b8e9d5b ci: collapse all 4 path-filtered required checks to single-job-with-conditional-steps
Supersedes #2321 + #2322. Applies the same shape uniformly across every
required check that uses a path filter: Canvas (Next.js), Platform (Go),
Python Lint & Test, Shellcheck (E2E scripts).

The bug + fix in one paragraph:

GitHub registers a check run for every job whose `name:` matches the
required-check context, regardless of whether the job actually executed.
A job-level `if:` that evaluates false produces a SKIPPED check run.
Branch protection's "required check" rule looks at the SET of check
runs with the matching context name on the latest commit and treats
any conclusion other than SUCCESS as not-passed — including SKIPPED.
Adding a sibling no-op job under the same `name:` (PR #2321 / #2322
attempt) doesn't help: branch protection still sees the SKIPPED
sibling and stays BLOCKED.

The shape that works: ONE job per required check name, no job-level
`if:`, all real work gated per-step. The job always runs and reports
SUCCESS regardless of which paths changed.
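A minimal fragment of that shape, illustrative only (job id, filter outputs, and commands assumed, not the repo's actual ci.yml):

```yaml
# Hypothetical fragment, not the repo's actual workflow.
canvas-build:
  name: Canvas (Next.js)      # matches the required-check context
  runs-on: ubuntu-latest
  needs: changes
  # deliberately NO job-level `if:`; the job always runs, always SUCCESS
  defaults:
    run:
      working-directory: canvas
  steps:
    - name: No-op spin-up
      working-directory: .    # override: safe even when checkout is skipped
      run: echo "no canvas changes"
    - uses: actions/checkout@v4
      if: needs.changes.outputs.canvas == 'true'
    - name: Build & test
      if: needs.changes.outputs.canvas == 'true'
      run: npm ci && npm run build && npm test
```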

This patch:
  * Canvas (Next.js): drops the `canvas-build-noop` shadow added in
    #2321 (which didn't actually clear merge state — verified live on
    PR #2314). Refactors `canvas-build` to always run; gates checkout/
    setup-node/install/build/test on `if: needs.changes.outputs.canvas
    == 'true'`. Coverage upload step also gated.
  * Platform (Go): drops job-level `if:`. Gates checkout/setup-go/
    download/build/vet/lint/test/coverage-report/threshold-check on
    per-step `if:`.
  * Python Lint & Test: drops job-level `if:`. Gates checkout/setup-
    python/install/pytest on per-step `if:`.
  * Shellcheck (E2E scripts): drops job-level `if:`. Gates checkout/
    shellcheck-run on per-step `if:`.

Each refactored job adds a leading no-op echo step with a
`working-directory: .` override, so the always-running spin-up doesn't
fail when the job's default working-directory (workspace,
workspace-server, canvas) doesn't exist because checkout was skipped.

Why all four in one PR: the bug shape is identical across all four,
and a future PR that only touches workspace-server (passing platform
filter, missing canvas/python/scripts) would hit the same BLOCKED state
on whichever filter it missed. PR-A and PR-2321 merged because their
diffs happened to trigger every filter; PR-B (#2314) only missed
canvas. Fixing one at a time means re-living this debugging cycle three
more times.

Cost: ~10s of always-on CI runtime per PR per job (the ubuntu-latest
spin-up + the no-op echo). 40s aggregate, negligible vs. the manual-
merge cost when BLOCKED catches us.

Memory `feedback_branch_protection_check_name_parity` already updated
(2026-04-29) to mark the original two-jobs-sharing-name pattern as
DO NOT FOLLOW and document the working shape this PR uses.

Refs PR #2321 (the misguided fix-attempt that this supersedes).
2026-04-29 16:09:22 -07:00
Hongming Wang
ca6fc55c8b fix(a2a_proxy): derive callerID from bearer when X-Workspace-ID absent (#2306)
External callers (third-party SDKs, the channel plugin) authenticate
purely via bearer and frequently don't set the X-Workspace-ID header.
Without this, activity_logs.source_id ends up NULL — breaking the
peer_id signal on notifications, the "Agent Comms by peer" canvas tab,
and any analytics that breaks down inbound A2A by sender.

The bearer is the authoritative caller identity per the wsauth contract
(it's what proves who you are); the header is a display/routing hint
that must agree with it. So we derive callerID from the bearer's owning
workspace whenever the header is absent. The existing validateCallerToken
guard fires after this and enforces token-to-callerID binding the same
way it always has.

Org-token requests are skipped — those grant org-wide access and don't
bind to a single workspace, so the canvas-class semantics (callerID="")
are preserved. Bearer-resolution failures (revoked, removed workspace)
fall through to canvas-class as well, never 401.

New wsauth.WorkspaceFromToken exposes the bearer→workspace lookup as a
modular interface; mirrors ValidateAnyToken's defense-in-depth JOIN on
workspaces.status != 'removed'.

Tests: 4 unit tests on WorkspaceFromToken + 3 integration tests on
ProxyA2A covering the three observable paths (bearer-derived,
org-token skipped, derive-failure fallthrough).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 16:05:56 -07:00
Hongming Wang
e22a56d351 ci: collapse Canvas (Next.js) to single job with conditional steps
Supersedes PR #2321's two-jobs-sharing-a-name approach, which didn't
actually clear branch-protection's required-check evaluation. Live
test on PR #2314: GraphQL `isRequired` confirmed BOTH check runs
under "Canvas (Next.js)" name (one SUCCESS via no-op, one SKIPPED via
real job) registered, and the SKIPPED one kept mergeStateStatus =
BLOCKED despite the SUCCESS sibling. Branch protection's "set of
matching contexts" semantic is stricter than what the durable feedback
memory documented — at least one passing run isn't enough; SKIPPED
counts as not-passed regardless.

Real fix: ONE job that always runs (no job-level `if:`), with all
real work gated on the path filter via per-step `if:`. Produces
exactly one "Canvas (Next.js)" check run per commit, always SUCCEEDS,
regardless of which paths changed. Costs ~10s of always-on CI runtime
per PR — negligible vs. the manual-merge cost when the BLOCKED state
catches us.

This same anti-pattern probably affects Platform (Go) (`platform`
filter), Python Lint & Test (`python` filter), and Shellcheck (E2E
scripts) (`scripts` filter) — all required, all path-gated. PR-A and
PR-2321 merged because they happened to trigger every filter; PR-B
only missed canvas. File a follow-up issue to apply the same
single-job-conditional-steps pattern across those required jobs to
remove the latent merge-blocker.

Updates feedback memory: branch_protection_check_name_parity is wrong
about "two jobs sharing name + at-least-one-success works." Need to
correct the note.
2026-04-29 16:01:38 -07:00