publish-runtime.yml had a broad smoke test (AgentCard call-shape, well-known
mount alignment, new_text_message) inlined as a heredoc. runtime-prbuild-
compat.yml had a narrow inline smoke (just `from main import main_sync`).
Result: a PR could introduce SDK shape regressions that pass at PR time
and only fail at publish time, post-merge.
Extract the broad smoke into scripts/wheel_smoke.py and invoke it from
both workflows. PR-time gate now matches publish-time gate — same script,
same assertions. Eliminates the drift hazard of two heredocs that have
to be kept in lockstep manually.
Verified locally:
* Built wheel from workspace/ source, installed in venv, ran smoke → pass
* Simulated AgentCard kwarg-rename regression → smoke catches it as
`ValueError: Protocol message AgentCard has no "supported_interfaces"
field` (the exact failure mode of #2179 / supported_protocols incident)
Path filter for runtime-prbuild-compat extended to include
scripts/wheel_smoke.py so smoke-only edits get PR-validated. publish-
runtime path filter intentionally NOT extended — smoke-only edits should
not auto-trigger a PyPI version bump.
Subset of #131 (the broader "invoke main() against stub config" goal
remains pending — main() needs a config dir + stub platform server).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Self-review of #2403 caught a regression: with a 1-tenant fleet (the
exact case the original #2402 fix targeted), the new floor would
re-introduce the flake. Trace:
TOTAL=1, UNREACHABLE=1, $((1/2))=0
if 1 -gt 0 → TRUE → exit 1
The 50%-rule only meaningfully distinguishes "real outage" from
"teardown race" when the fleet is large enough that "half down" is
statistically meaningful. With 1-3 tenants, canary-verify is the
actual gate (it runs against the canary first and aborts the rollout
if the canary fails to come up).
Gate the floor on TOTAL_VERIFIED >= 4. Truth table:
TOTAL  UNREACHABLE  RESULT
  1         1       soft-warn (original e2e flake case)
  4         2       soft-warn (exactly half)
  4         3       hard-fail (75% — real outage)
 10         6       hard-fail (60% — real outage)
Mirrored across staging.yml + main.yml.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Belt-and-suspenders sanity floor on top of the unreachable-soft-warn
introduced earlier in this PR. Addresses the residual gap noted in
review: if a new image crashes on startup, every tenant ends up
unreachable, and the soft-warn alone would let that ship as a green
deploy. Canary-verify catches it on the canary tenant first, but this
guard is a fallback for canary-skip dispatches and same-batch races.
Threshold is 50% of healthz_ok-snapshotted tenants — comfortably above
the typical e2e-* teardown rate (5-10/hour, ~1 ephemeral tenant per
batch) but below any plausible real-outage scenario.
Mirrored across staging.yml + main.yml for shape parity.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three findings from re-reviewing PR #2401 with fresh eyes:
1. Critical — port binding to 0.0.0.0
compose.yml's cf-proxy bound 8080:8080 (default 0.0.0.0). The harness
uses a hardcoded ADMIN_TOKEN so anyone on the local network or VPN
could hit /workspaces with admin privileges. Switch to 127.0.0.1:8080
so admin access is loopback-only — safe for E2E and prevents the
known-token leak.
2. Required — dead code in cp-stub
peersFailureMode + __stub/mode + __stub/peers were declared with
atomic.Value setters but no handler ever READ from them. CP doesn't
host /registry/peers (the tenant does), so the toggles couldn't
drive responses. Removed the dead vars + handlers; kept
redeployFleetCalls counter and __stub/state since those have a real
consumer in the buildinfo replay.
3. Required — replay's auth-context dependency
peer-discovery-404.sh's Python eval ran a2a_client.get_peers_with_
diagnostic() against the live tenant. Without a workspace token
file, auth_headers() yields empty headers — so the helper might
exercise a 401 branch instead of the 404 branch the replay claims
to test.
Split the assertion into (a) WIRE — direct curl proves the platform
returns 404 from /registry/<unregistered>/peers — and (b) PARSE —
feed the helper a mocked 404 via httpx patches, no network/auth.
Each branch tests exactly what it claims.
Also added a graceful skip when the workspace runtime in the
current checkout pre-dates #2399 (no get_peers_with_diagnostic
yet) — replay falls back to wire-only verification with a clear
message instead of an opaque AttributeError. After #2399 lands on
staging, both branches will run.
cp-stub still builds clean. compose.yml validates. Replay's bash
syntax + Python eval both verified locally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The /buildinfo verify step (PR #2398) was treating "no /buildinfo response"
the same as "tenant returned wrong SHA" — both bumped MISMATCH_COUNT and
hard-failed the workflow. First post-merge run on staging caught a real
edge case: ephemeral E2E tenants (slug e2e-20260430-...) get torn down by
the E2E teardown trap between CP's healthz_ok snapshot and the verify step
running, so the verify step would dial into DNS that no longer resolves
and hard-fail on a benign condition.
The bug class we actually care about is STALE (tenant up + serving old
code, the #2395 root). UNREACHABLE post-redeploy is almost always a benign
teardown race; real "tenant up but unreachable" is caught by CP's own
healthz monitor + the alert pipeline, so double-counting it here was
making this workflow flaky on every staging push that overlapped E2E.
Wire:
- Split MISMATCH_COUNT into STALE_COUNT + UNREACHABLE_COUNT.
- STALE → hard-fail the workflow (the bug class we're guarding).
- UNREACHABLE → ⚠️ soft-warn, don't fail. Reachable-mismatch still hard-fails.
- Job summary surfaces both lists separately so on-call can tell at a
glance which class fired.
Mirror in redeploy-tenants-on-main.yml for shape parity (prod has fewer
ephemeral tenants but identical asymmetry would be a gratuitous fork).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The harness brings up the SaaS tenant topology on localhost using the
SAME workspace-server/Dockerfile.tenant image that ships to production.
Tests run against http://harness-tenant.localhost:8080 and exercise the
same code path a real tenant takes:
  client
    → cf-proxy  (nginx; CF tunnel + LB header rewrites)
    → tenant    (Dockerfile.tenant — combined platform + canvas)
    → cp-stub   (minimal Go CP stand-in for /cp/* paths)
    → postgres + redis
Why this exists: bugs that survive `go run ./cmd/server` and ship to
prod almost always live in env-gated middleware (TenantGuard, /cp/*
proxy, canvas proxy), header rewrites, or the strict-auth / live-token
mode. The harness activates ALL of them locally so #2395 + #2397-class
bugs can be reproduced before deploy.
Phase 1 surface:
- cp-stub/main.go: minimal CP stand-in. /cp/auth/me, redeploy-fleet,
/__stub/{peers,mode,state} for replay scripts. Catch-all returns
501 with a clear message when a new CP route appears (see the
sketch after this list).
- cf-proxy/nginx.conf: rewrites Host to <slug>.localhost, injects
X-Forwarded-*, disables buffering to mirror CF tunnel streaming
semantics.
- compose.yml: one service per topology layer; tenant builds from
the actual production Dockerfile.tenant.
- up.sh / down.sh / seed.sh: lifecycle scripts.
- replays/peer-discovery-404.sh: reproduces #2397 + asserts the
diagnostic helper from PR #2399 surfaces "404" + "registered".
- replays/buildinfo-stale-image.sh: reproduces #2395 + asserts
/buildinfo wire shape + GIT_SHA injection from PR #2398.
- README.md: topology, quickstart, what the harness does NOT cover.
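For orientation, a minimal sketch of the cp-stub catch-all mentioned above —
the port, the JSON body, and the exact handler wiring are assumptions, not
the stub's actual code:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Stubbed CP routes are registered explicitly (only one shown here).
	mux.HandleFunc("/cp/auth/me", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		fmt.Fprint(w, `{"id":"stub-operator","role":"admin"}`)
	})

	// Everything else answers 501 with a pointer at the stub, so a replay
	// hitting a newly added CP route fails loudly instead of with a
	// misleading 404.
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		http.Error(w, fmt.Sprintf("cp-stub: %s %s is not stubbed — add a handler in cp-stub/main.go",
			r.Method, r.URL.Path), http.StatusNotImplemented)
	})

	log.Fatal(http.ListenAndServe(":9090", mux))
}
```

Answering 501 rather than letting the mux fall through to a 404 makes "the
stub is missing this route" unambiguous the first time a replay hits a newly
added CP endpoint.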
Phases 2-3 (separate PRs):
- Phase 2: convert tests/e2e/test_api.sh to target the harness URL
instead of localhost; make harness-based replays a required CI gate.
- Phase 3: config-coherence lint that diffs harness env list against
production CP's env list, fails CI on drift.
Verification:
- cp-stub builds (go build ./...).
- cp-stub responds to all stubbed endpoints (smoke-tested locally).
- compose.yml passes `docker compose config --quiet`.
- All shell scripts pass `bash -n` syntax check.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #2397. Today, every empty-peer condition (true empty, 401/403, 404,
5xx, network) collapses to a single message: "No peers available (this
workspace may be isolated)". The user has no way to tell whether they need
to provision more workspaces (true isolation), restart the workspace
(auth), re-register (404), page on-call (5xx), or check network (timeout) —
five different operator actions, one ambiguous string.
Wire:
- new helper get_peers_with_diagnostic() in a2a_client.py returns
(peers, error_summary). error_summary is None on 200; a short
actionable string on every other branch.
- get_peers() now shims through it so non-tool callers (system-prompt
formatters) keep the bare-list contract.
- tool_list_peers() switches to the diagnostic helper and surfaces the
actual reason. The "may be isolated" string is removed; true empty
now reads "no peers in the platform registry."
Tests:
- TestGetPeersWithDiagnostic: 200, 200-empty, 401, 403, 404, 5xx,
network exception, 200-but-non-list-body, and the bare-list-shim
regression guard.
- TestToolListPeers: each diagnostic branch surfaces its reason +
explicit assertion that "may be isolated" is gone.
Coverage 91.53% (floor 86%). 122 a2a tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the gap that let issue #2395 ship: redeploy-fleet workflows reported
ssm_status=Success based on SSM RPC return code alone, while EC2 tenants
silently kept serving the previous :latest digest because docker compose up
without an explicit pull is a no-op when the local tag already exists.
Wire:
- new buildinfo package exposes GitSHA, set at link time via -ldflags from
the GIT_SHA build-arg (default "dev" so test runs without ldflags fail
closed against an unset deploy) — see the sketch after this list
- router exposes GET /buildinfo returning {git_sha} — public, no auth,
cheap enough to curl from CI for every tenant
- both Dockerfiles thread GIT_SHA into the Go build
- publish-workspace-server-image.yml passes GIT_SHA=github.sha for both
images
- redeploy-tenants-on-main.yml + redeploy-tenants-on-staging.yml curl each
tenant's /buildinfo after the redeploy SSM RPC and fail the workflow on
digest mismatch; staging treats both :latest and :staging-latest as
moving tags; verification is skipped only when an operator pinned a
specific tag via workflow_dispatch
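A minimal sketch of the buildinfo piece from the first bullet — package path,
handler name, and router wiring are assumptions; only GitSHA, the "dev"
default, and the {git_sha} wire shape come from this change:

```go
package buildinfo

import (
	"encoding/json"
	"net/http"
)

// GitSHA is injected at link time from the GIT_SHA build-arg, e.g.
//   go build -ldflags "-X <module>/internal/buildinfo.GitSHA=${GIT_SHA}"
// The "dev" default fails closed: a binary built without the ldflag can
// never match the SHA the redeploy workflows compare against.
var GitSHA = "dev"

// Handler serves the wire shape the workflows' jq lookup depends on:
// {"git_sha": "<sha>"} — public, unauthenticated, cheap to curl per tenant.
func Handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(map[string]string{"git_sha": GitSHA})
}
```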
Tests:
- TestGitSHA_DefaultDevSentinel pins the dev default
- TestBuildInfoEndpoint_ReturnsGitSHA pins the wire shape that the
workflow's jq lookup depends on
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two related fixes after self-review of #2396:
1. workspace_bootstrap.go:62 — `SET status = 'failed'` was missed in the
initial sweep. Now parameterized as $3 with models.StatusFailed.
Test fixed with the additional WithArgs sentinel.
2. Drift gate now scans production .go AST for hard-coded
`UPDATE workspaces … SET status = '<literal>'` and fails with
file:line. This catches the kind of miss the first commit just
fixed — the original migration-vs-codebase axis only verified
AllWorkspaceStatuses ⊆ enum, not "no raw literals in writes."
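Roughly what such a gate looks like — the scan directory and the simplified
regex over extracted string literals are illustrative choices, not the gate's
actual implementation:

```go
package handlers_test

import (
	"go/ast"
	"go/parser"
	"go/token"
	"io/fs"
	"regexp"
	"strings"
	"testing"
)

// Simplified pattern for a hard-coded status write inside a SQL string.
var setStatusLiteral = regexp.MustCompile(`(?i)SET\s+status\s*=\s*'[a-z_]+'`)

func TestNoHardCodedStatusWrites(t *testing.T) {
	fset := token.NewFileSet()
	pkgs, err := parser.ParseDir(fset, ".", func(fi fs.FileInfo) bool {
		return !strings.HasSuffix(fi.Name(), "_test.go") // production sources only
	}, 0)
	if err != nil {
		t.Fatalf("parse: %v", err)
	}
	for _, pkg := range pkgs {
		for _, file := range pkg.Files {
			ast.Inspect(file, func(n ast.Node) bool {
				lit, ok := n.(*ast.BasicLit)
				if !ok || lit.Kind != token.STRING {
					return true
				}
				if setStatusLiteral.MatchString(lit.Value) {
					pos := fset.Position(lit.Pos())
					t.Errorf("%s:%d → hard-coded status literal; use a parameterized value",
						pos.Filename, pos.Line)
				}
				return true
			})
		}
	}
}
```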
Verified the gate fires: dropped a synthetic 'failed' literal into
internal/handlers/_drift_sanity.go and confirmed the gate flagged
"internal/handlers/_drift_sanity.go:6 → SET status = 'failed'".
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Eliminate raw 'awaiting_agent'/'hibernating'/'failed'/etc string literals
from production status writes. Adds models.WorkspaceStatus typed alias and
models.AllWorkspaceStatuses canonical slice; every UPDATE workspaces SET
status = ... now passes a parameterized $N typed value rather than a
hard-coded SQL literal.
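The write shape this describes, as a minimal sketch (the import path and the
two-parameter query are placeholders; real call sites carry more columns):

```go
package handlers

import (
	"context"
	"database/sql"

	"example.com/workspace-server/internal/models" // assumed import path
)

// markWorkspaceFailed shows the shape: the status travels as a typed
// parameter ($1), never as a quoted SQL literal.
func markWorkspaceFailed(ctx context.Context, db *sql.DB, workspaceID string) error {
	_, err := db.ExecContext(ctx,
		`UPDATE workspaces SET status = $1 WHERE id = $2`,
		models.StatusFailed, // models.WorkspaceStatus constant
		workspaceID,
	)
	return err
}
```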
Defense-in-depth follow-up to migration 046 (#2388): the Postgres enum
type was missing 'awaiting_agent' + 'hibernating' for ~5 days because
sqlmock regex matching cannot enforce live enum constraints. The drift
gate is now a proper Go AST + SQL parser (no regex), asserting the
codebase ⊆ migration enum and every const appears in the canonical
slice. With status as a parameterized typed value, future enum mismatches
fail at the SQL layer in tests, not silently in prod.
Test coverage: full suite passes with -race; drift gate green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When gh run list returns [] (no E2E run on the main SHA — the common
case for canvas-only / cmd-only / sweep-only changes whose paths
don't trigger E2E), jq's `.[0]` is null and the interpolation
`"\(null)/\(null // "none")"` produces "null/none". The case
statement has no `null/none)` branch, so it falls into `*)` →
exit 1 → auto-promote-on-e2e fails → `:latest` doesn't get retagged
to the new SHA → tenants on `redeploy-tenants-on-main` end up
pulling the OLD `:latest` digest.
Surfaced 2026-04-30 17:00Z as the first observable consequence of
PR #2389 (App-token dispatch fix). Every prior auto-promote-on-e2e
run was triggered by E2E completion (the "Upstream is E2E itself"
short-circuit at line 151 fired before reaching the gate). #2389
made publish-image's completion event correctly fire workflow_run
listeners — auto-promote-on-e2e is one of those listeners — and
hit the latent jq bug on the first publish-upstream run.
Fix: change `.[0]` to `(.[0] // {})` in the jq filter so the empty-
array case becomes `none/none` (the documented "E2E paths-filtered
out for this SHA — proceed" branch) instead of the unhandled
`null/none`. Also default `.status` for the same defensive reason.
Verified the three input shapes locally:
[] → "none/none" ✓
[{status:completed,conclusion:success}] → "completed/success" ✓
[{status:in_progress,conclusion:null}] → "in_progress/none" ✓
Outer `|| echo "none/none"` fallback retained as defense-in-depth
for non-zero gh exits (network / auth failures).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Symmetric with the existing CPProvisionerAPI interface. Closes the
asymmetry where the SaaS provisioner field was an interface (mockable
in tests) but the Docker provisioner field was a concrete pointer
(not).
## Changes
- New ``provisioner.LocalProvisionerAPI`` interface — the 7 methods
WorkspaceHandler / TeamHandler call on h.provisioner today: Start,
Stop, IsRunning, ExecRead, RemoveVolume, VolumeHasFile,
WriteAuthTokenToVolume. Compile-time assertion confirms *Provisioner
satisfies it. Mirror of cp_provisioner.go's CPProvisionerAPI block.
- ``WorkspaceHandler.provisioner`` and ``TeamHandler.provisioner``
re-typed from ``*provisioner.Provisioner`` to
``provisioner.LocalProvisionerAPI``. Constructor parameter type is
unchanged — the assignment widens to the interface, so the 200+
callers of ``NewWorkspaceHandler`` / ``NewTeamHandler`` are
unaffected.
- Constructors gain a ``if p != nil`` guard before assigning to the
interface field. Without this, ``NewWorkspaceHandler(..., nil, ...)``
(the test fixture pattern across 200+ tests) yields a typed-nil
interface value where ``h.provisioner != nil`` evaluates *true*,
and the SaaS-vs-Docker fork incorrectly routes nil-fixture tests
into the Docker code path. Documented inline with reference to
the Go FAQ; a minimal sketch of the trap follows this list.
- Hardened the 5 Provisioner methods that lacked nil-receiver guards
(Start, ExecRead, WriteAuthTokenToVolume, RemoveVolume,
VolumeHasFile) — return ErrNoBackend on nil receiver instead of
panicking on p.cli dereference. Symmetric with Stop/IsRunning
(already hardened in #1813). Defensive cleanup so a future caller
that bypasses the constructor's nil-elision still degrades
cleanly.
- Extended TestZeroValuedBackends_NoPanic with 5 new sub-tests
covering the newly-hardened nil-receiver paths. Defense-in-depth:
a future refactor that drops one of the nil-checks fails red here
before reaching production.
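A condensed sketch of the pieces above — one interface method instead of
seven, everything collapsed into one package, placeholder bodies — showing
the compile-time assertion, the nil-receiver guard, and the typed-nil trap
the constructor guard avoids:

```go
package provisioner

import (
	"context"
	"errors"
)

var ErrNoBackend = errors.New("provisioner: no backend configured")

// Trimmed to a single method for illustration; the real interface lists all seven.
type LocalProvisionerAPI interface {
	IsRunning(ctx context.Context, workspaceID string) (bool, error)
}

type dockerClient struct{}

func (d *dockerClient) containerRunning(ctx context.Context, id string) (bool, error) {
	return false, nil // placeholder for the real Docker API call
}

type Provisioner struct {
	cli *dockerClient // concrete Docker backend
}

// Nil-receiver guard: degrade to ErrNoBackend instead of panicking on p.cli.
func (p *Provisioner) IsRunning(ctx context.Context, workspaceID string) (bool, error) {
	if p == nil {
		return false, ErrNoBackend
	}
	return p.cli.containerRunning(ctx, workspaceID)
}

// Compile-time assertion that *Provisioner keeps satisfying the interface.
var _ LocalProvisionerAPI = (*Provisioner)(nil)

type WorkspaceHandler struct {
	provisioner LocalProvisionerAPI
}

func NewWorkspaceHandler(p *Provisioner) *WorkspaceHandler {
	h := &WorkspaceHandler{}
	// Assigning a nil *Provisioner straight into the interface field would
	// produce a typed-nil: h.provisioner != nil evaluates true, and the
	// SaaS-vs-Docker fork routes nil-fixture tests down the Docker path.
	if p != nil {
		h.provisioner = p
	}
	return h
}
```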
## Why now
- Provisioner orchestration has been touched in #2366 / #2368 — the
interface symmetry is the natural follow-up captured in #2369.
- Future work (CP fleet redeploy endpoint, multi-backend
provisioners) wants this in place. Memory note
``project_provisioner_abstraction.md`` calls out pluggable
backends as a north-star.
- Memory note ``feedback_long_term_robust_automated.md`` —
compile-time gates + ErrNoBackend symmetry > runtime panics.
## Verification
- ``go build ./...`` clean.
- ``go test ./...`` clean — 1300+ tests pass, including the
previously-flaky Create-with-nil-provisioner paths that now
exercise the constructor's nil-elision correctly.
- ``go test ./internal/provisioner/ -run TestZeroValuedBackends_NoPanic
-v`` — all 11 nil-receiver subtests green (was 6, +5 for the
newly-hardened methods).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Root cause (verified 2026-04-30): GITHUB_TOKEN-initiated
workflow_dispatch creates the dispatched run, but the resulting run's
completion event does NOT fire downstream `workflow_run` triggers.
This is the documented "no recursion" rule:
https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow
Evidence (publish-workspace-server-image runs on main):
run_id      | head_sha | triggering_actor    | canary | redeploy
------------+----------+---------------------+--------+---------
25151545007 | 6ef562ee | HongmingWang-Rabbit | YES    | YES
25171773918 | 21313dc  | github-actions[bot] | NO     | NO
25173801008 | 59dec57  | github-actions[bot] | NO     | NO
The 06:52Z run that "worked" was an operator-fired dispatch from the
terminal — actor was the operator's PAT. The two runs that "dropped"
were dispatched by auto-promote-staging.yml's `gh workflow run` step
authenticated via `secrets.GITHUB_TOKEN`, so the actor became
`github-actions[bot]` and the workflow_run cascade was suppressed.
Same workflow file, same dispatch call, same successful publish run
— only the auth token differed.
Fix: mint a molecule-ai GitHub App installation token before the
dispatch step and use it as `GH_TOKEN`. App-initiated dispatches
DO propagate the workflow_run cascade (the App user is a real
identity, not the GITHUB_TOKEN bot pseudonym).
The molecule-ai App (app_id=3398844, installation 124443072) is
already installed on the org with `actions:write` — no new App
needed. Only secrets are missing.
## Required setup before merge
The following repo secrets must be added at
https://github.com/Molecule-AI/molecule-core/settings/secrets/actions
or auto-promote will hard-fail at the new "Mint App token" step:
- `MOLECULE_AI_APP_ID` = `3398844`
- `MOLECULE_AI_APP_PRIVATE_KEY` = contents of a .pem file generated at
https://github.com/organizations/Molecule-AI/settings/installations/124443072
(Click "Generate a private key" if one doesn't exist yet.)
## Long-term cleanup
The polling tail step still exists because the auto-merge call
itself uses GITHUB_TOKEN, so the FF push to main doesn't fire
publish-workspace-server-image's `push` trigger naturally. Switching
the auto-merge call to use the SAME App token would eliminate the
polling tail entirely. Tracked in #2357.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Migration 043 (2026-04-25) introduced the workspace_status enum but
omitted two values application code had been writing for days, so every
UPDATE that tried to write either value failed silently in production:
'awaiting_agent' (since 2026-04-24, commit 1e8b5e01):
- handlers/workspace.go:333 — external workspace pre-register
- handlers/registry.go (via PR #2382) — liveness offline transition
- registry/healthsweep.go (via PR #2382) — heartbeat-staleness sweep
'hibernating' (since hibernation feature shipped):
- handlers/workspace_restart.go:271 — DB-level claim before stop
All of these sites swallowed the enum-cast error. User-visible impact:
external workspaces never transition to a stale state when their agent
disconnects (canvas shows them stuck on 'online'/'degraded' indefinitely),
new external workspaces never advance past 'provisioning', and idle
workspaces never auto-hibernate (resources held forever).
PR #2382 didn't *cause* this — it inherited the gap and added two more
silent-fail paths on top. The pre-existing two had been broken for five
days and went unnoticed because:
1. sqlmock matches SQL by regex, not against the live enum constraint.
Every test passed despite the prod-only failure.
2. The handlers either drop the Exec error entirely (workspace.go:333)
or log+continue without an alert (the other three).
Fix in three pieces:
1. migrations/046_*.up.sql — ALTER TYPE workspace_status ADD VALUE
'awaiting_agent', 'hibernating'. IF NOT EXISTS makes it idempotent
across re-runs (RunMigrations re-applies until schema_migrations
records the file). ALTER TYPE ADD VALUE doesn't take a heavy lock
and commits immediately, safe under live traffic.
2. migrations/046_*.down.sql — full rename → recreate → cast → drop
recipe. Postgres has no DROP VALUE so this is the only honest
rollback. Pre-flights existing rows to compatible values
(awaiting_agent → offline, hibernating → hibernated) before the
type swap.
3. internal/db/workspace_status_enum_drift_test.go — static gate that
parses every UPDATE/INSERT against `workspaces` in workspace-server/
internal/, extracts every status literal, and asserts each is in
the enum union (CREATE TYPE + every ALTER TYPE ADD VALUE). The gate
runs in unit tests, no DB required, and would have caught both
omissions on the day they shipped. Pattern matches
feedback_behavior_based_ast_gates and feedback_mock_at_drifting_layer.
Verification:
- go test ./internal/db/ -count=1 -race ✓
- go vet ./... ✓
- Drift gate flips red if I delete either ADD VALUE from the migration
(validated via local mutation).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pin the 5 public functions that adapters and the runtime hot path import
through ``from platform_auth import``:
- ``auth_headers`` — every outbound httpx call merges this in
- ``self_source_headers`` — A2A peer + self-message header builder
- ``get_token`` — main.py reads on boot to decide register-vs-resume
- ``save_token`` — main.py persists the platform-issued token
- ``refresh_cache`` — 401-retry path drops in-process cache (#1877)
A grep across workspace/ shows 14+ runtime modules import these:
main.py, heartbeat.py, a2a_client.py, a2a_tools.py, consolidation.py,
events.py, executor_helpers.py (3 sites), molecule_ai_status.py,
builtin_tools/memory.py (3 sites), builtin_tools/temporal_workflow.py
(2 sites). Renaming any of the five (e.g. ``auth_headers`` →
``bearer_headers``) makes every one of those imports raise ImportError
at workspace boot — the failure surface is deep in heartbeat init,
nowhere near the rename site.
Same drift class as the BaseAdapter signature snapshot (#2378, #2380),
skill_loader gate (#2381), runtime_wedge gate (#2383). Reuses the
``_signature_snapshot.py`` helpers shipped in #2381.
Defense-in-depth: ``test_snapshot_has_required_functions`` asserts
the five names are still present, so removing one even with a
synchronized snapshot edit forces an explicit edit here with a
justification.
``clear_cache`` is intentionally NOT in the snapshot — it's a
test-only helper. Production code MUST NOT depend on it.
Verified red on deliberate rename: ``auth_headers`` →
``bearer_headers`` produces a clean diff of the missing function in
the failure message. Restored before commit.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The "do request → check err → defer close → forward headers → set
status → io.Copy → log mid-stream errors" tail was duplicated between
Upload and Download. Each handler had ~12 lines that differed only in:
- the op label in log messages ("upload" vs "download")
- the set of response headers to forward verbatim
(Upload: Content-Type only; Download: Content-Type +
Content-Length + Content-Disposition)
Hoist into ChatFilesHandler.streamWorkspaceResponse(c, op,
workspaceID, forwardURL, req, forwardHeaders). Each call site
reduces to one line. Future changes — request-id forwarding,
observability metric, response-size cap, bytes-streamed log —
go in ONE place rather than two.
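Roughly the shape of the hoisted tail, sketched against plain net/http for
illustration — the real method takes the handler's request context `c` plus
a forwardURL parameter, and its logging details differ:

```go
package handlers

import (
	"io"
	"log"
	"net/http"
)

type ChatFilesHandler struct {
	client *http.Client
}

func (h *ChatFilesHandler) streamWorkspaceResponse(w http.ResponseWriter, op, workspaceID string,
	req *http.Request, forwardHeaders []string) {
	resp, err := h.client.Do(req)
	if err != nil {
		log.Printf("chat_files %s: forward for workspace %s failed: %v", op, workspaceID, err)
		http.Error(w, "workspace unreachable", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	// Forward only the headers the call site asked for, verbatim
	// (Upload: Content-Type; Download: Content-Type, Content-Length,
	// Content-Disposition).
	for _, name := range forwardHeaders {
		if v := resp.Header.Get(name); v != "" {
			w.Header().Set(name, v)
		}
	}
	w.WriteHeader(resp.StatusCode)

	if _, err := io.Copy(w, resp.Body); err != nil {
		// The status line has already been sent; mid-stream errors can only be logged.
		log.Printf("chat_files %s: stream error for workspace %s: %v", op, workspaceID, err)
	}
}
```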
Same drift-prevention rationale as resolveWorkspaceForwardCreds
(#2372) and readOrLazyHealInboundSecret (#2376), applied to the
response-streaming layer of the same handlers.
Behavior preserved: existing TestChatUpload_* and TestChatDownload_*
integration tests (8 across both handlers) all pass unchanged. The
log message format is consistent across both handlers now (single
"chat_files {op}: ..." string template) — operators can grep one
prefix for both features instead of separate prefixes per handler.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
BaseAdapter docstring tells adapter authors:
> ``runtime_wedge.mark_wedged()`` / ``clear_wedge()`` — flip the
> workspace to ``degraded`` + auto-recover when your SDK hits a
> non-recoverable error class. Import directly from ``runtime_wedge``;
> the heartbeat forwards the state to the platform automatically.
That's a contract — adapter templates depend on the four module-level
functions (``is_wedged``, ``wedge_reason``, ``mark_wedged``,
``clear_wedge``) being importable by those exact names with those
exact signatures. Renaming any silently breaks every adapter that
calls them: the import resolves the module fine, the
``AttributeError`` only surfaces when the adapter actually hits its
first SDK error — long after the rename merges.
Same drift class as #2378 / #2380 / #2381 (BaseAdapter, skill_loader)
applied to the module-level function surface.
Changes:
- tests/_signature_snapshot.py gains build_module_functions_record.
Walks a module's public top-level functions, optionally filtered
to a specific name list (used here — runtime_wedge has internal
helpers like reset_for_test that intentionally aren't part of
the contract). Skips re-exports via __module__ check so a
`from foo import bar` doesn't pollute the snapshot.
- tests/test_runtime_wedge_signature.py snapshots the four
contract functions. Plus a defense-in-depth required-functions
test that catches removal even when source + snapshot are
updated together.
Verified: deliberately renaming `mark_wedged` → `mark_wedged_RENAMED`
trips the gate with full snapshot diff in the failure message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Paired molecule-core change for the molecule-cli `molecule connect`
RFC (https://github.com/Molecule-AI/molecule-cli/issues/10).
After this PR an `external`-runtime workspace's full lifecycle
matches the operator-driven model: it boots in awaiting_agent, the
CLI connects in poll mode without operator-side flag tuning, the
heartbeat-loss path lands back on awaiting_agent (re-registrable)
instead of the terminal-feeling 'offline'.
Two changes in workspace-server:
1) `resolveDeliveryMode` (registry.go) now reads `runtime` alongside
`delivery_mode`. Resolution order:
a. payload.delivery_mode if non-empty (operator override)
b. row's existing delivery_mode if non-empty (preserves prior
registration)
c. **NEW:** "poll" if row.runtime = "external" — external
operators run on laptops without public HTTPS; push-mode
would hard-fail at validateAgentURL anyway. (`molecule connect`
registers without --mode and expects this default.)
d. "push" otherwise (historical default for platform-managed
runtimes — langgraph, hermes, claude-code, etc.)
2) Heartbeat-loss for external workspaces lands them in
`awaiting_agent` instead of `offline`. Two code paths:
- `liveness.go` — Redis TTL expiration. Uses a CASE expression
so the conditional is one UPDATE (no extra round-trip for
non-external runtimes, no TOCTOU between runtime read and
status write) — sketched after this list.
- `healthsweep.go::sweepStaleRemoteWorkspaces` — DB-side
last_heartbeat_at age scan. This sweep is already external-
only by query filter, so the UPDATE just hard-codes the new
status.
The Docker-side `sweepOnlineWorkspaces` keeps `offline` —
recovery there is "restart the container", not "re-register from
the operator's box".
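A sketch of the liveness-side UPDATE from change 2 (function and package
names assumed; the CASE shape and column names follow the text):

```go
package registry

import (
	"context"
	"database/sql"
)

// One UPDATE covers both runtimes — external workspaces fall back to
// awaiting_agent, everything else to offline — with no separate runtime
// read and therefore no TOCTOU window between read and write.
const markHeartbeatLost = `
UPDATE workspaces
   SET status = CASE WHEN runtime = 'external'
                     THEN 'awaiting_agent'
                     ELSE 'offline'
                END
 WHERE id = $1`

func markWorkspaceHeartbeatLost(ctx context.Context, db *sql.DB, workspaceID string) error {
	_, err := db.ExecContext(ctx, markHeartbeatLost, workspaceID)
	return err
}
```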
Why awaiting_agent over offline for external:
- Matches the status the workspace was created in (workspace.go:333).
- The CLI re-registers on every invocation; awaiting_agent → online
is the natural transition. offline is a terminal-feeling status
that implies operator intervention is needed.
- An operator who closed their laptop overnight should see
awaiting_agent in canvas, not 'offline (something is wrong)'.
Test plan:
- Existing: 9 `resolveDeliveryMode` test sites updated to the new
query shape. Sqlmock now reads `delivery_mode, runtime` columns.
- New: TestRegister_ExternalRuntime_DefaultsToPoll asserts the
external→poll branch. TestRegister_NonExternalRuntime_StillDefaultsToPush
guards against the new branch overshooting (langgraph keeps push).
- Liveness: regex updated to match the CASE expression.
- Healthsweep: `TestSweepStaleRemoteWorkspaces_MarksStaleAwaitingAgent`
(renamed for grep-ability), Docker-side sweepOnlineWorkspaces test
unchanged (verified to still match `'offline'`).
- Full handlers + registry suite green under -race (12.873s + 2.264s).
No migration needed — `status` is a free-form text column; both
'offline' and 'awaiting_agent' are existing values used elsewhere
(workspace.go uses awaiting_agent on initial external creation).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two changes in one PR (tightly coupled — the second wouldn't make
sense without the first):
1. Hoist the inspect-based snapshot helpers out of
test_adapter_base_signature.py into tests/_signature_snapshot.py
so future surfaces don't copy-paste introspection logic.
- build_class_signature_record(cls): walks public methods,
unwraps static/class/abstract methods, returns a stable
{class, methods: [...]} dict.
- build_dataclass_record(cls): walks dataclass fields via
dataclasses.fields(), returns {name, frozen, fields: [...]}.
- compare_against_snapshot(actual, path): writes-on-first-run +
diff-on-drift, with both expected and actual JSON in failure
message.
test_adapter_base_signature.py is rewritten to use the helpers;
the existing snapshot file is byte-identical (no behavior change).
2. New gate: tests/test_skill_loader_signature.py covers the
public dataclasses exported from skill_loader/loader.py:
- SkillMetadata: every adapter pattern-matches on .runtime for
skill-compat filtering. Renaming this field would silently
break per-adapter skill loading — the loader still returns
objects, but adapters' `if "*" in skill.metadata.runtime`
raises AttributeError at workspace boot.
- LoadedSkill: returned in SetupResult.loaded_skills.
Includes test_snapshot_has_required_skill_metadata_fields
defense-in-depth: ensures the runtime / id / name / description
fields stay even if both source and snapshot are updated together.
Verified: deliberately renaming SkillMetadata.runtime trips the
gate with full snapshot diff in the failure message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Follows up #2378. The BaseAdapter snapshot covers method signatures
but `adapter_base.py` also exports three public dataclasses that
form the call/return contract between the platform and every
adapter:
- SetupResult — returned by adapter._common_setup()
- AdapterConfig — passed into adapter setup hooks
- RuntimeCapabilities — returned by adapter.capabilities();
drives platform-side dispatch routing (#117)
Renaming a RuntimeCapabilities flag silently disables every
adapter's capability declaration (the platform fallback runs)
without an AttributeError to surface the breakage. That's exactly
the drift class the snapshot pattern is meant to catch.
Changes:
- _build_dataclass_snapshot walks SetupResult, AdapterConfig,
RuntimeCapabilities via dataclasses.fields(), capturing field
name + type annotation + has_default per field, plus the
@dataclass(frozen=...) flag.
- _build_full_snapshot composes method + dataclass records into
one stable JSON snapshot.
- test_snapshot_has_required_dataclass_fields — defense-in-depth
test parallel to test_snapshot_has_required_methods. Catches
field removal even when both source AND snapshot are updated
together. Required field set is intentionally short (the flags
that drive platform dispatch + the adapter-level config knobs).
Verified: deliberately renaming `provides_native_heartbeat` →
`provides_native_heartbeat_RENAMED` trips
test_base_adapter_signature_matches_snapshot with a full diff in
the failure message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The check has been blocking the staging→main auto-promote PR (#2361)
since 2026-04-30T07:17Z with:
fatal: origin/main...<head>: no merge base
Root cause: the workflow does `git fetch origin <base> --depth=1`
which overwrites checkout@v4's full-history clone with a shallow
tip — destroying the ancestry the subsequent
`git diff origin/main...HEAD` (three-dot, merge-base form) needs.
This deadlocks every staging→main promote PR until manually fixed.
The auto-promote runs were succeeding at the gate-check phase but
the subsequent PR-merge step waited 30 min for the failing check
and timed out, skipping the publish + redeploy dispatch tail.
Fleet recovery for any production-only fix went through staging
fine but never reached main.
Fix: drop --depth=1 so the explicit fetch preserves full history.
The leading comment is updated to call out this trap so a future
maintainer doesn't re-add the flag thinking it's a perf win.
No test added: this is a workflow-config one-liner that the
existing PR check itself exercises end-to-end (the real signal is
PR #2361 going green after this lands).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Every workspace template (langgraph, claude-code, hermes, etc.)
subclasses BaseAdapter. Renaming, removing, or re-typing a method
on the base class silently breaks templates: the override stops
being recognized as an override; callers of the old method name
silently invoke the default no-op; the new method name remains
unimplemented in templates that haven't migrated.
Recent #87 universal-runtime + #1957 recordResource refactor both
renamed/added methods. Without a frozen snapshot, the next rename
ships quietly and surfaces only when a template's CI catches the
AttributeError days later — long after the merge window for an
easy revert.
This snapshot pins BaseAdapter's public method surface against a
checked-in JSON file. Same-shape pattern as PR #2363's A2A
protocol-compat replay gate, applied to a Python public-API
surface instead of JSON message shapes. Both close drift classes
by snapshotting the structural surface that consumers depend on.
Two tests:
1. test_base_adapter_signature_matches_snapshot — full
introspection diff against tests/snapshots/adapter_base_signature.json.
Drift = test failure with both expected + actual JSON in
the message so the reviewer sees what changed.
2. test_snapshot_has_required_methods — defense-in-depth: even
if both the source AND snapshot are updated together
(intentional API removal), this catches removal of the
short list of methods that EVERY template depends on (name,
display_name, description, capabilities, memory_filename).
Removing one of these requires explicit edit to the
`required` set with a justification.
Verified the gate fires red on a deliberate rename
(memory_filename → memory_filename_RENAMED) — failure message
shows the full snapshot diff including parameter shapes and
return annotations.
Updating the snapshot is the explicit acknowledgment that a
template-affecting API change is intentional. Reviewer of the
introducing PR sees the snapshot diff and decides whether
template repos need coordinated updates.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The helper landed in #2376 and is exercised via chat_files + registry
integration tests. Those tests conflate the helper's behavior with the
caller's response shape — a future refactor that broke the (secret,
healed, err) contract subtly (e.g. returning healed=true on a
read-success path, or swallowing a mint error) might still pass them.
Adds 4 direct sub-tests pinning each branch of the contract:
- secret already present → (s, false, nil)
- secret missing, mint succeeds → (minted, true, nil)
- secret missing, mint fails → ("", false, err)
- read fails (non-NoInboundSecret) → ("", false, err)
Each sub-case asserts the return tuple shape AND mock.ExpectationsWereMet
(for the success path) so a future helper change that skips a DB op
trips the gate immediately.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The lazy-heal-on-miss pattern landed in two places this session:
PR #2372 (chat_files.go::resolveWorkspaceForwardCreds — Upload + Download)
and PR #2375 (registry.go::Register). Both implementations did the same
thing:
read → if ErrNoInboundSecret then mint inline → return outcome
Different response-shape requirements but the same core mechanic. Three
sites' worth of drift potential: any future heal-time condition we add
(audit log, alert, secret rotation, observability) had to be applied to
each site, with partial application silently re-opening the gap.
Fix: extract readOrLazyHealInboundSecret in workspace_provision_shared.go
returning (secret, healed, err) — sketched below. Each caller maps the
outcome to its response shape:
- chat_files: healed=true → 503 with retry hint; err != nil → 503 with
RFC-#2312 reprovision hint
- registry: healed=true|false + err==nil → include in response;
err != nil → omit field (workspace can retry on next register)
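A sketch of the helper's contract — the SecretStore interface and the mint
method name are illustrative stand-ins; ReadPlatformInboundSecret,
ErrNoInboundSecret, and the (secret, healed, err) tuple come from the text:

```go
package handlers

import (
	"context"
	"errors"
)

// Sentinel returned by ReadPlatformInboundSecret when the column is NULL.
var ErrNoInboundSecret = errors.New("workspace has no platform inbound secret")

// SecretStore stands in for whatever the real helper reads and mints through.
type SecretStore interface {
	ReadPlatformInboundSecret(ctx context.Context, workspaceID string) (string, error)
	MintPlatformInboundSecret(ctx context.Context, workspaceID string) (string, error)
}

func readOrLazyHealInboundSecret(ctx context.Context, store SecretStore, workspaceID string) (secret string, healed bool, err error) {
	secret, err = store.ReadPlatformInboundSecret(ctx, workspaceID)
	switch {
	case err == nil:
		return secret, false, nil // already present — plain read path
	case errors.Is(err, ErrNoInboundSecret):
		minted, mintErr := store.MintPlatformInboundSecret(ctx, workspaceID)
		if mintErr != nil {
			return "", false, mintErr // mint failed — caller decides how to degrade
		}
		return minted, true, nil // lazily healed on this call
	default:
		return "", false, err // read failed for an unrelated reason
	}
}
```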
Net effect:
- Single source of truth for the read+heal mechanic
- Response-shape decisions stay in callers (they DO differ per feature)
- Future heal-time conditions go in one place
- Behavior preserved: existing TestRegister_NoInboundSecret_LazyHeals,
TestRegister_NoInboundSecret_LazyHealMintFailureOmitsField,
TestChatUpload_NoInboundSecret_LazyHeal*,
TestChatDownload_NoInboundSecret_LazyHeal* all pass unchanged
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-fix: a legacy SaaS workspace with NULL platform_inbound_secret
needed two round-trips before chat upload worked:
1. Workspace registers → response missing platform_inbound_secret
2. User attempts chat upload → chat_files lazy-heals platform-side
(RFC #2312 backfill) → 503 + retry-after
3. Workspace heartbeats → register response now includes the
freshly-minted secret → workspace writes /configs/.platform_inbound_secret
4. User retries chat upload → workspace bearer matches → 200
The platform-side lazy-heal in chat_files.go (#2366) closes the
existing-workspace gap, but the user-visible round-trip dance is
still ugly.
Fix: lazy-heal at register time too. When ReadPlatformInboundSecret
returns ErrNoInboundSecret, mint inline and include the freshly-
minted secret in the register response. Collapses the dance to a
single round-trip:
1. Workspace registers → response includes lazy-healed secret
2. User attempts chat upload → workspace bearer matches → 200
Failure model: best-effort. Mint failure logs and falls through to
omitting the field (workspace will retry on next register call).
The 200 response status is preserved — register success doesn't
hinge on the inbound-secret heal.
Tests:
- TestRegister_NoInboundSecret_LazyHeals: pins the success branch.
Mocks the UPDATE explicitly + asserts ExpectationsWereMet, so a
regression that skipped the mint would fail loudly. Replaces
the prior TestRegister_NoInboundSecret_OmitsField which
"passed" on this branch only because sqlmock-unmatched-UPDATE
coincidentally drove the omit-field error path.
- TestRegister_NoInboundSecret_LazyHealMintFailureOmitsField:
pins the failure branch — explicit UPDATE error → 200 + field
absent.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
ValidateToken, WorkspaceFromToken, and ValidateAnyToken each duplicated
the same JOIN+WHERE auth predicate:
    FROM workspace_auth_tokens t
    JOIN workspaces w ON w.id = t.workspace_id
   WHERE t.token_hash = $1
     AND t.revoked_at IS NULL
     AND w.status != 'removed'
Same drift class as the SaaS provision-mint bug fixed in #2366. A
future safety addition (e.g. exclude paused workspaces from auth) had
to be applied to all three queries; a partial application would
silently re-open one auth path while closing the others.
Fix: hoist the predicate into lookupTokenByHash, which projects
(id, workspace_id) — the union of fields any caller needs. Each
public function picks what it uses:
- ValidateToken — needs both (compares workspaceID, updates last_used_at by id)
- WorkspaceFromToken — needs workspace_id
- ValidateAnyToken — needs id
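A sketch of the hoisted lookup — receiver and struct names plus the
ValidateAnyToken wrapper signature are assumptions; the SQL mirrors the
predicate quoted above:

```go
package wsauth

import (
	"context"
	"database/sql"
)

type Store struct {
	db *sql.DB
}

type tokenRow struct {
	ID          string
	WorkspaceID string
}

// lookupTokenByHash is the single source of truth for the auth predicate;
// it projects the union of fields any caller needs.
func (s *Store) lookupTokenByHash(ctx context.Context, tokenHash string) (tokenRow, error) {
	const q = `
SELECT t.id, t.workspace_id
  FROM workspace_auth_tokens t
  JOIN workspaces w ON w.id = t.workspace_id
 WHERE t.token_hash = $1
   AND t.revoked_at IS NULL
   AND w.status != 'removed'`
	var row tokenRow
	err := s.db.QueryRowContext(ctx, q, tokenHash).Scan(&row.ID, &row.WorkspaceID)
	return row, err
}

// ValidateAnyToken, for example, only needs the token id.
func (s *Store) ValidateAnyToken(ctx context.Context, tokenHash string) (string, error) {
	row, err := s.lookupTokenByHash(ctx, tokenHash)
	return row.ID, err
}
```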
The trivial perf cost of selecting one extra column per call is worth
the single-source-of-truth guarantee for the auth predicate.
Test mock updates: two upstream test files (a2a_proxy_test, middleware
wsauth_middleware_test{,_canvasorbearer_test}) had hand-typed regex
matchers and row shapes pinned to the per-function SELECT projection.
Updated to the unified shape; behavior is unchanged.
All wsauth + middleware + handlers + full-module tests green.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>