Platform side (Option B):
- provisioner.go: add WriteAuthTokenToVolume() — writes .auth_token to
the Docker named volume BEFORE ContainerStart using a throwaway alpine
container, eliminating the race window where a restarted container could
read a stale token before WriteFilesToContainer writes the new one.
- workspace_provision.go: call WriteAuthTokenToVolume() in issueAndInjectToken
as a best-effort pre-write before the container starts.
Runtime side (Option A):
- heartbeat.py: on HTTPStatusError 401 from /registry/heartbeat, call
refresh_cache() to force re-read of /configs/.auth_token from disk,
then retry the heartbeat once. Fall through to normal failure tracking
if the retry also fails.
- platform_auth.py: add refresh_cache() which discards the in-process
_cached_token and calls get_token() to re-read from disk.
Together these eliminate the window of >1 consecutive 401s described in
issue #1877. Pre-write (B) is the primary fix; runtime retry (A) is the
self-healing fallback for any residual race.
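A minimal sketch of the pre-write idea, assuming the platform shells out to the docker CLI (the real WriteAuthTokenToVolume may drive the Docker SDK instead; the helper name and /configs mount path are illustrative):

```go
package main

import "fmt"

// buildTokenPreWriteCmd returns the docker-CLI argv for a throwaway alpine
// container that mounts the workspace's named volume at /configs and writes
// whatever arrives on stdin to .auth_token. Feeding the token through stdin
// keeps it out of argv and out of the container's environment.
func buildTokenPreWriteCmd(volume string) []string {
	return []string{
		"docker", "run", "--rm", "-i",
		"-v", volume + ":/configs",
		"alpine", "sh", "-c", "cat > /configs/.auth_token",
	}
}

func main() {
	fmt.Println(buildTokenPreWriteCmd("ws-vol"))
}
```

In provisioner.go this would run via exec.Command with cmd.Stdin set to a reader over the token, strictly before ContainerStart, so a restarted container can never observe a stale token.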
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Pins the invariants of the #1738 fix (merged 2026-04-23) against
regression. Pre-fix, `CPProvisioner.Stop` and `IsRunning` both passed
the workspace UUID as the `instance_id` query param:
url := fmt.Sprintf("%s/cp/workspaces/%s?instance_id=%s",
baseURL, workspaceID, workspaceID)
^ should be the real i-* ID
AWS rejected downstream with InvalidInstanceID.Malformed, orphaned the
EC2, and the next provision hit InvalidGroup.Duplicate on the leftover
SG — full Save & Restart cascade failure.
## Tests added
- **TestStop_UsesRealInstanceIDNotWorkspaceUUID**: stub resolveInstanceID
to return an i-* ID, assert the CP request's instance_id query param
carries that i-* value (not the workspace UUID).
- **TestStop_NoInstanceIDSkipsCPCall**: empty DB lookup → no CP call at
all (idempotent). Guards against re-introducing the "call CP with ''
and let AWS reject" footgun.
- **TestIsRunning_UsesRealInstanceIDNotWorkspaceUUID**: mirror for the
/cp/workspaces/:id/status path — same bug shape.
All 3 pass on current staging (which has the fix). Reverting either
Stop or IsRunning to the pre-#1738 shape makes them fail loudly.
Extends molecule-core#1902's regression suite.
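A sketch of the invariant these tests pin, with hypothetical variable names: the `instance_id` query param must carry the DB-resolved i-* id, while the workspace UUID stays in the path only.

```go
package main

import (
	"fmt"
	"net/url"
)

// buildStopURL shows the post-#1738 shape. Pre-fix, workspaceID was passed
// for BOTH placeholders; now instanceID comes from resolveInstanceID.
func buildStopURL(baseURL, workspaceID, instanceID string) string {
	return fmt.Sprintf("%s/cp/workspaces/%s?instance_id=%s",
		baseURL, workspaceID, url.QueryEscape(instanceID))
}

func main() {
	fmt.Println(buildStopURL("https://cp.example", "ws-uuid", "i-0abc123"))
}
```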
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
On an Apple Silicon dev box, every `POST /workspaces` failed immediately
with:
no matching manifest for linux/arm64/v8 in the manifest list entries:
no match for platform in manifest: not found
because the GHCR workspace-template-* images ship only a linux/amd64
manifest today. `ImagePull` and `ContainerCreate` asked for the daemon's
native arch and missed. The Canvas surfaced this as
docker image "ghcr.io/molecule-ai/workspace-template-autogen:latest"
not found after pull attempt — verify GHCR visibility for autogen
— confusing because the image IS visible, just not for linux/arm64.
### Fix
Add an auto-detect helper `defaultImagePlatform()` in
`internal/provisioner/provisioner.go` that returns `"linux/amd64"` on
Apple Silicon hosts and `""` (no preference) everywhere else, with an
env override `MOLECULE_IMAGE_PLATFORM` for operators who want to pin
or disable explicitly. The result is passed to both `ImagePull`
(`PullOptions.Platform`) and `ContainerCreate` (4th arg
`*ocispec.Platform`) so the pulled amd64 manifest matches the
create-time platform spec. Docker Desktop transparently runs it
under QEMU emulation on M-series Macs — slow (2–5× native) but
functional.
SaaS production (linux/amd64 EC2, `MOLECULE_ENV=production`) never
hits the `runtime.GOARCH == "arm64"` branch, so the current behaviour
on real tenants is byte-for-byte unchanged. Opt-in escape hatch for
operators who want it off:
export MOLECULE_IMAGE_PLATFORM="" # disable auto-force
export MOLECULE_IMAGE_PLATFORM=linux/arm64 # pin alternate
`ocispec` is `github.com/opencontainers/image-spec/specs-go/v1` —
already in go.sum v1.1.1 as a transitive dependency of
`github.com/docker/docker`, not a new import.
### Tests
`internal/provisioner/platform_test.go` exercises every branch:
- `TestDefaultImagePlatform_EnvOverride_ExplicitValue` — env wins
- `TestDefaultImagePlatform_EnvOverride_EmptyValue` — empty string
disables the auto-force (operator escape hatch)
- `TestDefaultImagePlatform_AutoDetect` — linux/amd64 on arm64 Mac,
"" on every other host
- `TestParseOCIPlatform` — 7 table-driven cases covering well-formed
platforms, malformed inputs, and nil handling
### End-to-end verification
Before this commit, `POST /workspaces` on my Apple Silicon box:
workspace status transitioned: provisioning → failed (~1s)
log: image pull for ... failed: no matching manifest for linux/arm64/v8
After this commit, fresh DB + fresh platform:
workspace status transitioned: provisioning → online (~25s)
log: attempting pull (platform=linux/amd64)
pulled ghcr.io/molecule-ai/workspace-template-langgraph:latest
docker ps: ws-7aa08951-00d Up 27 seconds
The existing provisioner race-tested test suite (`go test -race
./internal/provisioner/`) still passes — the platform pointer defaults
to nil on linux/amd64 hosts, so the CI-resolved test expectations
don't change.
Closes #1875 (arm64 image blocker).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Resolves a "Save & Restart cascade" failure on SaaS tenants. Observed
2026-04-22 on hongmingwang workspace a8af9d79 after a Config-tab save:
03:13:20 workspace deprovision: TerminateInstances
InvalidInstanceID.Malformed: a8af9d79-... is malformed
03:13:21 workspace provision: CreateSecurityGroup
InvalidGroup.Duplicate: workspace-a8af9d79-394 already
exists for VPC vpc-09f85513b85d7acee
Root cause: CPProvisioner.Stop and IsRunning passed the workspace UUID
as the `instance_id` query param to CP. CP forwarded it to EC2
TerminateInstances, which rejected it (EC2 ids are i-…, not UUIDs).
The failed terminate left the workspace's SG attached → the immediate
re-provision hit InvalidGroup.Duplicate → user saw `provisioning
failed`.
Fix: both methods now call a new `resolveInstanceID` that reads
`workspaces.instance_id` from the tenant DB and passes the real EC2
id downstream. When no row / no instance_id exists, Stop is a no-op
and IsRunning returns (false, nil) so restart cascades can freshly
re-provision.
resolveInstanceID is exposed as a package-level `var` so tests can
swap in a map-backed stub without standing up sqlmock — the per-table
DB scaffolding would cost more than the surface warrants, since these
tests target the CP HTTP flow downstream of the lookup, not the
lookup SQL itself.
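A sketch of the swappable-var pattern described above (the signature is illustrative; the production body runs the `workspaces.instance_id` SELECT against the tenant DB):

```go
package main

import "fmt"

// resolveInstanceID is a package-level var so tests can replace it.
// Production reads workspaces.instance_id; an empty result means
// "no row / no instance_id", which Stop treats as a no-op and
// IsRunning maps to (false, nil).
var resolveInstanceID = func(workspaceID string) (string, error) {
	// production: SELECT instance_id FROM workspaces WHERE id = $1
	return "", nil
}

func main() {
	// test-style swap: a pairs map stands in for the DB, no sqlmock needed
	orig := resolveInstanceID
	defer func() { resolveInstanceID = orig }()

	pairs := map[string]string{"ws-uuid": "i-0abc123"}
	resolveInstanceID = func(id string) (string, error) { return pairs[id], nil }

	id, _ := resolveInstanceID("ws-uuid")
	fmt.Println(id)
}
```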
Adds regression tests:
- TestStop_EmptyInstanceIDIsNoop: no DB row → no CP call
- TestIsRunning_UsesDBInstanceID: DB id round-trips to CP
- TestIsRunning_EmptyInstanceIDReturnsFalse: no instance → false/nil
Updates existing tests to assert the resolved instance_id (i-abc123
variants) instead of the previous buggy workspaceID.
After this lands, the user's existing workspaces with stale instance_id
bindings still need a manual cleanup of the orphaned EC2 + SG (done
for a8af9d79 today). Future restarts use the correct id.
Co-authored-by: Hongming Wang <hongmingwang.rabbit@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Symptom (prod tenant hongmingwang, 2026-04-22):
cp provisioner: console: unexpected 401
GET /workspaces/:id/console → 502 (View Logs broken)
Root cause: the tenant's CPProvisioner.authHeaders sent the provision-
gate shared secret as the Authorization bearer for every outbound CP
call, including /cp/admin/workspaces/:id/console. But CP gates
/cp/admin/* with CP_ADMIN_API_TOKEN — a distinct secret so a
compromised tenant's provision credentials can't read other tenants'
serial console output. Bearer mismatch → 401.
Fix: split authHeaders into two methods —
- provisionAuthHeaders(): Authorization: Bearer <MOLECULE_CP_SHARED_SECRET>
for /cp/workspaces/* (Start, Stop, IsRunning)
- adminAuthHeaders(): Authorization: Bearer <CP_ADMIN_API_TOKEN>
for /cp/admin/* (GetConsoleOutput and future admin reads)
Both still send X-Molecule-Admin-Token for per-tenant identity. When
CP_ADMIN_API_TOKEN is unset (dev / self-hosted single-secret setups),
cpAdminAPIKey falls back to sharedSecret so nothing regresses.
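A hedged sketch of the split (field and method names are assumptions based on the description above, not the real struct):

```go
package main

import (
	"fmt"
	"net/http"
)

type CPProvisioner struct {
	sharedSecret  string // MOLECULE_CP_SHARED_SECRET, gates /cp/workspaces/*
	cpAdminAPIKey string // CP_ADMIN_API_TOKEN, gates /cp/admin/*
	adminToken    string // per-tenant identity, sent on every call
}

// provisionAuthHeaders authenticates Start/Stop/IsRunning calls.
func (p *CPProvisioner) provisionAuthHeaders() http.Header {
	h := http.Header{}
	if p.sharedSecret != "" {
		h.Set("Authorization", "Bearer "+p.sharedSecret)
	}
	if p.adminToken != "" {
		h.Set("X-Molecule-Admin-Token", p.adminToken)
	}
	return h
}

// adminAuthHeaders authenticates /cp/admin/* reads, falling back to the
// shared secret in single-secret dev / self-hosted setups.
func (p *CPProvisioner) adminAuthHeaders() http.Header {
	key := p.cpAdminAPIKey
	if key == "" {
		key = p.sharedSecret // nothing regresses when the token is unset
	}
	h := http.Header{}
	if key != "" {
		h.Set("Authorization", "Bearer "+key)
	}
	if p.adminToken != "" {
		h.Set("X-Molecule-Admin-Token", p.adminToken)
	}
	return h
}

func main() {
	p := &CPProvisioner{sharedSecret: "prov", cpAdminAPIKey: "admin", adminToken: "tenant"}
	fmt.Println(p.provisionAuthHeaders().Get("Authorization"))
	fmt.Println(p.adminAuthHeaders().Get("Authorization"))
}
```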
Rollout requirement: the tenant EC2 needs CP_ADMIN_API_TOKEN in its
env — this PR wires up the code, but CP's tenant-provision path must
inject the value. Filed as follow-up; until then, operators can set
it manually on existing tenants.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Every standalone workspace-template repo now publishes to
ghcr.io/molecule-ai/workspace-template-<runtime>:latest via the
reusable publish-template-image workflow in molecule-ci (landed
today — one caller per template repo). This PR makes the
provisioner actually use those images:
- RuntimeImages map + DefaultImage switched from bare local tags
(workspace-template:<runtime>) to their GHCR equivalents.
- New ensureImageLocal step before ContainerCreate: if the image
isn't present locally, attempt `docker pull` and drain the
progress stream to completion. Best-effort — if the pull fails
(network, auth, rate limit) the subsequent ContainerCreate still
surfaces the actionable "No such image" error, now with a
GHCR-appropriate hint instead of the defunct
`bash workspace/build-all.sh <runtime>` advice.
- runtimeTagFromImage now handles both forms: legacy
`workspace-template:<runtime>` (local dev via build-all.sh /
rebuild-runtime-images.sh) and the current GHCR shape. Keeps
error hints sensible in both worlds.
- Tests cover the GHCR path for tag extraction and the new error
message shape. Legacy local tags still recognised.
Local dev path unchanged — scripts/build-images.sh and
workspace/rebuild-runtime-images.sh still produce locally-tagged
`workspace-template:<runtime>` images, and Docker's image
resolver matches them before any pull is attempted. So
contributors can keep iterating on a template repo without
round-tripping through GHCR.
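A sketch of the dual-form tag extraction, under the assumption that `runtimeTagFromImage` works roughly like this (the real parsing may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// runtimeTagFromImage extracts the runtime name from both image shapes:
//   legacy local:  workspace-template:<runtime>
//   current GHCR:  ghcr.io/molecule-ai/workspace-template-<runtime>:latest
// Returns "" when neither form matches, so error hints can degrade sanely.
func runtimeTagFromImage(image string) string {
	name, tag := image, ""
	if i := strings.LastIndex(image, ":"); i != -1 {
		name, tag = image[:i], image[i+1:]
	}
	base := name[strings.LastIndex(name, "/")+1:]
	if base == "workspace-template" {
		return tag // legacy: runtime rides in the tag
	}
	const prefix = "workspace-template-"
	if strings.HasPrefix(base, prefix) {
		return strings.TrimPrefix(base, prefix) // GHCR: runtime in the repo name
	}
	return ""
}

func main() {
	fmt.Println(runtimeTagFromImage("workspace-template:hermes"))
	fmt.Println(runtimeTagFromImage("ghcr.io/molecule-ai/workspace-template-autogen:latest"))
}
```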
Follow-on impact:
- hongmingwang.moleculesai.app (and any other tenant EC2) will
auto-pull `ghcr.io/molecule-ai/workspace-template-hermes:latest`
on the next hermes workspace provision — picking up the real
Nous hermes-agent behind the A2A bridge (template-hermes v2.1.0)
without any tenant-side rebuild step.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
PR #1229's sed command had no capture groups but used $1 in the
replacement, committing the literal string "defer func() { _ = \$1 }()"
instead of "defer func() { _ = resp.Body.Close() }()". The result does
not compile — $1 is not a valid Go identifier.
Fixed with: sed -i 's/defer func() { _ = \$1 }()/defer func() { _ = resp.Body.Close() }()/g'
Affected (all on origin/staging):
workspace-server/cmd/server/cp_config.go
workspace-server/internal/handlers/a2a_proxy.go
workspace-server/internal/handlers/github_token.go
workspace-server/internal/handlers/traces.go
workspace-server/internal/handlers/transcript.go
workspace-server/internal/middleware/session_auth.go
workspace-server/internal/provisioner/cp_provisioner.go (3 occurrences)
Closes: #1245
Co-authored-by: Molecule AI Core-BE <core-be@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Issue #1196: golangci-lint errcheck flags bare resp.Body.Close()
calls because Body.Close() can return a non-nil error (e.g. when the
server sent fewer bytes than Content-Length). All occurrences fixed:
defer resp.Body.Close() → defer func() { _ = resp.Body.Close() }()
resp.Body.Close() → _ = resp.Body.Close()
12 files affected across all Go packages — channels, handlers,
middleware, provisioner, artifacts, and cmd. The body is already fully
consumed at each call site, so the error is always safe to discard.
🤖 Generated with [Claude Code](https://claude.ai)
Co-authored-by: Molecule AI Core-BE <core-be@agents.moleculesai.app>
Workspaces stuck in provisioning used to sit in "starting" for 10 minutes
until the sweeper flipped them. The real signal — a runtime crash at
EC2 boot — lands on the serial console within seconds but nothing
listened. These endpoints close the loop.
1. POST /admin/workspaces/:id/bootstrap-failed
The control plane's bootstrap watcher posts here when it spots
"RUNTIME CRASHED" in ec2:GetConsoleOutput. Handler:
- UPDATEs workspaces SET status='failed' only when status was
'provisioning' (idempotent — a raced online/failed stays put)
- Stores the error + log_tail in last_sample_error so the canvas
can render the real stack trace, not a generic "timeout" string
- Broadcasts WORKSPACE_PROVISION_FAILED with source='bootstrap_watcher'
2. GET /workspaces/:id/console
Proxies to CP's new /cp/admin/workspaces/:id/console endpoint so
the tenant platform can surface EC2 serial console output without
holding AWS credentials. CPProvisioner.GetConsoleOutput is the
client; returns 501 in non-CP deployments (docker-compose dev).
Both gated by AdminAuth — CP holds the tenant ADMIN_TOKEN that the
middleware accepts on its tier 2b branch.
Tests cover: happy-path fail, already-transitioned no-op, empty id,
log_tail truncation, and the 501 fallback when no CP is wired.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
IsRunning used an unbounded json.NewDecoder(resp.Body).Decode on
CP status responses. Start already caps its body read at 64 KiB
(cp_provisioner.go:137) to defend against a misconfigured or
compromised CP streaming a huge body and exhausting memory.
IsRunning is called reactively per-request from a2a_proxy and
periodically from healthsweep, so it's a hotter path than Start
and arguably deserves the same defense even more.
Adds TestIsRunning_BoundedBodyRead that serves a body padded past
the cap and asserts the decode still succeeds on the JSON prefix.
Follow-up to code-review Nit-2 on #1073.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
My #1071 made IsRunning return (false, err) on all error paths, but that
breaks a2a_proxy which depends on Docker provisioner's (true, err) contract.
Without this fix, any brief CP outage causes a2a_proxy to mark workspaces
offline and trigger restart cascades across every tenant.
Contract now matches Docker.IsRunning:
transport error → (true, err) — alive, degraded signal
non-2xx response → (true, err) — alive, degraded signal
JSON decode error → (true, err) — alive, degraded signal
2xx state!=running → (false, nil)
2xx state==running → (true, nil)
healthsweep.go is also happy with this — it skips on err regardless.
Adds TestIsRunning_ContractCompat_A2AProxy as regression guard that
asserts each error path explicitly against the a2a_proxy expectations.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pre-existing silent-failure path: IsRunning decoded CP responses
regardless of HTTP status, so a CP 500 → empty body → State="" →
returned (false, nil). The sweeper couldn't distinguish "workspace
stopped" from "CP broken" and would leave a dead row in place.
## Fix
- Non-2xx → wrapped error, does NOT echo the body (CP 5xx bodies may
  contain echoed headers; leaking them into logs would expose the bearer)
- JSON decode error → wrapped error
- Transport error → now wrapped with "cp provisioner: status:"
prefix for easier log grepping
## Tests
+7 cases (5-status table + malformed JSON + existing transport).
IsRunning coverage 100%; overall cp_provisioner at 98%.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes review gap: pre-PR coverage on CPProvisioner was 37%.
After this commit every exported method is exercised:
- NewCPProvisioner 100%
- authHeaders 100%
- Start 91.7% (remainder: json.Marshal error path, unreachable with
  fixed-type request struct)
- Stop 100% (new — header + path + error)
- IsRunning 100% (new — 4-state matrix + auth)
- Close 100% (new — contract no-op)
New cases assert both auth headers (shared secret + admin_token) land
on every outbound request, transport failures surface clear errors
on Start/Stop, and IsRunning doesn't misreport on transport failure.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
controlplane #118 + #130 made /cp/workspaces/* require a per-tenant
admin_token header in addition to the platform-wide shared secret.
Without it, every workspace provision / deprovision / status call
now 401s.
ADMIN_TOKEN is already injected into the tenant container by the
controlplane's Secrets Manager bootstrap, so this is purely a
header-plumbing change — no new config required on the tenant side.
## Change
- CPProvisioner carries adminToken alongside sharedSecret
- New authHeaders method sets BOTH auth headers on every outbound
request (old authHeader deleted — single call site was misleading
once the semantics changed)
- Empty values on either header are no-ops so self-hosted / dev
deployments without a real CP still work
## Tests
Renamed + expanded cp_provisioner_test cases:
- TestAuthHeaders_NoopWhenBothEmpty — self-hosted path
- TestAuthHeaders_SetsBothWhenBothProvided — prod happy path
- TestAuthHeaders_OnlyAdminTokenWhenSecretEmpty — transition window
Full workspace-server suite green.
## Rollout
Next tenant provision will ship an image with this commit merged.
Existing tenants (none in prod right now — hongming was the only
one and was purged earlier today) will auto-update via the 5-min
image-pull cron.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Post-merge audit flagged cp_provisioner.go as the only new file from
the canary/C1 work without test coverage. Fills the gap:
- NewCPProvisioner_RequiresOrgID — self-hosted without MOLECULE_ORG_ID
refuses to construct (avoids silent phone-home to prod CP).
- NewCPProvisioner_FallsBackToProvisionSharedSecret — the operator
ergonomics of using one env-var name on both sides of the wire.
- AuthHeader noop + happy path — bearer only set when secret is set.
- Start_HappyPath — end-to-end POST to stubbed CP, bearer forwarded,
instance_id parsed out of response.
- Start_Non201ReturnsStructuredError — when CP returns structured
{"error":"…"}, that message surfaces to the caller.
- Start_NoStructuredErrorFallsBackToSize — regression gate for the
anti-log-leak change from PR #980: raw upstream body must NOT
appear in the error, only the byte count.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Completes the C1 integration (PR #50 on molecule-controlplane). The CP
now requires Authorization: Bearer <PROVISION_SHARED_SECRET> on all
three /cp/workspaces/* endpoints; without this change the tenant-side
Start/Stop/IsRunning calls would all 401 (or 404 when the CP's routes
refused to mount) and every workspace provision from a SaaS tenant
would silently fail.
Reads MOLECULE_CP_SHARED_SECRET, falling back to PROVISION_SHARED_SECRET
so operators can use one env-var name on both sides of the wire. Empty
value is a no-op: self-hosted deployments with no CP or a CP that
doesn't gate /cp/workspaces/* keep working as before.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two findings from the pre-launch log-scrub audit:
1. handlers/workspace_provision.go:548 logged `token[:8]` — the exact
H1 pattern that panicked on short keys. Even with a length guard,
leaking 8 chars of an auth token into centralized logs shortens the
search space for anyone who gets log-read access. Now logs only
`len(token)` as a liveness signal.
2. provisioner/cp_provisioner.go:101 fell back to logging the raw
control-plane response body when the structured {"error":"..."}
field was absent. If the CP ever echoed request headers (Authorization)
or a portion of user-data back in an error path, the bearer token
would end up in our tenant-instance logs. Now logs the byte count
only; the structured error remains in place for the happy path.
Also caps the read at 64 KiB via io.LimitReader to prevent
log-flood DoS from a compromised upstream.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Remove compiled workspace-server/server binary from git
- Fix .gitignore, .gitattributes, .githooks/pre-commit for renamed dirs
- Fix CI workflow path filters (workspace-template → workspace)
- Replace real EC2 IP and personal slug in test_saas_tenant.sh
- Scrub molecule-controlplane references in docs
- Fix stale workspace-template/ paths in provisioner, handlers, tests
- Clean tracked Python cache files
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>