fix/issue173-shell-docker-push
20 Commits
10e510f50c  chore: drop github-app-auth + swap GHCR→ECR (closes #157, #161)

Two coupled cleanups for the post-2026-05-06 stack:
==== 1) Drop the github-app-auth plugin ====
The plugin injected GITHUB_TOKEN/GH_TOKEN via the App's
installation-access flow (~hourly rotation). Per-agent Gitea
identities replaced this approach after the 2026-05-06 suspension —
workspaces now provision with a per-persona Gitea PAT from .env
instead of an App-rotated token. The plugin code itself lived on
github.com/Molecule-AI/molecule-ai-plugin-github-app-auth which is
also unreachable post-suspension; checking it out at CI build time
was already failing.
Removed:
- workspace-server/cmd/server/main.go: githubappauth import + the
`if os.Getenv("GITHUB_APP_ID") != ""` block that called
BuildRegistry. gh-identity remains as the active mutator.
- workspace-server/Dockerfile + Dockerfile.tenant: COPY of the sibling
repo + injection of the `replace
github.com/Molecule-AI/molecule-ai-plugin-github-app-auth => /plugin`
directive.
- workspace-server/go.mod + go.sum: github-app-auth dep entry
(cleaned up by `go mod tidy`).
- 3 workflows: the actions/checkout steps that fetched the sibling
plugin repo:
- .github/workflows/codeql.yml (Go matrix path)
- .github/workflows/harness-replays.yml
- .github/workflows/publish-workspace-server-image.yml
Verified `go build ./cmd/server` + `go vet ./...` pass post-removal.
==== 2) Swap image publishing from GHCR to ECR ====
The same workflow used to push to ghcr.io/molecule-ai/platform +
platform-tenant. ghcr.io/molecule-ai is gone post-suspension. The
operator's ECR org
(153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/) already
hosts the platform-tenant + workspace-template-* + runner-base images
and is the post-suspension SSOT for container images. This PR aligns
publish-workspace-server-image with that stack.
- env.IMAGE_NAME + env.TENANT_IMAGE_NAME repointed to ECR URL.
- docker/login-action swapped for the
aws-actions/configure-aws-credentials@v4 +
aws-actions/amazon-ecr-login@v2 chain (the standard ECR auth pattern;
uses AWS_ACCESS_KEY_ID/SECRET secrets bound to the molecule-cp IAM
user).
The :staging-<sha> + :staging-latest tag policy is unchanged —
staging-CP's TENANT_IMAGE pin still points at :staging-latest, just
with the new registry prefix.
Refs molecule-core#157, #161; parallel to org-wide CI-green sweep.

f3187ea0c1  fix(workspace-server): default-bind to 127.0.0.1 in dev-mode fail-open

In dev mode (`MOLECULE_ENV=dev|development`, `ADMIN_TOKEN` unset) the AdminAuth chain fails open by design so canvas at :3000 can call workspace-server at :8080 without a bearer token. Combined with the existing wildcard bind on `:8080`, that exposed unauthenticated `POST /workspaces` to any same-LAN peer (S-8 in the audit RFC v1).

Couple the bind narrowness to the same signal that drives the auth fail-open: when `middleware.IsDevModeFailOpen()` returns true, default the listener to `127.0.0.1`. Production (`ADMIN_TOKEN` set) keeps binding to all interfaces — its auth chain is doing the work. Operators who need LAN exposure set `BIND_ADDR=<host>` explicitly.

* `cmd/server/main.go` — `resolveBindHost()` precedence: explicit BIND_ADDR > IsDevModeFailOpen() loopback > "" (all interfaces); sketched below. The startup log line now includes the resolved bind + dev-mode-fail-open state for post-deploy auditing.
* `cmd/server/bind_test.go` — 8 t.Setenv table cases covering precedence, explicit overrides, and dev/prod env words. Mutation-tested: removing the `IsDevModeFailOpen()` branch makes the dev-mode cases fail with "" vs "127.0.0.1".

Refs: molecule-core#7

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
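A minimal sketch of that precedence, assuming the function shape the bullets name (the `platform` import path follows the module path noted in the restructure commit at the bottom of this log; exact signatures may differ):

```go
package main

import (
	"os"

	"platform/internal/middleware"
)

// resolveBindHost picks the listener host. Precedence per the commit:
// explicit BIND_ADDR > dev-mode fail-open loopback > "" (all interfaces).
func resolveBindHost() string {
	if addr := os.Getenv("BIND_ADDR"); addr != "" {
		return addr // operator explicitly opted into LAN exposure
	}
	if middleware.IsDevModeFailOpen() {
		return "127.0.0.1" // auth fails open, so stay on loopback
	}
	return "" // production: the ADMIN_TOKEN-gated auth chain does the work
}
```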

a327d207da  feat(rfc): poll-mode chat upload — phase 3 GC sweep + observability

Phase 3 of the poll-mode chat upload rollout. Stacked atop Phase 2.
The platform's pending_uploads table grows by one row per uploaded
file with no built-in cleanup. Phase 1's hard TTL (expires_at default
24h) makes expired rows un-fetchable but doesn't actually delete them;
Phase 1's ack stamps acked_at but leaves the row in place
indefinitely. Without a sweep the table grows unbounded under normal
traffic.
This PR adds:
- `Storage.Sweep(ctx, ackRetention)` — a single round-trip CTE that
deletes acked rows past their retention window plus unacked rows
past expires_at. Returns `(acked, expired)` deletion counts so
Phase 3 dashboards can spot the stuck-fetch pattern (high expired,
low acked) vs healthy churn.
- `pendinguploads.StartSweeper(ctx, storage, ackRetention)` —
background goroutine that calls Sweep every 5 minutes (default).
Runs once immediately on startup so a platform restart cleans up
any rows that became eligible while we were down (sketched after
the defaults below).
- Prometheus counters `molecule_pending_uploads_swept_total` with
`outcome={acked,expired,error}` labels. Wired into the existing
`/metrics` endpoint.
- Wired from cmd/server/main.go via supervised.RunWithRecover —
one transient panic doesn't take the platform down with it.
Defaults:
- SweepInterval = 5m (matches the dashboard refresh cadence)
- DefaultAckRetention = 1h (gives the workspace at-least-once retry
headroom in case it processed but failed to write the file before
crashing)
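A sketch of that sweeper loop under the stated defaults. `recordSweep` stands in for the `molecule_pending_uploads_swept_total` counter updates, and the `Sweep` return shape is inferred from the description above, so treat the signatures as assumptions:

```go
package pendinguploads

import (
	"context"
	"time"
)

// StartSweeper runs one Sweep immediately (restart catch-up), then one per
// tick until ctx is cancelled.
func StartSweeper(ctx context.Context, st *Storage, ackRetention time.Duration) {
	const interval = 5 * time.Minute // SweepInterval default
	sweep := func() {
		acked, expired, err := st.Sweep(ctx, ackRetention)
		if err != nil {
			recordSweep("error", 1) // hypothetical metric helper
			return
		}
		recordSweep("acked", acked)
		recordSweep("expired", expired)
	}
	sweep() // immediate pass: reclaim rows that became eligible while down
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-t.C:
			sweep()
		}
	}
}
```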
Test coverage: 100% on storage_test.go (extended with sweepSQL pin +
six Sweep test cases including negative-retention clamp + zero-retention
immediate-delete + DB error wrapping) and sweeper_test.go (ticker-driven
+ ctx-cancel + nil-storage + transient-error-doesn't-crash + metric
counter assertions).
Closes the third of four phases tracked on the parent RFC; phase 4 is
the staging E2E test.

7993693cf1  feat(delegations): wire RFC #2829 sweeper + admin routes into platform server

Activates the server-side foundation that PRs #2832, #2836, #2837 shipped without wiring (each PR landed dead code on purpose so the review surface stayed tight).

## What this PR wires up

1. router.go — registers the RFC #2829 PR-4 admin endpoints behind AdminAuth (sketched below):
   GET /admin/delegations[?status=...&limit=N]
   GET /admin/delegations/stats
2. cmd/server/main.go — starts the RFC #2829 PR-3 stuck-task sweeper as a supervised goroutine alongside the existing scheduler + hibernation-monitor + image-auto-refresh:
   go supervised.RunWithRecover(ctx, "delegation-sweeper", delegSweeper.Start)

## What this PR does NOT do

- PR-2's DELEGATION_RESULT_INBOX_PUSH flag stays default-off — the flip happens via env config in a follow-up after staging burn-in.
- PR-5's DELEGATION_SYNC_VIA_INBOX flag stays default-off — same reason. The two flags are independent; either can be flipped in isolation.
- Canvas operator panel UI: this PR exposes the JSON contract; the canvas panel consumes it in a separate canvas PR.

## Coverage

2 new router gate tests in admin_delegations_route_test.go:

- List endpoint requires AdminAuth (unauthenticated → 401)
- Stats endpoint requires AdminAuth (unauthenticated → 401)

The pattern mirrors admin_test_token_route_test.go (the IDOR-fix gate for PR #112). It catches a future router refactor that silently drops AdminAuth — operator dashboard data exposes caller_id, callee_id, and task_preview, none of which should reach unauthenticated callers.

The sweeper boots as a no-op until at least one delegation row exists, so this PR is safe to land before PR-5's agent-side cutover sees production traffic.

Refs RFC #2829.
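A sketch of the router side, assuming the repo's gin router (gin is referenced elsewhere in this log); the handler and middleware parameters are illustrative, while the paths and the AdminAuth gating are from the PR:

```go
package router

import "github.com/gin-gonic/gin"

// registerDelegationAdmin mounts the PR-4 endpoints behind AdminAuth.
func registerDelegationAdmin(r *gin.Engine, adminAuth, list, stats gin.HandlerFunc) {
	admin := r.Group("/admin", adminAuth)
	admin.GET("/delegations", list)        // ?status=...&limit=N filters
	admin.GET("/delegations/stats", stats) // aggregates for the operator panel
}
```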

46731729d4  Memory v2 fixup (critical): wire plugin from main.go (was fully dormant)

Caught during continued review: the entire v2 plugin system shipped
in PRs #2729-#2742 + #2744-#2751 was never actually invoked because
main.go and router.go don't construct the plugin client/resolver or
attach the WithMemoryV2 / WithNamespaceCleanup hooks.
Operators setting MEMORY_PLUGIN_URL=... saw zero behavior change
because nothing read it. Every fixup we shipped (idempotency, verify
mode, expires_at validation, audit JSON, namespace cleanup, O(N)
export, boot E2E) was also dormant for the same reason.
Root cause: when a multi-handler feature lands across many PRs, none
of them are individually responsible for wiring main.go — and the
master-task-tracking issue didn't gate-check that the wiring landed.
Action item: add main.go integration to every multi-handler RFC
checklist.
What ships:
* internal/memory/wiring/wiring.go: new package that constructs the
plugin client + resolver from MEMORY_PLUGIN_URL once. Returns nil
when unset (preserves zero-config legacy behavior). Probes
/v1/health at boot but doesn't fail-closed — the MCP layer's
circuit breaker handles ongoing unavailability.
* internal/memory/wiring/wiring_test.go: 6 tests covering the
nil/non-nil bundle paths + the namespace-cleanup closure
contract (nil-safe, format-stable, failure-tolerant).
* cmd/server/main.go: imports memwiring, calls Build(db.DB) once
after WorkspaceHandler creation, attaches WithNamespaceCleanup,
threads the bundle through router.Setup.
* internal/router/router.go: Setup signature gains *memwiring.Bundle
param. Inside, attaches WithMemoryV2 to AdminMemoriesHandler and
MCPHandler when the bundle is non-nil.
After this, the v2 plugin is reachable end-to-end:
Operator sets MEMORY_PLUGIN_URL → main.Build instantiates client +
resolver → WorkspaceHandler gets cleanup hook → router wires
AdminMemoriesHandler + MCPHandler with WithMemoryV2 → MCP tool
calls (commit_memory_v2, search_memory, etc.) actually do
something → admin export/import respects MEMORY_V2_CUTOVER.
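A sketch of the main.go step in that chain; the `memwiring` import path and identifiers follow the commit text, but the exact signatures are assumptions:

```go
import (
	"database/sql"

	"platform/internal/handlers"
	memwiring "platform/internal/memory/wiring"
)

// wireMemoryV2 sketches the boot-time integration: build the plugin bundle
// once, attach the namespace-cleanup hook, and hand the bundle to the router.
func wireMemoryV2(db *sql.DB, wh *handlers.WorkspaceHandler) *memwiring.Bundle {
	bundle := memwiring.Build(db) // nil when MEMORY_PLUGIN_URL is unset
	if bundle != nil {
		wh.WithNamespaceCleanup(bundle.NamespaceCleanup)
	}
	// threaded into router.Setup(..., bundle), which attaches WithMemoryV2
	// to AdminMemoriesHandler + MCPHandler when the bundle is non-nil
	return bundle
}
```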
Prerequisite for #292 (staging verification) — without this, the
operator runbook's step 2 (set MEMORY_PLUGIN_URL, observe behavior)
silently no-ops.
Verified: all 9 affected test packages still green
(memory/{client,contract,e2e,namespace,pgplugin,wiring}, handlers,
router, plus the build).

18edf88d59  fix(sweeper): honour template-manifest provision_timeout_seconds

Real wiring gap discovered while investigating the issue #2486 cluster of prod claude-code workspaces that failed at exactly 10m. The runtimeProvisionTimeoutsCache (#2054 phase 2) reads runtime_config.provision_timeout_seconds from each template's config.yaml so the **canvas** spinner respects per-template timeouts — but the **sweeper** in registry/provisiontimeout.go hardcoded 10 min (claude-code) / 30 min (hermes) and never consulted the manifest. So a template that declared a longer window had a UI that waited correctly but a sweeper that killed the row at the hardcoded floor anyway.

Resolution order (sketched below), pinned by the new TestProvisioningTimeout_ManifestOverride:

1. PROVISION_TIMEOUT_SECONDS env (ops-debug global override)
2. Template manifest lookup (per-runtime, beats the hermes default too)
3. Hermes default (30 min — CP bootstrap-watcher 25 min + 5 min slack)
4. DefaultProvisioningTimeout (10 min)

Wiring:
- registry: new RuntimeTimeoutLookup function type, threaded through StartProvisioningTimeoutSweep + sweepStuckProvisioning + the pre-existing provisioningTimeoutFor.
- handlers: ProvisionTimeoutSecondsForRuntime exposes the cache's lookup as a method so main.go can pass it without breaking the handlers→registry import direction.
- cmd/server/main.go: wire wh.ProvisionTimeoutSecondsForRuntime into the sweep boot.

Verified:
- go test -race ./... passes (every workspace-server package).
- Regression-injected the lookup arm: 3 manifest-override subcases fail with the actual-vs-expected gap, confirming the new test is load-bearing.
- The original two timeout tests (env-override, hermes default) keep passing — a `lookup=nil` argument preserves their semantics.

Operator action enabled: a template that wants a 15-min window can now just set `runtime_config.provision_timeout_seconds: 900` in its config.yaml and the sweeper honours it on the next workspace-server restart.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
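A self-contained sketch of that resolution order; `RuntimeTimeoutLookup` is the type this PR introduces, while the `(value, ok)` shape and the surrounding helper wiring are assumptions:

```go
package registry

import (
	"os"
	"strconv"
	"time"
)

// RuntimeTimeoutLookup resolves a per-runtime manifest timeout, if declared.
type RuntimeTimeoutLookup func(runtime string) (time.Duration, bool)

// provisioningTimeoutFor resolves the window in the pinned order above.
func provisioningTimeoutFor(runtime string, lookup RuntimeTimeoutLookup) time.Duration {
	if s := os.Getenv("PROVISION_TIMEOUT_SECONDS"); s != "" {
		if n, err := strconv.Atoi(s); err == nil && n > 0 {
			return time.Duration(n) * time.Second // 1) ops-debug global override
		}
	}
	if lookup != nil {
		if d, ok := lookup(runtime); ok {
			return d // 2) template manifest, beats runtime defaults
		}
	}
	if runtime == "hermes" {
		return 30 * time.Minute // 3) CP bootstrap-watcher 25m + 5m slack
	}
	return 10 * time.Minute // 4) DefaultProvisioningTimeout
}
```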

235aca9908  fix(boot): always start health-sweep goroutine — SaaS tenants need it for external-runtime liveness

Pre-fix, cmd/server/main.go gated the entire health-sweep goroutine on `prov != nil`. On SaaS tenants (`MOLECULE_ORG_ID` set) the local Docker provisioner is never initialized — only `cpProv`. So the goroutine never started, and `sweepStaleRemoteWorkspaces` (which transitions runtime='external' workspaces from 'online' to 'awaiting_agent' when their last_heartbeat_at goes stale) never ran.

Net effect in production: every external-runtime workspace on SaaS that lost its agent stayed 'online' indefinitely instead of falling back to 'awaiting_agent' (re-registrable). The drift gate (#2388) caught the migration side and #2382 fixed the SQL writes, but this orchestration-side gate slipped through both because there was no SaaS-mode E2E coverage of the heartbeat-loss → awaiting_agent transition.

Caught by #2392 (live staging external-runtime regression E2E) failing at step 6 — 180s with no heartbeat, expected status=awaiting_agent, got online.

Fix: drop the `if prov != nil` gate. `StartHealthSweep` already handles a nil checker correctly (healthsweep.go:50-71): the Docker sweep is gated inside the loop, the remote sweep always runs. Test coverage already exists at TestStartHealthSweep_NilCheckerRunsRemoteSweep.

After this lands and tenants redeploy, #2392 step 6 passes and the regression coverage closes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

c0a5d842b4  feat(runtime): native_scheduler skip — primitive #3 of 6

When an adapter declares provides_native_scheduler=True (because its SDK has built-in cron / Temporal-style workflows), the platform's polling loop must skip firing schedules for that workspace — otherwise the schedule fires twice (once natively, once via platform). The native skip preserves observability (next_run_at still advances, the schedule row stays in the DB, last_run_at would still update) while moving the FIRE responsibility to the SDK.

Stacked on PR #2139 (idle_timeout_override end-to-end). The RuntimeMetadata heartbeat block already carries the capability map; this PR teaches the platform how to read and act on the scheduler bit.

Components:
- handlers/runtime_overrides.go: extended the cache to store capability flags alongside the idle timeout. The two heartbeat fields are independent — SetIdleTimeout / SetCapabilities each update one without stomping the other. Defensive copy on SetCapabilities so a caller mutating its map after the call doesn't retroactively change cached declarations. Empty entries are dropped to avoid stale husks.
- handlers/runtime_overrides.go: new HasCapability(workspaceID, name) + ProvidesNativeScheduler(workspaceID) — the latter is the package-level adapter the scheduler imports (avoids a handlers/scheduler import cycle).
- handlers/registry.go: the heartbeat handler now calls SetCapabilities in addition to SetIdleTimeout.
- scheduler/scheduler.go: NativeSchedulerCheck function-pointer DI (mirrors the existing QueueDrainFunc pattern). New() leaves the field nil so existing callers preserve today's "always fire" behavior. SetNativeSchedulerCheck wires production. tick() drops workspaces declaring native ownership before goroutine fan-out and advances next_run_at so we don't tight-loop on the same row (sketched below).
- cmd/server/main.go: wires handlers.ProvidesNativeScheduler into the cron scheduler at server boot.

Tests:

Go (7 new):
- SetCapabilitiesAndHas (round-trip)
- per-workspace isolation (ws-a's declaration doesn't leak to ws-b)
- nil/empty map clears (adapter dropping the flag restores fallback)
- SetCapabilities is a defensive copy (caller mutation can't retroactively flip the cached value)
- SetIdleTimeout preserves capabilities and vice versa (two-field independence)
- empty entry deleted (no stale husks)
- ProvidesNativeScheduler reads the same singleton the heartbeat writes
- SetNativeSchedulerCheck wires the function (scheduler-side)
- nil-check safety contract for tick

Python: no change needed — the heartbeat already serializes the full capability map via _runtime_metadata_payload (PR #2139). An adapter setting RuntimeCapabilities(provides_native_scheduler=True) automatically flows through.

Verification:
- 1308 / 1308 Python pytest pass (unchanged)
- All Go handlers + scheduler tests pass
- go build + go vet clean

See project memory `project_runtime_native_pluggable.md`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
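A sketch of the scheduler-side skip; the DI type, nil semantics, and next_run_at advance are from the commit, while the surrounding `Scheduler` fields and helpers are illustrative:

```go
// NativeSchedulerCheck reports whether a workspace's SDK owns schedule firing.
// A nil check preserves today's "always fire" behavior.
type NativeSchedulerCheck func(workspaceID string) bool

func (s *Scheduler) SetNativeSchedulerCheck(fn NativeSchedulerCheck) {
	s.nativeCheck = fn
}

// tick() excerpt: drop native-owned workspaces before goroutine fan-out,
// but still advance next_run_at so we don't tight-loop on the same row.
func (s *Scheduler) dispatch(due []Schedule) {
	for _, sc := range due {
		if s.nativeCheck != nil && s.nativeCheck(sc.WorkspaceID) {
			s.advanceNextRun(sc) // observability preserved; the SDK owns the FIRE
			continue
		}
		go s.fire(sc)
	}
}
```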

9375e3d4ee  feat(workspace-server): GHCR digest watcher closes runtime CD chain (#2114)

Adds an opt-in goroutine that polls GHCR every 5 minutes for digest changes on each workspace-template-*:latest tag and invokes the same refresh logic /admin/workspace-images/refresh exposes. With this, the chain from "merge runtime PR" to "containers running new code" is fully hands-off — no operator step between auto-tag → publish-runtime → cascade → template image rebuild → host pull + recreate.

Opt-in via IMAGE_AUTO_REFRESH=true. SaaS deploys whose pipeline already pulls every release should leave it off (it would be redundant work); self-hosters get true zero-touch.

Why a refactor of admin_workspace_images.go is in this PR: the HTTP handler held all the refresh logic inline. To share it with the new watcher without an HTTP loopback, extracted WorkspaceImageService with a Refresh(ctx, runtimes, recreate) (RefreshResult, error) shape. The HTTP handler is now a thin wrapper; behavior is preserved (same JSON response, same 500-on-list-failure, same per-runtime soft-fail).

Watcher design notes (see the sketch below):
- Last-observed digest is tracked in memory (not persisted). On boot the first observation per runtime is seed-only — no spurious refresh fires on every restart.
- On Refresh error, the seen digest rolls back so the next tick retries. Without this rollback a transient Docker glitch would convince the watcher the work was done.
- Per-runtime fetch errors don't block other runtimes (one template's brief 500 doesn't pause the others).
- The digestFetcher injection seam in tick() lets unit tests cover all bookkeeping branches without standing up an httptest GHCR server.

Verified live: probed GHCR's /token + manifest HEAD against workspace-template-claude-code; got HTTP 200 + a real Docker-Content-Digest. Same calls the watcher makes.

Co-authored-by: Hongming Wang <hongmingwangalt@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
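A sketch of that bookkeeping; `digestFetcher` is the seam the commit names, and the struct fields around it are illustrative:

```go
// tick: seed-only first observation, rollback on failed refresh.
func (w *imageWatcher) tick(ctx context.Context) {
	for _, rt := range w.runtimes {
		digest, err := w.digestFetcher(ctx, rt)
		if err != nil {
			continue // one template's brief 500 doesn't pause the others
		}
		last, seeded := w.seen[rt]
		w.seen[rt] = digest
		if !seeded || last == digest {
			continue // boot seed or no change: nothing to refresh
		}
		if _, err := w.svc.Refresh(ctx, []string{rt}, true); err != nil {
			w.seen[rt] = last // roll back so the next tick retries
		}
	}
}
```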

12c4918318  fix(platform): stop leaking workspace containers on delete

Symptom: deleting workspaces from the canvas marked DB rows
status='removed' but left Docker containers running indefinitely.
After a session of org imports + cancellations, we counted 10
running ws-* containers all backed by 'removed' DB rows, eating
~1100% CPU on the Docker VM.
Two compounding bugs in handlers/workspace_crud.go's delete cascade:
1. The cleanup loop used `c.Request.Context()` for the Docker
stop/remove calls. When the canvas's `api.del` resolved on the
platform's 200, gin cancelled the request ctx — and any in-flight
Docker call cancelled with `context canceled`, leaving the
container alive. Old logs:
"Delete descendant <id> volume removal warning:
... context canceled"
2. `provisioner.Stop`'s error return was discarded and `RemoveVolume`
ran unconditionally afterward. When Stop didn't actually kill the
container (transient daemon error, ctx cancellation as in #1), the
volume removal would predictably fail with "volume in use" and
the container kept running with the volume mounted. Old logs:
"Delete descendant <id> volume removal warning:
Error response from daemon: remove ... volume is in use"
Fix layered in two parts:
- workspace_crud.go: detach cleanup with `context.WithoutCancel(ctx)`
+ a 30s bounded timeout (sketched after this list). Stop's error is
now checked and on failure we skip RemoveVolume entirely (the orphan
sweeper below catches what we deferred).
- New registry/orphan_sweeper.go: periodic reconcile pass (every 60s,
initial run on boot). Lists running ws-* containers via Docker name
filter, intersects with DB rows where status='removed', stops +
removes volumes for the leaks. Defence in depth — even a brand-new
Stop failure mode heals on the next sweep instead of leaking
forever.
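A sketch of the detached-cleanup shape from the first bullet; `context.WithoutCancel` is the Go 1.21+ primitive the fix uses, and the identifiers around the two provisioner calls are illustrative:

```go
// Delete-cascade excerpt: cleanup survives the request's cancellation but
// stays bounded to 30s.
cleanupCtx, cancel := context.WithTimeout(
	context.WithoutCancel(c.Request.Context()), 30*time.Second)
defer cancel()

if err := prov.Stop(cleanupCtx, containerID); err != nil {
	// Skip RemoveVolume: a live container holds the volume, and the orphan
	// sweeper reconciles the leak on its next 60s pass.
	log.Printf("delete %s: stop failed, deferring to orphan sweeper: %v", wsID, err)
	return
}
if err := prov.RemoveVolume(cleanupCtx, volumeName); err != nil {
	log.Printf("delete %s: volume removal warning: %v", wsID, err) // non-fatal
}
```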
Provisioner gains a tiny ListWorkspaceContainerIDPrefixes helper
that wraps ContainerList with the `name=ws-` filter; the sweeper
takes an OrphanReaper interface (matches the ContainerChecker
pattern in healthsweep.go) so unit tests don't need a real Docker
daemon.
main.go wires the sweeper alongside the existing liveness +
health-sweep + provisioning-timeout monitors, all under
supervised.RunWithRecover so a panic restarts the goroutine.
6 new sweeper tests cover the reconcile path, the
no-running-containers short-circuit, the daemon-error skip, the
Stop-failure-leaves-volume invariant (the same trap that motivated
this fix), the volume-remove-error-is-non-fatal continuation,
and the nil-reaper no-op.
Verified: full Go test suite passes; manually purged the 10 leaked
containers + their orphan volumes from the dev host with `docker
rm -f` + `docker volume rm` (one-off cleanup; the sweeper would
have caught them on the next cycle once deployed).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

ee429cfee7  fix(canvas,dotenv): review-driven hardening of fit gate + parser parity

Independent code review surfaced two required documentation fixes and one growth-correctness gap. All addressed here.

Auto-fit gate (useCanvasViewport): the previous "subtree-grew-by-count" check missed the delete-then-add case: subtree of 6 → delete one → 5 → a different child arrives → 6 again. A length-only comparison reads no growth and the fit is skipped, leaving the new node off-screen. Switched to an id-set membership snapshot so any brand-new id forces the fit even when the count is unchanged. The gate logic is now extracted as a pure exported function `shouldFitGrowing(currentIds, prevIds, userPannedAt, lastAutoFitAt)` so the regression-prone decision can be unit-tested in isolation without standing up React Flow + DOM event refs. 8 cases cover: first-fit, empty-prior, brand-new id, status-update with user pan, no-pan-ever, pan-before-last-fit, delete-then-add same length, and shrink-only with user pan.

Parser parity (dotenv.go + next.config.ts): existing-env semantics were undocumented in both parsers. Both now explicitly note that an explicitly-set empty string (`KEY=` from the parent shell) counts as "set" — the file value does NOT backfill — matching the Go (os.LookupEnv) and Node (`process.env[k] !== undefined`) primitives. The `export ` prefix uses a literal space; `export\tFOO=bar` is intentionally rejected. Added the same comment in both parsers to lock in this parity invariant, since the commit message claims "if one parser changes, the other has to."

Skipped (per analysis):
- Drag-pan respect for left-click drag-pan during deploy. The growth-check safety net means any pan gets overridden on the next arrival anyway, which is the desired behavior for the "watch the org deploy" use case. After deploy completes, no more fit-deploying-org events fire so drag-pan works freely.
- Map cleanup for lastFitSubtreeIdsRef. Per-tab session, UUID keys, tiny entries — not worth the cleanup hook.

993 canvas tests pass (8 new); Go dotenv tests pass; tsc clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

4014513b94  fix(dotenv): empty value with inline comment was returning the comment

The repo's own .env contains lines like
CONFIGS_DIR= # Path to workspace-configs-templates/...
where the value is empty + an inline comment. The pre-fix parser:
1. v = " # Path to ..."
2. TrimLeft → "# Path to ..."
3. Inline-comment loop looked for " #" or "\t#" — neither matches
because the leading whitespace is gone.
4. Returned the comment text as the value.
Result: os.Setenv("CONFIGS_DIR", "# Path to ...") clobbered the
auto-discovery fallback. The TemplatesHandler then opened the comment as
a directory, ReadDir errored silently, and GET /templates returned
[]. Canvas's Templates panel showed "No templates found in
workspace-configs-templates/" even though 8 valid templates existed
on disk.
Fix: strip leading whitespace from the value FIRST, then run a
position-aware comment scan that treats `#` as a comment marker iff
it's at the start of the (trimmed) value or preceded by whitespace.
A bare `#` mid-value (e.g. `KEY=token#fragment`) still survives.
Quoted-value handling moved above the comment scan so
`KEY="value # not"` keeps the `#` as part of the value — pulled the
quote-detection into the same TrimLeft-then-check shape as the bare
path. The unterminated-quote case still falls through to bare-value
handling.
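A self-contained sketch of that scan (the real parser also handles quoted values before this step, per the paragraph above):

```go
package dotenv

import "strings"

// stripInlineComment trims leading whitespace FIRST, then treats '#' as a
// comment marker only at the start of the trimmed value or when preceded
// by whitespace. A bare '#' mid-value (KEY=token#fragment) survives.
func stripInlineComment(v string) string {
	v = strings.TrimLeft(v, " \t")
	for i, r := range v {
		if r == '#' && (i == 0 || v[i-1] == ' ' || v[i-1] == '\t') {
			return strings.TrimRight(v[:i], " \t")
		}
	}
	return v
}
```

Checking it against the line that broke: `CONFIGS_DIR= # Path ...` trims to `# Path ...`, the marker matches at position 0, and the value comes back empty instead of the comment text.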
Three regression tests added covering the exact .env line that
broke (`CONFIGS_DIR= # ...`), spaces-only with comment, and
tab-only with comment.
Verified end-to-end: GET /templates now returns all 8 templates.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

9a223afba1  fix(dotenv,socket): review-driven hardening of .env loader + WS poll

Independent code review surfaced three required fixes and one cheap optional one. All addressed here.

dotenv parser:
- `export FOO=bar` was parsed as key `"export FOO"` (with an embedded space) and silently os.Setenv'd, so a developer pasting from a direnv `.envrc` would get junk vars. Now strips the prefix.
- Quoted values weren't unwrapped: `FOO="hello world"` produced the value `"hello world"` with literal quotes. Now strips one matched pair of surrounding `"` or `'`. Inside a quoted value `#` is part of the value, not a comment marker (matches godotenv convention).
- A UTF-8 BOM at file start (Windows editors) would have produced a first key like U+FEFF + "FOO". Now stripped via TrimPrefix.

dotenv loader:
- findDotEnv()'s upward walk would happily pick up `~/.env` or a sibling-repo `.env` if the binary was run from `~/Documents/other-project/`. A real foot-gun on shared dev boxes. Now gated on a monorepo sentinel: the candidate directory must contain `workspace-server/go.mod`. Falls through to "no .env found" (= pre-fix behavior) when the sentinel is absent.

socket fallback poll:
- startFallbackPoll() previously fired only on onclose, so the very first connect attempt — when onclose hasn't fired yet because we never had a successful onopen — left the canvas with no HTTP poll for the duration of the failing handshake (Chrome can hold a SYN-SENT WebSocket open ~75s before giving up). Now also called at the top of connect(); the timer-already-running guard makes it a no-op when one cycle later onclose calls it again.

Test coverage added: export prefix, single+double quoted values, hash inside quotes preserved, unterminated quote falls back to bare value, CRLF stripping locked in, BOM stripping, and a sentinel-rejection regression test that creates a temp .env with no workspace-server sibling and asserts findDotEnv refuses to load it.

Verified: 985 canvas tests + 30 dotenv subtests + 4 dotenv integration tests all pass; tsc clean; rebuilt platform from monorepo root with stripped env still loads .env (49 vars) and /workspaces returns 200.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

f8c900909e  fix(platform): auto-load .env from CWD on startup

Local dev runs (`/tmp/molecule-server` after `go build`) used to 401 on
/workspaces the moment the DB had any workspace token in it: the binary
inherited a bare shell env with no MOLECULE_ENV, so AdminAuth's dev
fail-open branch (gated on MOLECULE_ENV=development) didn't fire.
The repo's .env already has MOLECULE_ENV=development plus DATABASE_URL,
REDIS_URL, ADMIN_TOKEN=, etc. Until now you had to run
`set -a && source .env` in the launching shell — a paper cut, but
worse, it's a paper cut in EVERY automated dev workflow (IDE run
configs, integration test harnesses, the smoke-test loop in this
branch's manual testing).
Fix: cmd/server now walks upward from CWD looking for a .env (capped
at 6 levels) and merges KEY=VALUE pairs into os.Environ before any
other code reads env. Already-set vars win over file values, so
docker run -e / CI exports / `KEY=val ./binary` still dominate — only
unset keys get filled in.
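A sketch of the loader just described; `parseDotEnv` is a hypothetical stand-in for the 30-line parser, while the 6-level cap and the existing-env-wins rule are from the commit:

```go
package main

import (
	"os"
	"path/filepath"
)

// loadDotEnv walks upward from CWD (capped at 6 levels), loads the first
// .env found, and fills in only keys the environment doesn't already set.
func loadDotEnv() {
	dir, err := os.Getwd()
	if err != nil {
		return
	}
	for i := 0; i < 6; i++ {
		path := filepath.Join(dir, ".env")
		if _, err := os.Stat(path); err == nil {
			for k, v := range parseDotEnv(path) { // parseDotEnv: hypothetical helper
				if _, set := os.LookupEnv(k); !set {
					_ = os.Setenv(k, v) // already-set vars win over file values
				}
			}
			return
		}
		parent := filepath.Dir(dir)
		if parent == dir {
			return // hit the filesystem root before the cap
		}
		dir = parent
	}
}
```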
Why no godotenv dep: the format we use is plain KEY=VALUE with `#`
comments, no interpolation, no quoting (verified against the live
.env: 49 kv lines, zero references to ${...} or `export`). A 30-line
parser is auditable and avoids supply-chain surface.
Why it's safe in production: Dockerfile doesn't COPY .env into the
image and .env is gitignored, so prod containers have no .env on
disk to load — the function's findDotEnv() loop finds nothing and
returns silently. If an operator deliberately drops one in, the
existing-env-wins rule means container-injected env still dominates.
Verified by booting `env -i HOME=$HOME PATH=$PATH /tmp/molecule-server`
from the repo root with a stripped env: log shows
".env: /Users/.../molecule-core/.env — loaded 49, 0 already set" and
/workspaces returns 200 instead of 401.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

03e913db75  feat(#1957): wire gh-identity plugin into workspace-server

Ships the monorepo side of molecule-core#1957 (agent identity collapse). Companion to molecule-ai-plugin-gh-identity (new repo, merged and tagged separately).

Changes:
- manifest.json: add the gh-identity plugin to the Tier 1 registry
- workspace-server/go.mod: require github.com/Molecule-AI/molecule-ai-plugin-gh-identity
- cmd/server/main.go: build a shared provisionhook.Registry, register gh-identity first (always), then github-app-auth (gated on GITHUB_APP_ID) — see the sketch at the end of this message
- workspace_provision.go: propagate workspace.Role into env["MOLECULE_AGENT_ROLE"] before calling the mutator chain, so the gh-identity plugin can see which agent is booting
- provisionhook/mutator.go: add a Registry.Mutators() accessor so individual-plugin registries can be merged onto a shared one at boot

The boot log gains a line like:

env-mutator chain: [gh-identity github-app-auth]

Effect per workspace:
- env contains MOLECULE_AGENT_ROLE, MOLECULE_OWNER, MOLECULE_ATTRIBUTION_BADGE, MOLECULE_GH_WRAPPER_B64, MOLECULE_GH_WRAPPER_SHA
- Each workspace template's install.sh can decode + install the wrapper at /usr/local/bin/gh, intercepting @me assignment and prepending agent attribution on PR/issue creates

Does not break existing workspaces — absent workspace.role, the plugin is a no-op. Absent install.sh updates in each template, the env vars are simply unused. Follow-up template PRs (hermes, claude-code, langgraph, etc.) each add ~15 lines to install.sh to decode + install the wrapper.

Ref: #1957

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
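A sketch of the main.go registry assembly from the Changes list; `Mutators()` and the GITHUB_APP_ID gate are from this PR, while the plugin package aliases and the `Register`/`Names` helpers are illustrative:

```go
// Boot-time assembly: gh-identity always on, github-app-auth gated.
reg := provisionhook.NewRegistry()
for _, m := range ghidentity.Registry().Mutators() {
	reg.Register(m) // gh-identity first: applies to every workspace
}
if os.Getenv("GITHUB_APP_ID") != "" {
	for _, m := range githubappauth.Registry().Mutators() {
		reg.Register(m) // github-app-auth stays gated on App config
	}
}
log.Printf("env-mutator chain: %v", reg.Names()) // the boot log line above
```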

732f65e8e1  fix(go): replace $1 literal with resp.Body.Close() in 7 files (#1247)

PR #1229's sed command had no capture groups but used $1 in the replacement, committing the literal string "defer func() { _ = \$1 }()" instead of "defer func() { _ = resp.Body.Close() }()". The result does not compile — $1 is not a valid Go identifier.

Fixed with:

sed -i 's/defer func() { _ = \$1 }()/defer func() { _ = resp.Body.Close() }()/g'

Affected (all on origin/staging):
- workspace-server/cmd/server/cp_config.go
- workspace-server/internal/handlers/a2a_proxy.go
- workspace-server/internal/handlers/github_token.go
- workspace-server/internal/handlers/traces.go
- workspace-server/internal/handlers/transcript.go
- workspace-server/internal/middleware/session_auth.go
- workspace-server/internal/provisioner/cp_provisioner.go (3 occurrences)

Closes: #1245

Co-authored-by: Molecule AI Core-BE <core-be@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

2575960805  fix(errcheck): suppress unchecked resp.Body.Close() across workspace-server (#1229)

Issue #1196: golangci-lint errcheck flags bare resp.Body.Close() calls because Body.Close() can return a non-nil error (e.g. when the server sent fewer bytes than Content-Length). All occurrences fixed:

defer resp.Body.Close()  →  defer func() { _ = resp.Body.Close() }()
resp.Body.Close()        →  _ = resp.Body.Close()

12 files affected across all Go packages — channels, handlers, middleware, provisioner, artifacts, and cmd. The body is already fully consumed at each call site, so the error is always safe to discard.

🤖 Generated with [Claude Code](https://claude.ai)

Co-authored-by: Molecule AI Core-BE <core-be@agents.moleculesai.app>

c3f7447e86  fix: harden stuck-provisioning UX — details crash, preflight, sweeper

Workspaces stuck in status='provisioning' previously surfaced in three
bad ways:
1. **Details tab crashed** with `Cannot read properties of undefined
(reading 'toLocaleString')`. `BudgetSection` + `WorkspaceUsage`
assumed full response shapes but a provisioning-stuck workspace
returns partial `{}`. Guard each deep field with `?? 0` and cover
the partial-response case with regression tests.
2. **Missing required env vars failed silently** 15+ minutes later as
a cosmetic "Provisioning Timeout" banner. The in-container preflight
catches them but by then the container has already crashed without
calling /registry/register, so the workspace sat in 'provisioning'
forever. Mirror the preflight server-side: parse config.yaml's
`runtime_config.required_env` before launch and fail fast with a
WORKSPACE_PROVISION_FAILED event naming the missing vars (sketched
after this list).
3. **No backend timeout** ever flipped a stuck workspace to 'failed'.
Add a registry sweeper (10m default, env-overridable) that detects
workspaces stuck past the window, flips them to 'failed', and emits
WORKSPACE_PROVISION_TIMEOUT. Race-safe: the UPDATE re-checks the
status + age predicate so a concurrent register/restart wins.
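A sketch of the server-side preflight from item 2; the manifest key and the event name are from the commit, while the config struct and error text are illustrative:

```go
package registry

import (
	"fmt"
	"strings"
)

type runtimeConfig struct {
	RequiredEnv []string `yaml:"required_env"` // runtime_config.required_env
}

// preflightRequiredEnv fails fast, before container launch, when a template
// declares env vars the provision request doesn't supply.
func preflightRequiredEnv(cfg runtimeConfig, env map[string]string) error {
	var missing []string
	for _, k := range cfg.RequiredEnv {
		if v, ok := env[k]; !ok || v == "" {
			missing = append(missing, k)
		}
	}
	if len(missing) > 0 {
		// surfaced as a WORKSPACE_PROVISION_FAILED event naming the vars
		return fmt.Errorf("missing required env: %s", strings.Join(missing, ", "))
	}
	return nil
}
```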
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

80caba97ee  feat(ws-server): pull env from CP on startup

Paired with molecule-controlplane PR #55 (GET /cp/tenants/config). Lets existing tenants heal themselves when we rotate or add a CP-side env var (e.g. MOLECULE_CP_SHARED_SECRET landing earlier today) without any ssh or re-provision.

Flow: main() calls refreshEnvFromCP() before any other os.Getenv read. The helper reads MOLECULE_ORG_ID + ADMIN_TOKEN from the baked-in user-data env, GETs {MOLECULE_CP_URL}/cp/tenants/config with those credentials, and applies the returned string map via os.Setenv so downstream code (CPProvisioner, etc.) sees the fresh values.

Best-effort semantics (sketched below):
- self-hosted / no MOLECULE_ORG_ID → no-op (return nil)
- CP unreachable / non-200 → log + return error (main keeps booting)
- oversized values (>4 KiB each) rejected to avoid env pollution
- body read capped at 64 KiB

Once this image hits GHCR, the 5-minute tenant auto-updater picks it up, the container restarts, refresh runs, and every tenant has MOLECULE_CP_SHARED_SECRET within ~5 minutes — no operator toil.

Also fixes workspace-server/.gitignore so `server` no longer matches the cmd/server package dir — the intent was to ignore only the compiled binary, but the pattern was too broad. Anchored to `/server`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
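A sketch of the helper with those semantics; the auth header shape and the JSON body format are assumptions beyond what the commit states:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"os"
)

// refreshEnvFromCP pulls tenant config from the CP and overlays it onto the
// process env. Best-effort: errors are returned for logging, never fatal.
func refreshEnvFromCP() error {
	if os.Getenv("MOLECULE_ORG_ID") == "" {
		return nil // self-hosted: no-op
	}
	req, err := http.NewRequest("GET", os.Getenv("MOLECULE_CP_URL")+"/cp/tenants/config", nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+os.Getenv("ADMIN_TOKEN")) // assumed header shape
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // main logs and keeps booting
	}
	defer func() { _ = resp.Body.Close() }()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("cp config: %s", resp.Status)
	}
	var kv map[string]string
	if err := json.NewDecoder(io.LimitReader(resp.Body, 64<<10)).Decode(&kv); err != nil {
		return err // body read capped at 64 KiB
	}
	for k, v := range kv {
		if len(v) <= 4<<10 { // reject oversized values (>4 KiB each)
			_ = os.Setenv(k, v)
		}
	}
	return nil
}
```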

d8026347e5  chore: open-source restructure — rename dirs, remove internal files, scrub secrets

Renames:
- platform/ → workspace-server/ (the Go module path stays "platform" for external dep compat — will update after the plugin module republish)
- workspace-template/ → workspace/

Removed (moved to separate repos or deleted):
- PLAN.md — internal roadmap (moved to a private project board)
- HANDOFF.md, AGENTS.md — one-time internal session docs
- .claude/ — gitignored entirely (local agent config)
- infra/cloudflare-worker/ → Molecule-AI/molecule-tenant-proxy
- org-templates/molecule-dev/ → standalone template repo
- .mcp-eval/ → molecule-mcp-server repo
- test-results/ — ephemeral, gitignored

Security scrubbing:
- Cloudflare account/zone/KV IDs → placeholders
- Real EC2 IPs → <EC2_IP> in all docs
- CF token prefix, Neon project ID, Fly app names → redacted
- Langfuse dev credentials → parameterized
- Personal runner username/machine name → generic

Community files:
- CONTRIBUTING.md — build, test, branch conventions
- CODE_OF_CONDUCT.md — Contributor Covenant 2.1

All Dockerfiles, CI workflows, docker-compose, railway.toml, render.yaml, README, and CLAUDE.md updated for the new directory names.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>