b4eab9cef2
8 Commits
a6d67b4c68
fix(ci): pre-clone manifest deps in workflow, drop in-image clone (closes #173)
Some checks failed
publish-workspace-server-image.yml could not run on Gitea Actions because Dockerfile.tenant's stage 3 ran `git clone` against private Gitea repos from inside the Docker build context, where no auth path exists. Every workspace-server rebuild required a manual operator-host push.

Move cloning to the trusted CI context (where AUTO_SYNC_TOKEN — the devops-engineer persona PAT — is naturally available). Dockerfile.tenant now COPYs from .tenant-bundle-deps/, populated by the workflow's new "Pre-clone manifest deps" step. The Gitea token never enters the image.

- scripts/clone-manifest.sh: optional MOLECULE_GITEA_TOKEN env embeds basic-auth in the clone URL; redacted in log output. Anonymous fallback preserved for future public-repo path.
- .github/workflows/publish-workspace-server-image.yml: new pre-clone step before docker build; injects AUTO_SYNC_TOKEN. Fail-fast if the secret is empty.
- workspace-server/Dockerfile.tenant: drop stage 3 (templates), COPY from .tenant-bundle-deps/ instead. Header documents the prereq.
- .gitignore: ignore /.tenant-bundle-deps/ so a local build can't accidentally commit cloned repos.

Verified locally: clone-manifest.sh with the devops-engineer persona token cloned all 37 repos (9 ws + 7 org + 21 plugins, 4.9MB after .git strip).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
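The token-embedding and log-redaction behavior described for scripts/clone-manifest.sh can be sketched as follows. This is a minimal sketch, not the script's actual contents: MOLECULE_GITEA_TOKEN is the env var named in the commit message, but the function names, the `oauth2:` username, and the `***` redaction format are illustrative assumptions.

```shell
#!/bin/sh
# Sketch: embed an optional Gitea token as basic-auth in a clone URL,
# and redact it before the URL ever reaches a log line.

authed_clone_url() {
  # $1: anonymous https clone URL
  if [ -n "${MOLECULE_GITEA_TOKEN:-}" ]; then
    # Embed the PAT as basic-auth; assumes the token contains no sed
    # metacharacters (a real script would escape or use another tool).
    printf '%s\n' "$1" | sed -E "s#^(https?://)#\1oauth2:${MOLECULE_GITEA_TOKEN}@#"
  else
    printf '%s\n' "$1"   # anonymous fallback for the public-repo path
  fi
}

redact_clone_url() {
  # Strip any user:token@ credential block so logs never contain the PAT.
  printf '%s\n' "$1" | sed -E 's#(https?://)[^@/]+@#\1***@#'
}
```

Usage would then be `git clone "$(authed_clone_url "$url")"` while logging only `$(redact_clone_url "$authed_url")`.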
10e510f50c
chore: drop github-app-auth + swap GHCR→ECR (closes #157, #161)
Some checks failed
Two coupled cleanups for the post-2026-05-06 stack:
Drop github-app-auth plugin
===========================
The plugin injected GITHUB_TOKEN/GH_TOKEN via the App's
installation-access flow (~hourly rotation). Per-agent Gitea
identities replaced this approach after the 2026-05-06 suspension —
workspaces now provision with a per-persona Gitea PAT from .env
instead of an App-rotated token. The plugin code itself lived on
github.com/Molecule-AI/molecule-ai-plugin-github-app-auth which is
also unreachable post-suspension; checking it out at CI build time
was already failing.
Removed:
- workspace-server/cmd/server/main.go: githubappauth import + the
`if os.Getenv("GITHUB_APP_ID") != ""` block that called
BuildRegistry. gh-identity remains as the active mutator.
- workspace-server/Dockerfile + Dockerfile.tenant: COPY of the sibling
  repo + the `replace github.com/Molecule-AI/molecule-ai-plugin-github-app-auth => /plugin`
  directive injection.
- workspace-server/go.mod + go.sum: github-app-auth dep entry
(cleaned up by `go mod tidy`).
- 3 workflows: actions/checkout steps for the sibling plugin repo:
- .github/workflows/codeql.yml (Go matrix path)
- .github/workflows/harness-replays.yml
- .github/workflows/publish-workspace-server-image.yml
Verified `go build ./cmd/server` + `go vet ./...` pass post-removal.
Swap GHCR → ECR
===============
The same workflow used to push to ghcr.io/molecule-ai/platform +
platform-tenant. ghcr.io/molecule-ai is gone post-suspension. The
operator's ECR org (153263036946.dkr.ecr.us-east-2.amazonaws.com/molecule-ai/)
already hosts platform-tenant + workspace-template-* + runner-base
images and is the post-suspension SSOT for container images. This PR
aligns publish-workspace-server-image with that stack.
- env.IMAGE_NAME + env.TENANT_IMAGE_NAME repointed to the ECR URL.
- docker/login-action swapped for the aws-actions/configure-aws-credentials@v4
  + aws-actions/amazon-ecr-login@v2 chain (the standard ECR auth
  pattern; uses AWS_ACCESS_KEY_ID/SECRET secrets bound to the
  molecule-cp IAM user).
The :staging-<sha> + :staging-latest tag policy is unchanged —
staging-CP's TENANT_IMAGE pin still points at :staging-latest, just
with the new registry prefix.
Refs molecule-core#157, #161; parallel to org-wide CI-green sweep.
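The registry repointing can also be expressed outside Actions. The following is a hedged CLI-side sketch: the registry host is the one quoted in the commit message, but the `ecr_image` helper and the repo/tag arguments are illustrative, and the workflow itself uses the two aws-actions steps named above rather than raw CLI.

```shell
# Compose an image reference under the post-suspension ECR SSOT.
ECR_REGISTRY="153263036946.dkr.ecr.us-east-2.amazonaws.com"

ecr_image() {
  # $1: repo under the molecule-ai/ namespace, $2: tag
  printf '%s/molecule-ai/%s:%s\n' "$ECR_REGISTRY" "$1" "$2"
}

# CLI equivalent of the configure-aws-credentials + amazon-ecr-login
# chain (needs AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY for molecule-cp):
#   aws ecr get-login-password --region us-east-2 \
#     | docker login --username AWS --password-stdin "$ECR_REGISTRY"
```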
b89a49ec93
feat(memory-v2): bundle memory-plugin-postgres as in-image sidecar
Closes the gap between the merged Memory v2 code (PR #2757 wired the client into main.go) and operator activation. Without this PR an operator wanting to flip MEMORY_V2_CUTOVER=true had to provision a separate memory-plugin service and point MEMORY_PLUGIN_URL at it — extra ops surface for what the design intends to be a built-in.

What ships:
* Both Dockerfile + Dockerfile.tenant build the cmd/memory-plugin-postgres binary into /memory-plugin.
* Entrypoints spawn the plugin in the background on :9100 BEFORE starting the main server; wait up to 30s for /v1/health to return 200; abort boot loudly if it doesn't (better to crash-loop than to silently route cutover traffic against a dead plugin).
* Default env: MEMORY_PLUGIN_DATABASE_URL=$DATABASE_URL (share the existing tenant Postgres — the plugin's `memory_namespaces` / `memory_records` tables coexist with the platform schema, no conflicts), MEMORY_PLUGIN_LISTEN_ADDR=:9100.
* MEMORY_PLUGIN_DISABLE=1 escape hatch for operators running the plugin externally on a separate host.
* Platform image: the plugin runs as the `platform` user (not root) via su-exec — matches the privilege boundary the main server already drops to. The tenant image already starts as `canvas`, so the plugin inherits non-root automatically.

What stays operator-controlled:
* MEMORY_V2_CUTOVER is NOT auto-set. Behavior change for existing deployments: zero. The wiring at workspace-server/internal/memory/wiring/wiring.go skips building the plugin client until the operator opts in, so the running sidecar is a no-op for traffic until then.
* MEMORY_PLUGIN_URL is NOT auto-set either, for the same reason — setting it implies cutover-active intent. Operators set both on staging first, verify a live commit/recall round-trip (closes pending task #292), then promote to production.

Operator activation steps after this PR ships:
1. Verify the pgvector extension is available on the target Postgres (the plugin's first migration runs CREATE EXTENSION IF NOT EXISTS vector). Railway's managed Postgres ships pgvector; some self-hosted operators may need to enable it.
2. Redeploy the workspace-server with this image.
3. Set MEMORY_PLUGIN_URL=http://localhost:9100 + MEMORY_V2_CUTOVER=true in the environment (staging first).
4. Watch boot logs for "memory-plugin: ✅ sidecar healthy" and the wiring.go cutover messages; do a live commit_memory + recall_memory round-trip via the canvas Memory tab to verify.
5. Promote to production once staging holds for a sweep window.

Refs RFC #2728. Closes the dormant-plugin gap noted in task #294.
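The entrypoint's wait-for-health behavior described above can be sketched as a generic poll loop. This is a sketch only — the probe command, retry budget, and `wait_healthy` name are assumptions for illustration, not the shipped entrypoint code.

```shell
# Poll a health probe until it succeeds or the retry budget runs out.
# The entrypoint described above would call something like
#   wait_healthy "curl -fsS http://localhost:9100/v1/health" 30 1
# and abort boot on a non-zero return.
wait_healthy() {
  probe=$1; retries=${2:-30}; delay=${3:-1}
  i=0
  while [ "$i" -lt "$retries" ]; do
    if $probe >/dev/null 2>&1; then
      return 0   # plugin answered: safe to start the main server
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1       # never came up: crash loud rather than boot quietly
}
```

Aborting on failure (rather than starting anyway) is the "better to crash-loop" choice the message calls out: a dead sidecar surfaces at boot instead of as silent cutover traffic loss.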
998e13c4bd
feat(deploy): verify each tenant /buildinfo matches published SHA after redeploy
Closes the gap that let issue #2395 ship: redeploy-fleet workflows reported ssm_status=Success based on the SSM RPC return code alone, while EC2 tenants silently kept serving the previous :latest digest, because `docker compose up` without an explicit pull is a no-op when the local tag already exists.

Wire:
- new buildinfo package exposes GitSHA, set at link time via -ldflags from the GIT_SHA build-arg (default "dev", so test runs without ldflags fail closed against an unset deploy)
- router exposes GET /buildinfo returning {git_sha} — public, no auth, cheap enough to curl from CI for every tenant
- both Dockerfiles thread GIT_SHA into the Go build
- publish-workspace-server-image.yml passes GIT_SHA=github.sha for both images
- redeploy-tenants-on-main.yml + redeploy-tenants-on-staging.yml curl each tenant's /buildinfo after the redeploy SSM RPC and fail the workflow on digest mismatch; staging treats both :latest and :staging-latest as moving tags; verification is skipped only when an operator pinned a specific tag via workflow_dispatch

Tests:
- TestGitSHA_DefaultDevSentinel pins the dev default
- TestBuildInfoEndpoint_ReturnsGitSHA pins the wire shape that the workflow's jq lookup depends on

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
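The post-redeploy check the two workflows perform can be sketched like this. The workflows themselves use jq against the {git_sha} payload; this sketch substitutes sed so it stays dependency-free, and the function and variable names are illustrative.

```shell
# Compare a tenant's reported /buildinfo SHA against the SHA that was
# just published; fail closed on any mismatch (including the "dev"
# sentinel baked in when -ldflags were not set).
verify_tenant_sha() {
  # $1: command printing the /buildinfo JSON,
  #     e.g. "curl -fsS http://tenant/buildinfo"
  # $2: expected git SHA
  got=$($1 | sed -E 's/.*"git_sha"[[:space:]]*:[[:space:]]*"([^"]*)".*/\1/')
  [ "$got" = "$2" ]
}

# In CI, roughly:
#   verify_tenant_sha "curl -fsS http://$tenant/buildinfo" "$GITHUB_SHA" \
#     || { echo "tenant $tenant is serving a stale digest"; exit 1; }
```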
89d9470ba4
feat(terminal): remote path via aws ec2-instance-connect + pty
Closes the last CP-provisioned-workspace gap: the Terminal tab now works for workspaces running on separate EC2 instances. Follow-up to #1531, which added instance_id persistence.

How it works:
- HandleConnect checks workspaces.instance_id
- Empty → existing local Docker path (unchanged)
- Set → spawn `aws ec2-instance-connect ssh --connection-type eice --instance-id X --os-user ec2-user -- docker exec -it ws-Y /bin/bash` under creack/pty, bridge pty ↔ canvas WebSocket

Why subprocess AWS CLI instead of the native AWS SDK:
- the EIC Endpoint tunnel needs a signed WebSocket with specific framing
- aws-cli v2 implements it correctly; reimplementing it in Go is ~500 lines of crypto + WS protocol work for zero user-visible benefit
- the tenant image picks up 1MB of aws-cli + openssh-client via apk

Handler design:
- sshCommandFactory is a var so tests can stub it (no real aws calls)
- context cancellation propagates both ways (WS close → kill ssh; ssh exit → close WS)
- the user-visible error points at docs/infra/workspace-terminal.md when EIC wiring is incomplete (a common bootstrap failure)

Tests:
- TestHandleConnect_RoutesToRemote — instance_id in DB → CP branch
- TestHandleConnect_RoutesToLocal — empty instance_id → local branch
- TestSshCommandFactory_BuildsEICCommand — argv shape regression guard

Dockerfile.tenant: + openssh-client + aws-cli (Alpine main repo)

Refs: #1528, #1531
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
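The spawned remote command has a fixed argv shape that the regression test pins. A sketch of the same shape as a shell builder — the `build_eic_cmd` function is illustrative, not the Go sshCommandFactory; only the command string itself comes from the message above.

```shell
# Build the aws ec2-instance-connect command line the handler spawns for
# a workspace on a remote EC2 instance.
build_eic_cmd() {
  # $1: EC2 instance id, $2: workspace id (container is named ws-<id>)
  printf '%s\n' "aws ec2-instance-connect ssh --connection-type eice --instance-id $1 --os-user ec2-user -- docker exec -it ws-$2 /bin/bash"
}
```

Keeping the builder pure (string in, string out) is what lets tests assert the argv without making real aws calls — the same reason sshCommandFactory is a stubbable var.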
3fe90d1a59
fix(tenant-image): remove node user so canvas uid 1000 can be created
node:20-alpine ships with a `node` user at uid/gid 1000. The Dockerfile tried `addgroup -g 1000 canvas`, which fails with exit 1 because 1000 is already taken. The publish-workspace-server-image workflow has been red for hours — tenant image :latest is stuck on a digest that predates the X-Molecule-Admin-Token CPProvisioner fix. Staging workspace provisioning 401'd because the stale tenant binary never sent the admin header.

Delete the node user+group first (tolerant of future base-image changes that might not ship it), then create canvas at 1000/1000 as before. Mounted volumes continue to expect uid 1000.

Repro: publish-workspace-server-image workflow run 24731870797: "process addgroup -g 1000 canvas && adduser... exit code: 1".

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
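The fix described above amounts to a RUN prelude along these lines — a sketch assuming the node:20-alpine base and its BusyBox deluser/delgroup applets, with `|| true` guards providing the tolerance the message mentions; the exact shipped Dockerfile may differ.

```dockerfile
# Free uid/gid 1000 before creating canvas; node:20-alpine ships a
# `node` user there. Guarded so a future base without it still builds.
RUN deluser node 2>/dev/null || true \
 && delgroup node 2>/dev/null || true \
 && addgroup -g 1000 canvas \
 && adduser -D -u 1000 -G canvas canvas
```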
45f5b47487
fix(security): add USER directive before ENTRYPOINT in all tenant images (#1155)
Closes: #177 (CRITICAL — Dockerfile runs as root)

Dockerfiles changed:
- workspace-server/Dockerfile (platform-only): addgroup/adduser + USER platform
- workspace-server/Dockerfile.tenant (combined Go+Canvas): addgroup/adduser + USER canvas + chown canvas:canvas on the canvas dir so the non-root node process can read it
- canvas/Dockerfile (canvas standalone): addgroup/adduser + USER canvas
- workspace-server/entrypoint-tenant.sh: update header comment (no longer starts as root; both processes now start non-root)

The entrypoint no longer needs a root→non-root handoff since both the Go platform and the Canvas node process run as non-root by default. The 'canvas' user owns /app and /platform, so volume mounts owned by the host's canvas user work without needing a root init step.

Co-authored-by: Molecule AI CP-BE <cp-be@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
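A minimal sketch of the pattern applied across the three Dockerfiles — the user/group name comes from the commit, but the exact paths, flags, and entrypoint name are assumptions for illustration:

```dockerfile
# Create an unprivileged user, hand it the app dirs, and drop privileges
# before ENTRYPOINT so no process ever runs as root.
RUN addgroup -S canvas && adduser -S -G canvas canvas \
 && chown -R canvas:canvas /app /platform
USER canvas
ENTRYPOINT ["./entrypoint-tenant.sh"]
```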
d8026347e5
chore: open-source restructure — rename dirs, remove internal files, scrub secrets

Renames:
- platform/ → workspace-server/ (Go module path stays as "platform" for external dep compat — will update after the plugin module republish)
- workspace-template/ → workspace/

Removed (moved to separate repos or deleted):
- PLAN.md — internal roadmap (moves to a private project board)
- HANDOFF.md, AGENTS.md — one-time internal session docs
- .claude/ — gitignored entirely (local agent config)
- infra/cloudflare-worker/ → Molecule-AI/molecule-tenant-proxy
- org-templates/molecule-dev/ → standalone template repo
- .mcp-eval/ → molecule-mcp-server repo
- test-results/ — ephemeral, gitignored

Security scrubbing:
- Cloudflare account/zone/KV IDs → placeholders
- real EC2 IPs → <EC2_IP> in all docs
- CF token prefix, Neon project ID, Fly app names → redacted
- Langfuse dev credentials → parameterized
- personal runner username/machine name → generic

Community files:
- CONTRIBUTING.md — build, test, branch conventions
- CODE_OF_CONDUCT.md — Contributor Covenant 2.1

All Dockerfiles, CI workflows, docker-compose, railway.toml, render.yaml, README, CLAUDE.md updated for the new directory names.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Renames: - platform/ → workspace-server/ (Go module path stays as "platform" for external dep compat — will update after plugin module republish) - workspace-template/ → workspace/ Removed (moved to separate repos or deleted): - PLAN.md — internal roadmap (move to private project board) - HANDOFF.md, AGENTS.md — one-time internal session docs - .claude/ — gitignored entirely (local agent config) - infra/cloudflare-worker/ → Molecule-AI/molecule-tenant-proxy - org-templates/molecule-dev/ → standalone template repo - .mcp-eval/ → molecule-mcp-server repo - test-results/ — ephemeral, gitignored Security scrubbing: - Cloudflare account/zone/KV IDs → placeholders - Real EC2 IPs → <EC2_IP> in all docs - CF token prefix, Neon project ID, Fly app names → redacted - Langfuse dev credentials → parameterized - Personal runner username/machine name → generic Community files: - CONTRIBUTING.md — build, test, branch conventions - CODE_OF_CONDUCT.md — Contributor Covenant 2.1 All Dockerfiles, CI workflows, docker-compose, railway.toml, render.yaml, README, CLAUDE.md updated for new directory names. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> |