Two coupled cleanups for the post-2026-05-06 stack:
#157 — drop molecule-ai-plugin-github-app-auth
===============================================
The plugin injected GITHUB_TOKEN/GH_TOKEN via the App's
installation-access flow (~hourly rotation). Per-agent Gitea
identities replaced this approach after the 2026-05-06 suspension —
workspaces now provision with a per-persona Gitea PAT from .env
instead of an App-rotated token. The plugin code itself lived on
github.com/Molecule-AI/molecule-ai-plugin-github-app-auth, which is
also unreachable post-suspension; checking it out at CI build time
was already failing.
Removed:
- workspace-server/cmd/server/main.go: githubappauth import + the
`if os.Getenv("GITHUB_APP_ID") != ""` block that called
BuildRegistry. gh-identity remains as the active mutator.
- workspace-server/Dockerfile + Dockerfile.tenant: COPY of the
sibling repo + the `replace github.com/Molecule-AI/molecule-ai-
plugin-github-app-auth => /plugin` directive injection.
- workspace-server/go.mod + go.sum: github-app-auth dep entry
(cleaned up by `go mod tidy`).
- 3 workflows: actions/checkout steps for the sibling plugin repo:
- .github/workflows/codeql.yml (Go matrix path)
- .github/workflows/harness-replays.yml
- .github/workflows/publish-workspace-server-image.yml
Verified `go build ./cmd/server` + `go vet ./...` pass post-removal.
#161 — swap GHCR→ECR for publish-workspace-server-image
=======================================================
This workflow previously pushed to ghcr.io/molecule-ai/platform +
platform-tenant. ghcr.io/molecule-ai is gone post-suspension. The
operator's ECR org (153263036946.dkr.ecr.us-east-2.amazonaws.com/
molecule-ai/) already hosts platform-tenant + workspace-template-*
+ runner-base images and is the post-suspension SSOT for container
images. This PR aligns publish-workspace-server-image with that
stack.
- env.IMAGE_NAME + env.TENANT_IMAGE_NAME repointed to ECR URL.
- docker/login-action swapped for aws-actions/configure-aws-
credentials@v4 + aws-actions/amazon-ecr-login@v2 chain (the
standard ECR auth pattern; uses AWS_ACCESS_KEY_ID/SECRET secrets
bound to the molecule-cp IAM user).
The :staging-<sha> + :staging-latest tag policy is unchanged —
staging-CP's TENANT_IMAGE pin still points at :staging-latest, just
with the new registry prefix.
Refs molecule-core#157, #161; parallel to org-wide CI-green sweep.

PR #2906 spawned the sidecar unconditionally on every tenant boot. The
plugin's first migration runs `CREATE EXTENSION vector`, which fails
on tenant Postgres without pgvector preinstalled, so every staging
tenant redeploy aborted at the 30s health gate. CP fail-fast kept
running tenants on the prior image (no outage), but the new image
was DOA.
Caught on the 2026-05-05 19:23 staging redeploy with
`pq: extension "vector" is not available`.
Fix: only spawn the sidecar once the operator has opted in: either
`MEMORY_V2_CUTOVER=true` or `MEMORY_PLUGIN_URL` is set.
* Aligns the entrypoint to the same opt-in posture wiring.go already
uses (it skips building the client when MEMORY_PLUGIN_URL is empty);
the shared gate predicate is sketched after this list.
* Until cutover, the sidecar isn't even running — no migration, no
health gate, no boot-time pgvector dependency.
* Operators activating cutover already redeploy with the new env
vars set; that's when the sidecar starts, and the activation
steps have them verify pgvector availability before flipping.
* MEMORY_PLUGIN_DISABLE=1 escape hatch preserved; the harness fix in
#2915 becomes belt-and-suspenders (still respected).
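The gate predicate, as a Go sketch (the entrypoint itself implements
this in shell; the package and function name here are hypothetical):

```go
package wiring

import "os"

// sidecarOptedIn mirrors the opt-in gate described above: the sidecar is
// spawned only when the operator has flipped cutover or pointed
// MEMORY_PLUGIN_URL at a plugin. The name is illustrative; the real
// checks live in entrypoint-tenant.sh and, narrower (URL only), in
// workspace-server/internal/memory/wiring/wiring.go.
func sidecarOptedIn() bool {
	return os.Getenv("MEMORY_V2_CUTOVER") == "true" ||
		os.Getenv("MEMORY_PLUGIN_URL") != ""
}
```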
Both Dockerfile and entrypoint-tenant.sh updated. Behavior change for
existing deployments: zero (cutover env vars still unset → sidecar
still inert, but now also not running).
Refs RFC #2728. Hotfix for #2906; removes the whole class of
migration-path fragility (the sidecar no longer runs migrations on
tenants that won't use it).

Closes the gap between the merged Memory v2 code (PR #2757 wired the
client into main.go) and operator activation. Without this PR, an
operator wanting to flip MEMORY_V2_CUTOVER=true had to provision a
separate memory-plugin service and point MEMORY_PLUGIN_URL at it —
extra ops surface for what the design intends to be a built-in.
What ships:
* Both Dockerfile + Dockerfile.tenant build the
cmd/memory-plugin-postgres binary into /memory-plugin.
* Entrypoints spawn the plugin in the background on :9100 BEFORE
starting the main server; wait up to 30s for /v1/health to return
200; abort boot loudly if it doesn't (better to crash-loop than to
silently route cutover traffic against a dead plugin). The gate is
sketched after this list.
* Default env: MEMORY_PLUGIN_DATABASE_URL=$DATABASE_URL (share the
existing tenant Postgres — plugin's `memory_namespaces` /
`memory_records` tables coexist with platform schema, no
conflicts), MEMORY_PLUGIN_LISTEN_ADDR=:9100.
* MEMORY_PLUGIN_DISABLE=1 escape hatch for operators running the
plugin externally on a separate host.
* Platform image: plugin runs as the `platform` user (not root) via
su-exec — matches the privilege boundary the main server already
drops to. Tenant image already starts as `canvas` so the plugin
inherits non-root automatically.
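The health gate, rendered as Go (the real gate is a loop in the
entrypoint shell scripts; the function name and the one-second poll
interval are assumptions):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls the sidecar's /v1/health until it returns 200 or the
// deadline passes. It mirrors the 30s boot gate the entrypoints implement
// in shell: better to abort loudly than to route cutover traffic at a
// dead plugin.
func waitHealthy(base string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := http.Get(base + "/v1/health")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("memory-plugin: not healthy after %s", deadline)
}

func main() {
	if err := waitHealthy("http://localhost:9100", 30*time.Second); err != nil {
		panic(err) // the entrypoints abort boot at this point
	}
}
```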
What stays operator-controlled:
* MEMORY_V2_CUTOVER is NOT auto-set. Behavior change for existing
deployments: zero. The wiring at workspace-server/internal/memory/
wiring/wiring.go skips building the plugin client until the
operator opts in, so the running sidecar is a no-op for traffic
until then.
* MEMORY_PLUGIN_URL is NOT auto-set either, for the same reason —
setting it implies cutover-active intent. Operators set both on
staging first, verify a live commit/recall round-trip (closes
pending task #292), then promote to production.
Operator activation steps after this PR ships:
1. Verify the pgvector extension is available on the target Postgres
(the plugin's first migration runs CREATE EXTENSION IF NOT
EXISTS vector). Railway's managed Postgres ships with pgvector
available; some self-hosted operators may need to enable it. A
preflight sketch follows these steps.
2. Redeploy the workspace-server with this image.
3. Set MEMORY_PLUGIN_URL=http://localhost:9100 + MEMORY_V2_CUTOVER=true
in the environment (staging first).
4. Watch boot logs for "memory-plugin: ✅ sidecar healthy" and the
wiring.go cutover messages; do a live commit_memory + recall_memory
round-trip via the canvas Memory tab to verify.
5. Promote to production once staging holds for a sweep window.
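A preflight for step 1, as a hedged Go sketch (lib/pq matches the
`pq:` error seen on staging; running this as a standalone one-off is
an assumption, not part of the shipped image):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"

	_ "github.com/lib/pq" // driver matching the "pq:" error above
)

// Checks pg_available_extensions before anyone flips MEMORY_V2_CUTOVER;
// if this prints false, CREATE EXTENSION vector will fail at sidecar boot.
func main() {
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var available bool
	err = db.QueryRow(
		`SELECT EXISTS (SELECT 1 FROM pg_available_extensions WHERE name = 'vector')`,
	).Scan(&available)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pgvector available:", available)
}
```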
Refs RFC #2728. Closes the dormant-plugin gap noted in task #294.

Closes the gap that let issue #2395 ship: redeploy-fleet workflows reported
ssm_status=Success based on the SSM RPC return code alone, while EC2 tenants
silently kept serving the previous :latest digest, because `docker compose up`
without an explicit pull is a no-op when the local tag already exists.
Wire:
- new buildinfo package exposes GitSHA, set at link time via -ldflags from
the GIT_SHA build-arg (default "dev", so a binary built without ldflags
can never match a real deploy SHA and the check fails closed; sketched
after this list)
- router exposes GET /buildinfo returning {git_sha} — public, no auth,
cheap enough to curl from CI for every tenant
- both Dockerfiles thread GIT_SHA into the Go build
- publish-workspace-server-image.yml passes GIT_SHA=github.sha for both
images
- redeploy-tenants-on-main.yml + redeploy-tenants-on-staging.yml curl each
tenant's /buildinfo after the redeploy SSM RPC and fail the workflow when
the reported git_sha doesn't match the deploy SHA; staging treats both
:latest and :staging-latest as moving tags; verification is skipped only
when an operator pinned a specific tag via workflow_dispatch
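A compact sketch of the Go side (in the real tree GitSHA lives in its
own buildinfo package and /buildinfo hangs off the existing router; the
inlining, exact ldflags target, and port here are assumptions):

```go
package main

import (
	"encoding/json"
	"net/http"
)

// GitSHA is stamped at link time from the GIT_SHA build-arg, roughly:
//   go build -ldflags "-X main.GitSHA=$GIT_SHA" ./cmd/server
// The "dev" default fails closed: a binary built without ldflags can
// never match the deploy SHA the workflows compare against.
var GitSHA = "dev"

func main() {
	// GET /buildinfo: public, no auth, cheap to curl once per tenant.
	// The {git_sha} shape is what the workflows' jq lookup depends on.
	http.HandleFunc("/buildinfo", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string]string{"git_sha": GitSHA})
	})
	http.ListenAndServe(":8080", nil) // port is an assumption
}
```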
Tests:
- TestGitSHA_DefaultDevSentinel pins the dev default
- TestBuildInfoEndpoint_ReturnsGitSHA pins the wire shape that the
workflow's jq lookup depends on
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Three small, non-overlapping fixes extracted from closed PR #1664:
1. canvas/src/components/ContextMenu.tsx — Replace the useMemo-over-nodes
pattern with a primitive boolean selector (s.nodes.some(...)) so Zustand's
useSyncExternalStore snapshot comparison is stable. Resolves React
error #185 (infinite render loop). Moves the child-node list derivation
into the delete handler via getState() so the render path no longer
allocates a fresh array.
2. workspace-server/internal/handlers/a2a_proxy.go — Allow the
Docker-bridge hostname path (ws-<id>:8000) to skip the SSRF guard in
local-docker mode, gated on !saasMode() so SaaS deployments keep the
full private-IP blocklist (a remote workspace registration can't claim
a ws-* hostname and reach a sensitive VPC IP). Sketched after this list.
3. workspace-server/Dockerfile — Add entrypoint.sh that discovers the
docker.sock GID at boot and adds the platform user to that group, then
execs su-exec to drop privileges. Lets the platform container reach
the host docker socket without running as root.
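Item 2's carve-out, as a hedged Go sketch (saasMode is named in the
change; the other helper names, the env check, and the hostname pattern
are assumptions):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// saasMode is named in the change above; this env check is an assumption.
func saasMode() bool { return os.Getenv("SAAS_MODE") == "true" }

// passesPrivateIPBlocklist stands in for the existing SSRF guard, which
// resolves the host and rejects private/VPC ranges.
func passesPrivateIPBlocklist(hostport string) bool { return false }

// Workspace containers are reachable on the Docker bridge as ws-<id>:8000.
var wsBridgeHost = regexp.MustCompile(`^ws-[A-Za-z0-9-]+:8000$`)

// allowA2ATarget sketches the gate: only local-docker mode lets a ws-*
// bridge alias bypass the blocklist; SaaS keeps the full guard so a
// remote registration can't claim a ws-* name that resolves into the VPC.
func allowA2ATarget(hostport string) bool {
	if !saasMode() && wsBridgeHost.MatchString(hostport) {
		return true
	}
	return passesPrivateIPBlocklist(hostport)
}

func main() {
	fmt.Println(allowA2ATarget("ws-42:8000")) // true when SAAS_MODE is unset
}
```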
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Closes: #177 (CRITICAL — Dockerfile runs as root)
Dockerfiles changed:
- workspace-server/Dockerfile (platform-only): addgroup/adduser + USER platform
- workspace-server/Dockerfile.tenant (combined Go+Canvas): addgroup/adduser + USER canvas
+ chown canvas:canvas on the canvas dir so the non-root node process can read it
- canvas/Dockerfile (canvas standalone): addgroup/adduser + USER canvas
- workspace-server/entrypoint-tenant.sh: update header comment (no longer starts
as root; both processes now start non-root)
The entrypoint no longer needs a root→non-root handoff since both the Go
platform and Canvas node run as non-root by default. The 'canvas' user owns
/app and /platform, so volume mounts owned by the host's canvas user work
without needing a root init step.
Co-authored-by: Molecule AI CP-BE <cp-be@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>

The github-app-auth plugin's go.mod had a relative replace directive
(../molecule-monorepo/platform) that didn't resolve in Docker, where
the plugin sits at /plugin/ and the platform at /app/. Go identifies a
type by its full import path, so the plugin's provisionhook.TokenProvider
interface resolved as a different type than the platform's, and the
type assertion in FirstTokenProvider() failed: "no token provider
registered".
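The failing lookup, in sketch form (FirstTokenProvider's internals and
the interface's method signature are assumptions; the function name,
interface name, GetInstallationToken, and the error string come from
this message):

```go
package main

import (
	"errors"
	"fmt"
)

// TokenProvider stands in for provisionhook.TokenProvider. In the bad
// build there were effectively two of these: one under /app's module
// graph and one under /plugin's stale replace. Same source, different
// import paths, therefore different types.
type TokenProvider interface {
	GetInstallationToken() (string, error) // signature is an assumption
}

var registered []any // where the plugin's provider value landed

// FirstTokenProvider sketches the platform-side lookup: the plugin's
// value satisfied the /plugin copy of the interface, not this one, so
// the assertion never matched and the error below fired.
func FirstTokenProvider() (TokenProvider, error) {
	for _, v := range registered {
		if tp, ok := v.(TokenProvider); ok {
			return tp, nil
		}
	}
	return nil, errors.New("no token provider registered")
}

func main() {
	fmt.Println(FirstTokenProvider())
}
```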
Fix: sed the plugin's go.mod replace to point at /app during Docker build.
Also added debug logging to GetInstallationToken for future diagnosis.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>