Three findings from re-reviewing PR #2401 with fresh eyes:
1. Critical — port binding to 0.0.0.0
compose.yml's cf-proxy published 8080:8080, which binds 0.0.0.0 by
default. The harness uses a hardcoded ADMIN_TOKEN, so anyone on the
local network or VPN could hit /workspaces with admin privileges.
Switch the mapping to 127.0.0.1:8080:8080 so admin access is
loopback-only: safe for E2E, and it closes the known-token leak.
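A minimal compose.yml sketch of the fix (the service layout here is an
assumption; only the ports mapping is the point):

```yaml
services:
  cf-proxy:
    # ... build/image config unchanged ...
    ports:
      # Before: "8080:8080" binds 0.0.0.0 and exposes the admin
      # surface to the whole LAN/VPN.
      # After: loopback-only; host port 8080 still maps to
      # container port 8080.
      - "127.0.0.1:8080:8080"
```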
2. Required — dead code in cp-stub
peersFailureMode + __stub/mode + __stub/peers were declared with
atomic.Value setters but no handler ever READ from them. CP doesn't
host /registry/peers (the tenant does), so the toggles couldn't
drive responses. Removed the dead vars + handlers; kept
redeployFleetCalls counter and __stub/state since those have a real
consumer in the buildinfo replay.
3. Required — replay's auth-context dependency
peer-discovery-404.sh's Python eval ran
a2a_client.get_peers_with_diagnostic() against the live tenant.
Without a workspace token file, auth_headers() yields empty headers,
so the helper might exercise a 401 branch instead of the 404 branch
the replay claims to test.
Split the assertion into (a) WIRE — direct curl proves the platform
returns 404 from /registry/<unregistered>/peers — and (b) PARSE —
feed the helper a mocked 404 via httpx patches, no network/auth.
Each branch tests exactly what it claims.
Also added a graceful skip when the workspace runtime in the
current checkout pre-dates #2399 (no get_peers_with_diagnostic
yet) — replay falls back to wire-only verification with a clear
message instead of an opaque AttributeError. After #2399 lands on
staging, both branches will run.
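A condensed sketch of the replay's new shape. assert_status, the SKIP
message wording, and the a2a_client module probe are hypothetical
stand-ins, not the actual script; the live-tenant curl is left
commented since it needs the harness running:

```shell
#!/usr/bin/env bash
set -u

# Hypothetical helper: assert_status EXPECTED ACTUAL LABEL
assert_status() {
  local want="$1" got="$2" label="$3"
  if [ "$got" = "$want" ]; then
    echo "PASS ($label): HTTP $got"
  else
    echo "FAIL ($label): want HTTP $want, got $got" >&2
    return 1
  fi
}

# (a) WIRE: a direct curl proves the platform itself returns 404 for
# an unregistered workspace (commented: requires a running harness):
#   code=$(curl -s -o /dev/null -w '%{http_code}' \
#     "http://harness-tenant.localhost:8080/registry/not-registered/peers")
#   assert_status 404 "$code" WIRE

# (b) PARSE: run the helper-side branch only when the current checkout
# already has get_peers_with_diagnostic (lands with #2399); otherwise
# fall back to wire-only verification with a clear message.
if python3 - <<'PY' 2>/dev/null
try:
    import a2a_client  # module name assumed from the replay
    ok = hasattr(a2a_client, "get_peers_with_diagnostic")
except ImportError:
    ok = False
raise SystemExit(0 if ok else 1)
PY
then
  echo "PARSE branch: feed the helper a mocked 404, no network/auth"
else
  echo "SKIP (PARSE): get_peers_with_diagnostic not in this checkout; wire-only verification"
fi
```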
cp-stub still builds clean. compose.yml validates. Replay's bash
syntax + Python eval both verified locally.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The harness brings up the SaaS tenant topology on localhost using the
SAME workspace-server/Dockerfile.tenant image that ships to production.
Tests run against http://harness-tenant.localhost:8080 and exercise the
same code path a real tenant takes:
client
→ cf-proxy (nginx; CF tunnel + LB header rewrites)
→ tenant (Dockerfile.tenant — combined platform + canvas)
→ cp-stub (minimal Go CP stand-in for /cp/* paths)
→ postgres + redis
Why this exists: bugs that survive `go run ./cmd/server` and ship to
prod almost always live in env-gated middleware (TenantGuard, /cp/*
proxy, canvas proxy), header rewrites, or the strict-auth / live-token
mode. The harness activates ALL of them locally so #2395 + #2397-class
bugs can be reproduced before deploy.
Phase 1 surface:
- cp-stub/main.go: minimal CP stand-in. /cp/auth/me, redeploy-fleet,
and /__stub/state for replay scripts. Catch-all returns 501 with a
clear message when a new CP route appears.
- cf-proxy/nginx.conf: rewrites Host to <slug>.localhost, injects
X-Forwarded-*, disables buffering to mirror CF tunnel streaming
semantics.
- compose.yml: one service per topology layer; tenant builds from
the actual production Dockerfile.tenant.
- up.sh / down.sh / seed.sh: lifecycle scripts.
- replays/peer-discovery-404.sh: reproduces #2397; asserts the 404
directly on the wire, then asserts the diagnostic helper from
PR #2399 surfaces "404" + "registered" from a mocked response.
- replays/buildinfo-stale-image.sh: reproduces #2395 + asserts
/buildinfo wire shape + GIT_SHA injection from PR #2398.
- README.md: topology, quickstart, what the harness does NOT cover.
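For the cf-proxy layer, a hedged sketch of the nginx.conf shape
described above. The directives are standard nginx; the listen port,
upstream service name, and tenant slug are assumptions:

```nginx
server {
    listen 8080;

    location / {
        # present the tenant slug as Host, the way the CF tunnel would
        proxy_set_header Host harness-tenant.localhost;
        # inject X-Forwarded-* so env-gated middleware sees LB-shaped traffic
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        # disable buffering to mirror CF tunnel streaming semantics
        proxy_buffering off;
        proxy_request_buffering off;
        proxy_pass http://tenant:8080;  # "tenant" service name assumed
    }
}
```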
Phases 2-3 (separate PRs):
- Phase 2: convert tests/e2e/test_api.sh to target the harness URL
instead of localhost; make harness-based replays a required CI gate.
- Phase 3: config-coherence lint that diffs harness env list against
production CP's env list, fails CI on drift.
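The Phase 3 lint could be as small as a sorted diff. A sketch under
the assumption that both sides can dump one env var name per line;
check_env_drift and the file names are hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical Phase 3 sketch: fail CI when the harness env list
# drifts from production CP's env list. Inputs are plain files with
# one env var name per line.
check_env_drift() {
  local harness_list="$1" prod_list="$2"
  if diff <(sort "$harness_list") <(sort "$prod_list"); then
    echo "env lists match"
  else
    echo "env drift detected: $harness_list vs $prod_list" >&2
    return 1
  fi
}

# CI usage (file names assumed):
#   check_env_drift harness/env.list cp/env.list || exit 1
```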
Verification:
- cp-stub builds (go build ./...).
- cp-stub responds to all stubbed endpoints (smoke-tested locally).
- compose.yml passes `docker compose config --quiet`.
- All shell scripts pass `bash -n` syntax check.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>