Closes the SaaS upload gap (#2308) with the unified architecture from RFC #2312: same code path on local Docker and SaaS, no Docker socket dependency, no `dockerCli == nil` cliff. Stacked on PR-A (#2313) + PR-B (#2314).

Before: `Upload → findContainer (nil in SaaS) → 503`
After: `Upload → resolve workspaces.url + platform_inbound_secret → stream multipart to <url>/internal/chat/uploads/ingest → forward response back unchanged`

Same call site whether the workspace runs on local docker-compose (`http://ws-<id>:8000`) or SaaS EC2 (`https://<id>.<tenant>...`). The bug behind #2308 cannot exist by construction.

Why streaming, not parse-then-re-encode:

* No 50 MB intermediate buffer on the platform
* Per-file size + path-safety enforcement is the workspace's job (see workspace/internal_chat_uploads.py, PR-B)
* The workspace's error responses (413 with offending filename, 400 on missing files field, etc.) propagate through unchanged

Changes:

* workspace-server/internal/handlers/chat_files.go — Upload rewritten as a streaming HTTP proxy. Drops sanitizeFilename, copyFlatToContainer, and the entire docker-exec path. ChatFilesHandler gains an httpClient (broken out for test injection). Download stays docker-exec for now; a follow-up PR will migrate it to the same shape.
* workspace-server/internal/handlers/chat_files_external_test.go — deleted. Pinned the wrong-headed runtime=external 422 gate from #2309 (already reverted in #2311). Superseded by the proxy tests.
* workspace-server/internal/handlers/chat_files_test.go — replaced the sanitize-filename tests (now in workspace/tests/test_internal_chat_uploads.py) with sqlmock + httptest proxy tests:
  - 400 invalid workspace id
  - 404 workspace row missing
  - 503 platform_inbound_secret NULL (with RFC #2312 detail)
  - 503 workspaces.url empty
  - happy-path forward (asserts auth header, content-type forwarded, body streamed, response propagated back)
  - 413 from workspace propagated unchanged (NOT remapped to 500)
  - 502 on workspace unreachable (connect refused)

  Existing Download + ContentDisposition tests preserved.
* tests/e2e/test_chat_upload_e2e.sh — single-script-everywhere E2E. Takes BASE as env (default http://localhost:8080). Creates a workspace, waits for online, mints a test token, uploads a fixture, reads it back via /chat/download, asserts the content matches and that a bearer token is required. The same script runs against staging tenants (set BASE=https://<id>.<tenant>.staging.moleculesai.app).

Test plan:

* `go build ./...` — green
* `go test ./internal/handlers/ ./internal/wsauth/` — green (full suite)
* tests/e2e/test_chat_upload_e2e.sh against local docker-compose after PR-A + PR-B + this PR all merge — TODO before merge

Refs #2312 (parent RFC), #2308 (chat upload 503 incident).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Tests
This repo uses the standard monorepo testing convention: unit tests live with their package, cross-component E2E tests live here.
## Where to find tests
| Scope | Location |
|---|---|
| Go unit + integration (platform, CLI, handlers) | `workspace-server/**/*_test.go` — run with `cd workspace-server && go test -race ./...` |
| TypeScript unit (canvas components, hooks, store) | `canvas/src/**/__tests__/` — run with `cd canvas && npm test -- --run` |
| TypeScript unit (MCP server handlers) | `mcp-server/src/__tests__/` — run with `cd mcp-server && npx jest` |
| Python unit (workspace runtime, adapters) | `workspace/tests/` — run with `cd workspace && python3 -m pytest` |
| Python unit (SDK: plugin + remote agent) | `sdk/python/tests/` — run with `cd sdk/python && python3 -m pytest` |
| Cross-component E2E (spans platform + runtime + HTTP) | `tests/e2e/` ← you are here |
## Why split this way
- Go requires co-located `_test.go` files to access unexported symbols.
- Per-package test commands keep the inner loop fast — changing canvas doesn't re-run Go tests.
- `tests/e2e/` covers scenarios that no single package owns: a full workspace lifecycle, A2A across two provisioned agents, delegation chains, bundle round-trips.
## Running E2E
Every E2E script here assumes the platform is running at localhost:8080 and (where noted) provisioned agents are online. See the header comment of each .sh for specifics.
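Scripts that can also target other environments take the base URL as an env override. A minimal sketch of that prologue (the variable name `BASE` follows tests/e2e/test_chat_upload_e2e.sh; the rest of each script's body differs):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Default to the local docker-compose platform; export BASE to point the
# same script at a staging tenant instead.
BASE="${BASE:-http://localhost:8080}"

echo "running E2E against ${BASE}"
```

So `bash tests/e2e/test_chat_upload_e2e.sh` hits the local stack, and `BASE=https://... bash tests/e2e/test_chat_upload_e2e.sh` hits staging, with no other changes.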
## Cleaning up rogue test workspaces
If an E2E run aborts before its teardown runs (Ctrl-C, crash, CI timeout),
the platform can be left with workspaces whose config volume is stale or
empty — Docker's `unless-stopped` restart policy then spins those
containers in a `FileNotFoundError` loop. The platform's pre-flight check
(#17) marks such workspaces failed on the next restart, but a manual
cleanup is useful:
```bash
bash scripts/cleanup-rogue-workspaces.sh   # deletes ws with id/name starting aaaaaaaa-, bbbbbbbb-, cccccccc-, test-ws-
MOLECULE_URL=http://host:8080 bash scripts/cleanup-rogue-workspaces.sh
```
The script DELETEs each matching workspace via the API and
force-removes the `ws-<id[:12]>` container as a belt-and-suspenders
fallback.
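The "matching" part amounts to a prefix check. A hedged sketch (the prefixes come from the usage comment above; the API path in the comment is an assumption, and the real logic lives in scripts/cleanup-rogue-workspaces.sh):

```shell
# Returns 0 when a workspace id/name looks like a rogue test leftover.
is_rogue() {
  case "$1" in
    aaaaaaaa-*|bbbbbbbb-*|cccccccc-*|test-ws-*) return 0 ;;
    *) return 1 ;;
  esac
}

# For each rogue workspace id the script then roughly does:
#   curl -fsS -X DELETE "${MOLECULE_URL}/workspaces/${id}"  # API path is an assumption
#   docker rm -f "ws-${id:0:12}" || true                    # belt-and-suspenders container removal
```

Filtering first and deleting second keeps the destructive step narrowly scoped: anything not matching a known test prefix is never touched.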