# Tests
This repo uses the standard monorepo testing convention: unit tests live with their package, cross-component E2E tests live here.
## Where to find tests
| Scope | Location |
|---|---|
| Go unit + integration (platform, CLI, handlers) | `workspace-server/**/*_test.go` — run with `cd workspace-server && go test -race ./...` |
| TypeScript unit (canvas components, hooks, store) | `canvas/src/**/__tests__/` — run with `cd canvas && npm test -- --run` |
| TypeScript unit (MCP server handlers) | `mcp-server/src/__tests__/` — run with `cd mcp-server && npx jest` |
| Python unit (workspace runtime, adapters) | `workspace/tests/` — run with `cd workspace && python3 -m pytest` |
| Python unit (SDK: plugin + remote agent) | `sdk/python/tests/` — run with `cd sdk/python && python3 -m pytest` |
| Cross-component E2E (spans platform + runtime + HTTP) | `tests/e2e/` ← you are here |
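The per-package commands in the table can be wrapped in one small dispatcher so local shells and CI share a single entry point. This is a sketch, not an existing script — the `run-tests.sh` name and the suite keys are hypothetical; the commands themselves come straight from the table above.

```sh
#!/usr/bin/env sh
# run-tests.sh (hypothetical) — dispatch to one per-package suite from the table above.
suite_cmd() {
  case "$1" in
    workspace-server) echo 'cd workspace-server && go test -race ./...' ;;
    canvas)           echo 'cd canvas && npm test -- --run' ;;
    mcp-server)       echo 'cd mcp-server && npx jest' ;;
    workspace)        echo 'cd workspace && python3 -m pytest' ;;
    sdk-python)       echo 'cd sdk/python && python3 -m pytest' ;;
    *)                return 1 ;;
  esac
}

main() {
  if ! cmd="$(suite_cmd "$1")"; then
    echo "unknown suite: $1" >&2
    exit 2
  fi
  echo ">> $cmd"
  ( eval "$cmd" )   # subshell so the cd doesn't leak into the caller
}

# Only dispatch when invoked with an argument, so the file is safe to source.
if [ "$#" -gt 0 ]; then main "$1"; fi
```

E2E stays out of the dispatcher on purpose: each `tests/e2e/*.sh` has its own preconditions (see "Running E2E" below), so it doesn't reduce to a single command.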
## Why split this way

- Go requires co-located `_test.go` files to access unexported symbols.
- Per-package test commands keep the inner loop fast — changing canvas doesn't re-run Go tests.
- `tests/e2e/` covers scenarios that no single package owns: a full workspace lifecycle, A2A across two provisioned agents, delegation chains, bundle round-trips.
## Running E2E

Every E2E script here assumes the platform is running at `localhost:8080` and (where noted) that provisioned agents are online. See the header comment of each `.sh` for specifics.
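A quick reachability check before launching a long E2E run turns a confusing mid-script failure into an immediate, clear error. A minimal sketch — the `preflight` helper is hypothetical, and whether the platform exposes a dedicated health path is an assumption, so it just probes the base URL for any HTTP response:

```sh
#!/usr/bin/env sh
# Probe the platform before running an E2E suite.
# MOLECULE_URL override mirrors the cleanup script below; default matches the README.
MOLECULE_URL="${MOLECULE_URL:-http://localhost:8080}"

preflight() {
  # -s silent, short timeout: succeeds on any HTTP response, fails fast if nothing listens
  if curl -s -o /dev/null --max-time 3 "$1"; then
    echo "platform reachable: $1"
  else
    echo "platform NOT reachable at $1 — start it before running E2E" >&2
    return 1
  fi
}
```

Typical use: `preflight "$MOLECULE_URL" && bash tests/e2e/<some-flow>.sh` (script name per the table above; pick the scenario you need).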
## Cleaning up rogue test workspaces

If an E2E run aborts before its teardown runs (Ctrl-C, crash, CI timeout), the platform can be left with workspaces whose config volume is stale or empty — Docker's `unless-stopped` restart policy then spins those containers in a `FileNotFoundError` loop. The platform's pre-flight check (#17) marks such workspaces failed on the next restart, but a manual cleanup is useful:

```sh
bash scripts/cleanup-rogue-workspaces.sh  # deletes ws with id/name starting aaaaaaaa-, bbbbbbbb-, cccccccc-, test-ws-
MOLECULE_URL=http://host:8080 bash scripts/cleanup-rogue-workspaces.sh
```
The script DELETEs each matching workspace via the API and force-removes the `ws-<id[:12]>` container as a belt-and-suspenders fallback.
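For reference, that logic can be sketched as below. The `/api/workspaces` endpoint paths and the JSON shape (`.[].id`) are assumptions for illustration — the authoritative version is `scripts/cleanup-rogue-workspaces.sh`; only the prefix filter and the `ws-<id[:12]>` container naming come from this README.

```sh
#!/usr/bin/env sh
# Sketch of cleanup-rogue-workspaces.sh (endpoints and JSON shape are assumptions).
MOLECULE_URL="${MOLECULE_URL:-http://localhost:8080}"

# is_rogue: does a workspace id/name carry one of the test prefixes?
is_rogue() {
  case "$1" in
    aaaaaaaa-*|bbbbbbbb-*|cccccccc-*|test-ws-*) return 0 ;;
    *) return 1 ;;
  esac
}

cleanup() {
  curl -sf "$MOLECULE_URL/api/workspaces" | jq -r '.[].id' | while read -r id; do
    is_rogue "$id" || continue
    curl -sf -X DELETE "$MOLECULE_URL/api/workspaces/$id"
    # belt-and-suspenders: force-remove the ws-<id[:12]> container too
    docker rm -f "ws-$(printf '%.12s' "$id")" 2>/dev/null || true
  done
}
```

Nothing runs on source: `cleanup` is only defined, so the file is safe to inspect or reuse without a live platform.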