chore: promote accumulated staging fixes to main (vitest timeout, postgres host-net, +others) #99
## Why
Several recent fix PRs targeted base=staging instead of main (a meta-bug in the agent briefs — to be addressed in saved memory). Result: main is missing fixes that exist only on staging; their descriptions are reproduced below.
This PR promotes the entire staging tip to main as a one-time corrective merge.
## What
Merge staging into main — fast-forward where possible, merge-commit otherwise. All commits are already verified green on staging (the auto-sync chain has been working since the AUTO_SYNC_TOKEN secret refresh).
## Verification
## Follow-up
File a saved-memory update: future agent briefs MUST explicitly state base=main when a fix needs to land on main; staging-based PRs are reserved for fixes that are staging-specific.
Closes the SSOT story shipped in PR-C/D: canvas now consumes the typed /chat-history endpoint instead of /activity?type=a2a_receive, and the server emits messages in display-ready chronological order so the client doesn't have to re-order them.

## Canvas (consumer migration)

- loadMessagesFromDB swaps from /activity to /chat-history.
- Drops the type=a2a_receive + source=canvas params (the server applies the filter centrally now).
- Drops `[...activities].reverse()` — the wire is already display-ready.
- Drops the local INTERNAL_SELF_MESSAGE_PREFIXES constant + isInternalSelfMessage helper. Server-side IsInternalSelfMessage applies the same predicate before emitting rows.
- Drops the activityRowToMessages + ActivityRowForHydration imports from historyHydration.ts. The TS parser stays in tree because message-parser.ts is still load-bearing for live A2A WebSocket messages (ChatTab.tsx:805, AgentCommsPanel.tsx, canvas-events.ts).

## Server (row-aware wire-order fix)

The pre-PR-C-2 client did `[...activities].reverse()` over ROWS, then flattened each row into [user, agent] messages. The reversal was ROW-aware. After PR-C/D, the server returned a flat ChatMessage slice in `ORDER BY created_at DESC` order, with [user, agent] within each row. A naive client-side flat reverse would FLIP each pair (agent before user at the same timestamp). Two ways to fix it:

A) Server emits oldest-first within the page; canvas does NOT reverse.
B) Canvas does row-aware reversal (group by timestamp, reverse).

Option A is cleaner — the server owns the wire-order responsibility, and every client trusts `for m of messages` to render chronologically. The server adds reverseRowChunks(), which:

1. Groups consecutive same-Timestamp messages into row chunks (1-2 messages per row).
2. Reverses the chunk order (newest-row-first → oldest-row-first).
3. Flattens.

Within-chunk [user, agent] order is preserved. Single-message rows (agent reply not yet recorded, attachments-only user upload) collapse to 1-element chunks and reverse correctly too.
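A minimal sketch of that chunk-reverse, assuming a `ChatMessage` wire type keyed by a string `Timestamp` field (the package name and the `Role`/`Text` fields are illustrative, not the in-tree definitions):

```go
package messagestore // illustrative package name

// ChatMessage is a stand-in for the wire type; only the fields the sketch needs.
type ChatMessage struct {
	Timestamp string // row creation time, identical for the [user, agent] pair of one row
	Role      string
	Text      string
}

// reverseRowChunks turns a newest-row-first page into oldest-row-first
// without flipping the [user, agent] order inside a row.
func reverseRowChunks(msgs []ChatMessage) []ChatMessage {
	if len(msgs) == 0 {
		return msgs
	}
	// 1. Group consecutive same-Timestamp messages into row chunks (1-2 per row).
	var chunks [][]ChatMessage
	cur := []ChatMessage{msgs[0]}
	for _, m := range msgs[1:] {
		if m.Timestamp == cur[len(cur)-1].Timestamp {
			cur = append(cur, m)
			continue
		}
		chunks = append(chunks, cur)
		cur = []ChatMessage{m}
	}
	chunks = append(chunks, cur)

	// 2. Reverse the chunk order, 3. flatten; within-chunk order is untouched.
	out := make([]ChatMessage, 0, len(msgs))
	for i := len(chunks) - 1; i >= 0; i-- {
		out = append(out, chunks[i]...)
	}
	return out
}
```

List() would apply this once to the page built from the `ORDER BY created_at DESC` query, right before writing the response.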
## Tests

Server: 3 new unit tests on reverseRowChunks (paired across rows, single-message rows, empty input) + 1 sqlmock integration test on List() that drives the full SQL → reverse → wire path. Mutation-tested: removed `messages = reverseRowChunks(messages)` from List(), confirmed the integration test fires red with all 4 misordered indices flagged. Restored, all 25 messagestore tests + 9 chat-history handler tests are green.

Canvas: 8 lazyHistory pagination tests refactored to mock /chat-history (not /activity) and assert against the new wire shape ({messages, reached_end}, not raw activity rows). All 1389/1389 vitest tests green; tsc --noEmit clean.

## Three weakest spots (hostile-reviewer self-pass)

1. reverseRowChunks groups by Timestamp string equality. If two distinct rows had the SAME timestamp (legitimately possible at sub-millisecond granularity), the algorithm would treat them as one chunk and not reverse them relative to each other. Mitigated: activity_logs.created_at uses microsecond resolution; concurrent inserts at the exact same microsecond are vanishingly rare. If a collision happens, the within-chunk order is whatever the SQL returned — both rows render at the same timestamp, no user-visible misordering.
2. The pre-existing TS parser files (historyHydration.ts + message-parser.ts) stay in tree. historyHydration.ts is now dead code (no consumers post-migration); deletion is parked as a follow-up after a one-week observation window confirms no live-message consumer reaches it.
3. canvas's loadMessagesFromDB returns `resp.messages ?? []`. If the server were ever to return `null` instead of `[]` (it currently doesn't — the handler defensively coerces nil to []), the nullish coalesce keeps the canvas from crashing. A stricter wire schema would assert the never-null invariant; for today's pragmatic safety, the `??` is enough.

## Security review

- Untrusted input? Same as PR-C — agent JSON is parsed defensively in the messagestore parser. No new exposure.
- Trust boundary? Same. Canvas → /chat-history → wsAuth → messagestore.
- Output sanitization? Plain text + opaque attachment URIs, as before.

No security-relevant changes beyond what /chat-history already exposes via PR-C. Considered, not skipped.

## Versioning / backwards compat

- /activity endpoint unchanged.
- /chat-history endpoint shape unchanged (still {messages, reached_end}); only the wire ORDER within a page changed (newest-first rows → oldest-first rows). Canvas is the only consumer in tree; no API consumers depend on the previous order.
- canvas's loadMessagesFromDB call signature unchanged — internal refactor.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

## Symptom

Canvas detail-panel "config + filesystem load" took ~20s. Reported on the production hongming tenant, workspace c7c28c0b-... (Claude Code Agent T2).

## Two stacked latency sources

### 1. Server-side: per-call EIC tunnel setup (~80% of the win)

`workspace-server/internal/handlers/template_files_eic.go::realWithEICTunnel` performed ssh-keygen + SendSSHPublicKey + open-tunnel + waitForPort PER call. Its 4 callers (read/write/list/delete) each paid the full ~3-5s setup cost even when fired back-to-back against the same workspace EC2.

Fix: a refcounted pool keyed on instanceID with TTL ≤ 50s (under the 60s SendSSHPublicKey grant). One tunnel serves N file ops; concurrent acquires for the same instance share the slot via a pendingSetups gate; LRU eviction caps simultaneously tracked instances at 32. Entries are poisoned on tunnel-fatal errors (connection refused, broken pipe, auth failed) so the next acquire builds fresh. Cleanup on panic is handled via a defer-release pattern (added after self-review caught a refcount-leak hazard).

Public API unchanged — `var withEICTunnel` rebinds to `pooledWithEICTunnel` at package init, so all 4 callers inherit pooling for free.
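A heavily simplified sketch of the pooling shape. The type and constructor names (`tunnelPool`, `entry`, `newTunnelPool`) are illustrative, and the in-tree locking, pendingSetups gate, and LRU eviction are omitted here:

```go
package handlers // illustrative package name

import (
	"sync"
	"time"
)

// entry is one pooled tunnel for a single EC2 instance.
type entry struct {
	refs     int       // active file ops using this tunnel
	expires  time.Time // TTL deadline, kept under the 60s SendSSHPublicKey grant
	poisoned bool      // set on tunnel-fatal errors; forces a fresh setup next time
	close    func()    // tears the tunnel down
}

// tunnelPool shares one tunnel per instanceID across file ops.
type tunnelPool struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[string]*entry
}

func newTunnelPool(ttl time.Duration) *tunnelPool {
	return &tunnelPool{ttl: ttl, entries: make(map[string]*entry)}
}

// acquire returns a live entry for instanceID, building a tunnel only when no
// healthy, unexpired entry exists.
func (p *tunnelPool) acquire(instanceID string, setup func() (func(), error)) (*entry, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if e, ok := p.entries[instanceID]; ok && !e.poisoned && time.Now().Before(e.expires) {
		e.refs++
		return e, nil
	}
	// Simplified: the real pool gates concurrent setups per instance instead of
	// holding the lock across the whole setup (ssh-keygen + SendSSHPublicKey +
	// open-tunnel + waitForPort).
	closeFn, err := setup()
	if err != nil {
		return nil, err
	}
	e := &entry{refs: 1, expires: time.Now().Add(p.ttl), close: closeFn}
	p.entries[instanceID] = e
	return e, nil
}

// release drops a reference; poison marks the entry unusable. The tunnel is
// torn down once unused and either poisoned or past its TTL.
func (p *tunnelPool) release(instanceID string, e *entry, poison bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	e.refs--
	if poison {
		e.poisoned = true
	}
	if e.refs <= 0 && (e.poisoned || time.Now().After(e.expires)) {
		e.close()
		if p.entries[instanceID] == e {
			delete(p.entries, instanceID)
		}
	}
}
```

A caller would pair `acquire` with a deferred `release`, deciding the poison flag from whether the wrapped file op (or a panic) produced a tunnel-fatal error.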
10 unit tests pin: 4-ops-amortise (1 setup), different-instances-do-not-share, TTL eviction, poison invalidates, concurrent-acquire-single-setup, TTL=0 escape hatch, LRU eviction at cap, error classification heuristic, refcount blocks expired eviction, panic poisons entry. All green.

### 2. Canvas-side: serial fan-out + duplicate fetch (~20% of the win)

`canvas/src/components/tabs/ConfigTab.tsx::loadConfig` awaited 3 independent metadata GETs (`/workspaces/{id}`, `/model`, `/provider`) serially. `AgentCardSection` fired a SECOND `/workspaces/{id}` from its own useEffect.

Fix: Promise.all over the 3 metadata GETs (each leg keeps its existing .catch fallback semantics). AgentCardSection now reads `agentCard` from the canvas store (`useCanvasStore`) instead of refetching — the canvas already hydrates `node.data.agentCard` from the platform event stream. A defensive selector handles test mocks without a `nodes` array.

## Verification

- `go test ./internal/handlers/` 5.07s green (full handlers package, including the 10 new pool tests)
- `go vet ./internal/handlers/` clean
- `npx vitest run` — 1380/1380 canvas unit tests pass (2 test FILES fail on a pre-existing xyflow CSS-load issue in the vitest config, unrelated to this change)
- `npx tsc --noEmit` clean

Live wall-time verification is deferred to Phase 4 / E2E (a canvas browser session is required; an external probe is blocked by 403 since the canvas auth chain is session-cookie + Origin header, not a bearer token I can fabricate).

## Backwards compatibility

API surface unchanged. All 4 EIC handler callers use the rebound var; no caller migration. The pool defaults to enabled (TTL=50s); tests can disable it by setting poolTTL=0 or by overwriting withEICTunnel directly (the existing stub pattern in template_files_eic_dispatch_test.go is preserved).

## Hostile self-review (3 weakest spots)

1. `fnErrIndicatesTunnelFault` is a substring grep on err.Error() — the marker list is hand-curated and ssh client error formats vary across OpenSSH versions. A future ssh that reports a tunnel failure via a phrasing not in the list would NOT poison the entry → the next callers reuse a dead tunnel until TTL evicts it. Acceptable: the TTL bounds the impact (≤50s of bad reuse), and the heuristic covers every tunnel-error shape that appears in the existing test fixtures and known incidents. (The heuristic's shape is sketched after this list.)
2. `acquire`'s for-loop has unbounded retry potential under pathological churn (signal closed → new acquirer → setup fails → repeat). No bounded retry counter. Today there is no test exercising "flaky setup that succeeds-then-fails-then-succeeds"; if observability ever shows this shape, add a max-retry guard. Filed as a known limitation, not blocking.
3. The substring-assertion `strings.Contains` style used for tunnel-fault classification could false-positive on app-level error messages that happen to contain "permission denied" or "broken pipe" verbatim. The classification test covers the discriminator, but only against the error shapes we know today. Acceptable: poisoning errs on the side of building fresh, which is correct-but-slightly-slow rather than incorrect.
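A sketch of that classification heuristic; the marker list below is illustrative and the in-tree `fnErrIndicatesTunnelFault` name and markers may differ:

```go
package handlers // illustrative package name

import "strings"

// tunnelFaultMarkers is hand-curated; substring-matching err.Error() is a
// heuristic, not a contract (see weakest spots 1 and 3 above).
var tunnelFaultMarkers = []string{
	"connection refused",
	"broken pipe",
	"permission denied",
}

// errIndicatesTunnelFault reports whether err looks tunnel-fatal, i.e. the
// pooled entry should be poisoned so the next acquire builds a fresh tunnel.
func errIndicatesTunnelFault(err error) bool {
	if err == nil {
		return false
	}
	msg := strings.ToLower(err.Error())
	for _, marker := range tunnelFaultMarkers {
		if strings.Contains(msg, marker) {
			return true
		}
	}
	return false
}
```

A false positive only costs an extra tunnel setup (the entry is rebuilt fresh), which is the failure mode the change deliberately accepts.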
## Phase 4 / E2E plan

- Live timing of the canvas detail-panel open against a real workspace (browser session, not an external probe).
- Target: perceived latency under 2s on a warm pool. A cold open still pays one tunnel setup (~3-5s) — the pool buys you the SECOND through Nth panel-open within the TTL window.
- Memory `feedback_chase_verification_to_staging` applies — will not declare done at PR-merge; will follow through to user-visible behavior on staging.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

The molecule-ai-workspace-runtime mirror is regenerated on every runtime-v* tag from this monorepo's workspace/. Per saved memory reference_runtime_repo_is_mirror_only, mirror-guard rejects direct PRs to the mirror; edit at the source. Source-side files that propagate to the mirror's published README and are read by users of the in-monorepo workspace-runtime docs:

- scripts/build_runtime_package.py (the README generator):
  * line 281 README_TEMPLATE: 'Shared workspace runtime for Molecule AI' link → Gitea
  * line 399 doc-link to workspace-runtime-package.md → Gitea path (with the /src/branch/main/ shape)
  * LEFT AS-IS (per the Q3 audit-trail decision): lines 379, 392 historical issue cross-refs (#2936, #2937)
- workspace/build-all.sh:5 — comment block linking to the template-* repos. Migrated to the Gitea path shape.
- docs/workspace-runtime-package.md:
  * lines 101-108 adapter→repo table (8 templates, all PUBLIC on Gitea) — Gitea URLs
  * line 247 starter-repo link — substituted the host and added an inline note that the starter doesn't survive the suspension migration (recreation pending; cross-linked to this issue)
  * line 259 generic git clone command for new templates → Gitea
  * line 289 second starter mention — same handling as line 247

Files NOT touched in this PR:

- workspace/ Python source code (.py files) — those use github paths in docstrings plus a few log strings; the fix is bundled with the cross-repo Go-module-style migration (per #37 Q5 + parked follow-ups).
- The 'Writing a new adapter' section's `gh repo create` command (lines 254-256) — the gh CLI doesn't talk to Gitea (per the #45 parked follow-up).
- The 'Writing a new adapter' section's ghcr.io image ref (line 276) — per the #46 ghcr→ECR migration (separate concern).

After this PR merges to staging and a runtime-v* tag is pushed, the mirror's published README will inherit the Gitea link. Until then the mirror's README continues to reference github.com/Molecule-AI (stale, but historical-marker-correct since the mirror existed pre-suspension).

Refs: molecule-ai/internal#41, molecule-ai/internal#37, molecule-ai/internal#38, molecule-ai/internal#42, molecule-ai/internal#45, molecule-ai/internal#46

Promote staging→main: vitest config bump, handlers-postgres parallel-safe, e2e-api parallel-safe, eic-tunnel-pool race fix, SaaS plugin EIC + all staging-accumulated fixes. Sister-agent verified Platform (Go) green with the race fix included. 29/29 real-signal green. Approved.