Add 10 tests for StatusDot covering:
- All known STATUS_CONFIG statuses (online, offline, degraded,
failed, paused, not_configured, provisioning)
- Correct color class applied per status
- Glow class applied when declared in STATUS_CONFIG
- motion-safe:animate-pulse on provisioning status
- Fallback to bg-zinc-500 for unknown status
- size prop (sm/md) applies correct Tailwind dimension class
- aria-hidden="true" for accessibility tree isolation
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Fix arrow-key nudge description: was "20px/100px" (wrong), now "10px/50px" (matches useKeyboardShortcuts)
- Add Cmd/Ctrl+Arrow resize shortcut row to dialog (missing since PR #192)
- Fix 3 tests in useKeyboardShortcuts.test.tsx that asserted shrink below min dimensions:
"resizes height down" expected height:100, clamped to 110 (node starts at minHeight)
"resizes width down" expected width:200, clamped to 210 (node starts at minWidth)
"2px step with Shift" expected height:108, clamped to 110 (minHeight wins)
All three tests updated to assert clamped values with explanatory comments.
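The clamping the updated assertions encode, sketched as a pure helper (function and parameter names are hypothetical; the 210/110 minimums come from the test descriptions above):

```typescript
// Sketch of resize clamping: apply the delta, then clamp to minimums.
const MIN_WIDTH = 210; // minWidth from "resizes width down" test
const MIN_HEIGHT = 110; // minHeight from "resizes height down" test

function clampResize(
  current: { width: number; height: number },
  delta: { dw: number; dh: number },
): { width: number; height: number } {
  return {
    width: Math.max(MIN_WIDTH, current.width + delta.dw),
    height: Math.max(MIN_HEIGHT, current.height + delta.dh),
  };
}
```

A node already at minHeight shrunk by 10 stays at 110, and a 2px Shift step from 110 yields 108 before clamping, so minHeight wins, matching the three fixed tests.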
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Pins all FROM image tags to exact SHA256 digests for reproducible
builds. Without digest pinning, a registry push of a new image to the
same tag can silently change the layer content between builds — a
supply-chain risk especially for prod-deployed images.
Pinned images (7 Dockerfiles):
- golang:1.25-alpine → sha256:c4ea15b... (workspace-server/Dockerfile,
Dockerfile.dev, Dockerfile.tenant, tests/harness/cp-stub/Dockerfile)
- alpine:3.20 → sha256:c64c687c... (workspace-server/Dockerfile,
tests/harness/cp-stub/Dockerfile)
- node:20-alpine → sha256:afdf982... (workspace-server/Dockerfile.tenant)
- node:22-alpine → sha256:cb15fca... (canvas/Dockerfile)
- python:3.11-slim → sha256:e78299e... (workspace/Dockerfile)
- nginx:1.27-alpine → sha256:62223d6... (tests/harness/cf-proxy/Dockerfile)
Note: docker-compose.yml service images (postgres, redis, clickhouse,
litellm, ollama) are intentionally left on major-version tags — those
are runtime-pulled and updated regularly for local-dev ergonomics.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Add 10 tests covering the Cmd/Ctrl+Arrow resize shortcut:
- ArrowUp/Down resizes height (−/+10px)
- ArrowLeft/Right resizes width (−/+10px)
- Shift modifier uses 2px step for fine control
- min-height constraint respected when shrinking
- Guard: no-op when no node selected
- Guard: skipped when modal dialog is open
- Plain arrow keys (no modifier) fire moveNode instead
- Alt+Arrow is skipped (not a resize combo)
Also extends the mock store state with `onNodesChange` and node
`width`/`height` fields needed for the resize tests.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Replace all text-ink-soft usages across canvas components and app pages.
ink-soft (#8d92a0) on dark zinc (#0e1014) yields ~2.2:1 contrast,
failing WCAG 2.1 AA minimum of 4.5:1 for normal text.
ink-mid (#c8c2b4) on dark zinc yields ~7.6:1 — well above AA.
text-ink-mid is already the semantic token for secondary/caption text
in the warm-paper light mode; the dark-mode override was the gap.
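The quoted ratios use the WCAG 2.1 relative-luminance contrast formula, which can be computed as:

```typescript
// WCAG 2.1 contrast ratio (per the §1.4.3 relative-luminance definition).
function srgbToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex: string): number {
  const n = parseInt(hex.replace("#", ""), 16);
  const [r, g, b] = [(n >> 16) & 255, (n >> 8) & 255, n & 255].map(srgbToLinear);
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

function contrastRatio(fg: string, bg: string): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05); // lighter over darker, always >= 1
}
```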
52 files, 268 replacements. No functional change beyond contrast.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Cmd/Ctrl+Arrow Up/Down resizes node height (±10px, ±2px with Shift).
Cmd/Ctrl+Arrow Left/Right resizes node width (±10px, ±2px with Shift).
Uses the same onNodesChange('dimensions') path that NodeResizer uses
— no new store action needed. Respects min-width/min-height matching
the NodeResizer constraints (360×200 with children, 210×110 without).
The Arrow-key move shortcut now skips when a modifier key is held,
so Cmd/Ctrl+Arrow unambiguously means resize (not move).
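The key-to-delta mapping described above, sketched as a pure function (illustrative only, not the actual useKeyboardShortcuts code):

```typescript
// Cmd/Ctrl+Arrow resizes by 10px, or 2px with Shift for fine control.
type ResizeDelta = { dw: number; dh: number } | null;

function resizeDeltaFor(e: {
  key: string;
  metaKey: boolean;
  ctrlKey: boolean;
  shiftKey: boolean;
}): ResizeDelta {
  if (!e.metaKey && !e.ctrlKey) return null; // plain arrows mean move, not resize
  const step = e.shiftKey ? 2 : 10;
  switch (e.key) {
    case "ArrowUp": return { dw: 0, dh: -step };
    case "ArrowDown": return { dw: 0, dh: step };
    case "ArrowLeft": return { dw: -step, dh: 0 };
    case "ArrowRight": return { dw: step, dh: 0 };
    default: return null;
  }
}
```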
Updates canvas audit doc: Node Rendering section updated and
the LOW node-resize item marked done. All Remaining Gaps items
are now complete.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Target handle (top of card): Enter/Space extracts this node from
its parent, moving it to the root level.
Source handle (bottom of card): Enter/Space nests the currently
selected node as a child of this node (requires another node to be
selected first).
Both handles gain tabIndex=0, role="button", a descriptive aria-label,
and a blue focus ring so keyboard-only users can navigate the
workspace hierarchy without a mouse. Uses the existing nestNode store
action — no new API surface needed.
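The accessible-handle wiring amounts to a props bundle like the following sketch (names hypothetical; the real handles also carry the focus-ring classes):

```typescript
// tabIndex + role="button" + aria-label, with Enter/Space activation.
function accessibleHandleProps(label: string, onActivate: () => void) {
  return {
    tabIndex: 0, // reachable by Tab
    role: "button",
    "aria-label": label,
    onKeyDown: (e: { key: string; preventDefault: () => void }) => {
      // Enter and Space are the activation keys for role="button"
      if (e.key === "Enter" || e.key === " ") {
        e.preventDefault(); // stop Space from scrolling the canvas
        onActivate();
      }
    },
  };
}
```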
Updates the canvas audit doc to mark the LOW edge-anchor item done.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Closes canvas audit item: MEDIUM keyboard-accessible node drag.
- Arrow keys move the selected node by 10px per press; Shift+Arrow
moves by 50px. Position is persisted to the backend via savePosition.
- The modal-dialog guard (same pattern as ? shortcut) prevents Arrow
keys from moving nodes when a modal like KeyboardShortcutsDialog is
open — dialogs own their own arrow semantics.
- All shortcuts guarded by the inInput check so Arrow keys still work
for text navigation inside inputs/textareas.
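The guard chain above can be sketched as a pure function (illustrative, not the real hook):

```typescript
// Arrow keys move 10px, Shift+Arrow 50px; skipped inside inputs or
// while a modal dialog is open (dialogs own their arrow semantics).
function moveDeltaFor(
  e: { key: string; shiftKey: boolean },
  ctx: { inInput: boolean; modalOpen: boolean },
): { dx: number; dy: number } | null {
  if (ctx.inInput || ctx.modalOpen) return null;
  const step = e.shiftKey ? 50 : 10;
  const map: Record<string, { dx: number; dy: number }> = {
    ArrowUp: { dx: 0, dy: -step },
    ArrowDown: { dx: 0, dy: step },
    ArrowLeft: { dx: -step, dy: 0 },
    ArrowRight: { dx: step, dy: 0 },
  };
  return map[e.key] ?? null;
}
```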
Changes:
- canvas.ts: new moveNode(dx, dy) store action — updates position
directly without the grow-parents pass that onNodesChange runs on
every drag tick (avoids edge-chase flicker).
- useKeyboardShortcuts.ts: Arrow key handler added.
- canvas.test.ts: new moveNode unit tests (position update, no-op,
savePosition call).
- useKeyboardShortcuts.test.tsx: new integration tests for all
keyboard shortcuts including the new Arrow key handlers.
- canvas-audit-items.md: Keyboard Shortcuts section upgraded to ✅,
drag item marked done.
- canvas-events.test.ts: fix pre-existing double-}); syntax error.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Closes the "no keyboard shortcut help dialog" audit gap (MEDIUM).
Changes:
- Add KeyboardShortcutsDialog component: portal-based, accessible
dialog listing all canvas + navigation + agent shortcuts grouped by
category. WCAG 2.1 compliant (focus trap, Esc close, aria-modal,
aria-labelledby, focus restoration on close).
- Add global ? shortcut: opens the dialog when pressed outside any
input field and no modal is already open.
- Add "See all shortcuts →" link in the Toolbar quick-start popup
linking to the dialog.
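The open condition for the global shortcut reduces to a guard like this sketch (not the actual handler):

```typescript
// "?" opens the help dialog only outside inputs and with no modal open.
function shouldOpenShortcutHelp(
  e: { key: string },
  ctx: { inInput: boolean; modalOpen: boolean },
): boolean {
  return e.key === "?" && !ctx.inInput && !ctx.modalOpen;
}
```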
Test plan:
- [x] npx vitest run (182 tests pass)
- [x] tsc --noEmit (no type errors)
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Issue: HIGH priority item from canvas accessibility audit (2026-05-09).
Screen reader users had no way to know when workspace status changed
— the canvas updated visually but no announcement was made.
Changes:
- canvas.ts: add `liveAnnouncement: string` + `setLiveAnnouncement` to
CanvasState so the store can hold the current announcement text.
- canvas-events.ts: set `liveAnnouncement` in handleCanvasEvent for 7
key status transitions: ONLINE, OFFLINE, PAUSED, DEGRADED, PROVISIONING,
REMOVED, PROVISION_FAILED. Names are looked up from store nodes so
announcements are human-readable ("Alpha is now online" not "ws-1").
TASK_UPDATED and AGENT_MESSAGE are intentionally excluded — they fire
on every heartbeat and would overwhelm the user.
- Canvas.tsx: subscribe to `liveAnnouncement` from the store; render a
visually-hidden `aria-live="polite" aria-atomic="true"` region that
speaks the announcement then clears it after 500 ms so the same
message doesn't re-announce on re-render. Fallback still announces
workspace count on initial load.
- canvas-events.test.ts: 12 new test cases covering announcement
content for all 7 event types, empty/no-announcement cases, and
payload-name fallback when a node isn't yet in the store.
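The announce-then-clear behaviour can be sketched outside React like this (names hypothetical; the real code stores the string in the Zustand store and renders it into the aria-live region):

```typescript
// Hold one announcement string, then clear it after a short delay so
// the same text is not re-announced on unrelated re-renders.
function makeAnnouncer(clearDelayMs = 500) {
  let current = "";
  let timer: ReturnType<typeof setTimeout> | undefined;
  return {
    announce(text: string) {
      current = text;
      if (timer !== undefined) clearTimeout(timer);
      timer = setTimeout(() => { current = ""; }, clearDelayMs);
    },
    get current() { return current; },
  };
}
```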
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Issue #159: successful delegation responses were rendered as error
banners because extractResponseText() only handled the A2A result
format (body.result.parts[].text) but delegation.go stores
response_body as {text: "...", delegation_id: "..."}. The error
status was set when the HTTP transport failed even though the actual
agent response was received.
Fixes:
1. extractResponseText: check body.text before the result path so
delegation response_body.text is extracted correctly
2. extractResponseText: also check body.response_preview (WS event shape
from DELEGATION_COMPLETE handler)
3. GroupedCommsView: render NormalMessage when status=error but
responseText is populated (delegation succeeded, transport failed)
instead of burying the content in an error banner
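The precedence the fix establishes looks like the following sketch (simplified from the real extractResponseText; the field names come from the commit):

```typescript
// Delegation shapes first, then the A2A result format.
type Body = {
  text?: unknown; // delegation.go shape: { text, delegation_id }
  response_preview?: unknown; // WS DELEGATION_COMPLETE event shape
  result?: { parts?: Array<{ text?: unknown }> }; // A2A result format
};

function extractResponseTextSketch(body: Body): string | null {
  if (typeof body.text === "string" && body.text !== "") return body.text;
  if (typeof body.response_preview === "string" && body.response_preview !== "") {
    return body.response_preview;
  }
  const part = body.result?.parts?.find((p) => typeof p.text === "string");
  return (part?.text as string) ?? null;
}
```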
Tests: 8 new cases (4 extractResponseText + 2 extractRequestText
regression + 2 render tests). 189 tests pass across 10 files.
Closes #159.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Agent Comms tab rendered outbound delegations as blank bubbles because
extractRequestText only checked the A2A JSON-RPC format
(body.params.message.parts[].text) while delegation.go stores
request_body as {"task": "...", "delegation_id": "..."}.
Fix: check body.task first for delegation activities, then fall back to
the A2A format. Add six test cases covering the delegation shape,
precedence over A2A params when both present, empty-string guard, and
non-string type guard.
Closes #158.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Closes the post-PR-#174 self-review gap: the matched-pair contract
between ADMIN_TOKEN (server-side bearer gate) and NEXT_PUBLIC_ADMIN_TOKEN
(canvas client-side bearer attach) was descriptive only, living in a
.env file comment. Future agents/devs could re-misconfigure with one
of the two unset and silently 401 — every workspace API call refused
with no actionable diagnostic.
Adds checkAdminTokenPair() to canvas/next.config.ts, run after
loadMonorepoEnv() so it sees the post-load state. Two distinct
warnings (server-set/client-unset and the inverse) so an operator can
tell which half is missing without grep'ing. Empty string is treated
as unset so KEY= and unset KEY produce the same verdict.
Warn-only, not exit — production canvas Docker images bake these vars
at image-build time and a hard exit would turn a recoverable auth
issue into a crashloop. The console.error fires in `next dev`, the
standalone server's stdout, and the canvas Docker container logs —
the three places an operator looks when "everything 401s."
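The check reduces to logic like this sketch (the real checkAdminTokenPair lives in canvas/next.config.ts; the warning texts here are illustrative, not the exact pinned strings):

```typescript
// Matched-pair check: warn when only one half of the token pair is
// set. Empty string counts as unset, so KEY= and unset KEY agree.
function adminTokenPairWarnings(env: Record<string, string | undefined>): string[] {
  const isSet = (v: string | undefined) => v !== undefined && v !== "";
  const server = isSet(env.ADMIN_TOKEN);
  const client = isSet(env.NEXT_PUBLIC_ADMIN_TOKEN);
  const warnings: string[] = [];
  if (server && !client) {
    warnings.push("ADMIN_TOKEN set but NEXT_PUBLIC_ADMIN_TOKEN unset: canvas API calls will 401");
  }
  if (!server && client) {
    warnings.push("NEXT_PUBLIC_ADMIN_TOKEN set but ADMIN_TOKEN unset: bearer sent but never checked");
  }
  return warnings; // warn-only: callers log, never exit
}
```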
Tests pin exact stderr strings (per feedback_assert_exact_not_substring)
across 6 cases: both unset, both set, ADMIN_TOKEN-only, NEXT_PUBLIC-only,
empty-string-as-unset, and the empty-string-asymmetric mismatch.
Mutation-tested: flipping the if-condition from === to !== fails all 6.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The forks pool's implicit maxWorkers=1 (2-CPU runner) was insufficient
to prevent concurrent jsdom worker cold-starts. Each jsdom worker
allocates ~30-50 MB RSS at boot; multiple workers starting simultaneously
exhaust available memory, causing 5 test files to fail with:
[vitest-pool]: Failed to start forks worker for test files ...
[vitest-pool-runner]: Timeout waiting for worker to respond
Individual jsdom test files take 12-15 s in isolation and pass cleanly.
Failures only occur when 51 files are run together through the pool.
Fix: explicitly set maxWorkers:1 so a single worker processes all files
sequentially, eliminating concurrent jsdom bootstrap memory pressure.
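The change amounts to a config pin along these lines (a sketch assuming the suite already uses the forks pool; surrounding config omitted):

```typescript
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    pool: "forks",
    // One worker: jsdom files bootstrap strictly one at a time, so
    // their ~30-50 MB cold-start allocations never overlap on the
    // memory-constrained 2-CPU runner.
    maxWorkers: 1,
  },
});
```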
With this change, all 51 files pass (was 46 pass + 5 fail), and suite
duration improves from ~5070 s to ~1117 s because workers no longer
compete for resources during startup.
Ref: issue #148
Ref: vitest-pool investigation for issue #22 (canvas side)
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Replaces the legacy nohup `go run ./cmd/server` setup with a fully
containerized local stack: postgres + redis + platform + canvas, all
with `restart: unless-stopped` so they survive Mac sleep/wake and
Docker Desktop daemon restarts.
## Changes
- **docker-compose.yml**
- `restart: unless-stopped` on platform/postgres/redis
- `BIND_ADDR=0.0.0.0` for platform — the dev-mode-fail-open default
of 127.0.0.1 (PR #7) made the host unable to reach the container
even with port mapping. Container netns is already isolated, so
binding all interfaces inside is safe.
- Healthchecks switched from `wget --spider` (HEAD → 404 forever
because /health is GET-only) to `wget -qO /dev/null` (GET).
Same regression existed on canvas; fixed both.
- **workspace-server/Dockerfile.dev**
- `CGO_ENABLED=1` → `0` to match prod Dockerfile + Dockerfile.tenant.
Without this, the alpine dev image fails with "gcc: not found"
because workspace-server has no actual cgo deps but the env was
forcing the cgo build path. Closes a divergence introduced in
9d50a6da (today's air hot-reload PR).
- **canvas/Dockerfile**
- `npm install` → `npm ci --include=optional` for lockfile-exact
installs that include platform-specific @tailwindcss/oxide native
binaries. Without these, `next build` fails with "Cannot read
properties of undefined (reading 'All')" on the
`@import "tailwindcss"` directive.
- **canvas/.dockerignore** (new)
- Excludes `node_modules` and `.next` so the Dockerfile's
`COPY . .` step doesn't clobber the freshly-installed container
node_modules with the host's (potentially stale or wrong-arch)
copy. This was the actual root cause of the canvas build break.
- **workspace-server/.gitignore**
- Adds `/tmp/` for air's live-reload build cache.
## Stage A verified
```
container status restart
postgres-1 Up (healthy) unless-stopped
redis-1 Up (healthy) unless-stopped
platform-1 Up (healthy, air-mode) unless-stopped
canvas-1 Up (healthy) unless-stopped
GET :8080/health → 200
GET :3000/ → 200
DB preserved: 407 workspace rows + 5 named personas
Persona mount: 28 dirs at /etc/molecule-bootstrap/personas
```
## Stage B — N/A
This is local-dev infrastructure only. None of these files ship to
SaaS tenants — production EC2s use `Dockerfile.tenant` + `ec2.go`
user-data, not docker-compose.
## Out of scope
- The decorative-but-broken `wget --spider` healthcheck has presumably
also been silently 404'ing on prod tenants. Ship a follow-up to
audit + fix the prod path; not done here to keep the PR scoped.
- Docker Desktop "Start at login" is a per-machine GUI setting that
must be toggled manually (Settings → General).
- The legacy heartbeat-all.sh that pinged 5 persona workspaces from
the host has been deleted (~/.molecule-ai/heartbeat-all.sh).
Per Hongming: each workspace is responsible for its own heartbeat.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Class A red sweep — 3 first-tests timing out at the 5000ms default on the
self-hosted Gitea Actions Docker runner across 4 unrelated PRs (#82, #81,
#54, #53). The PRs share zero canvas/ surface — same 3 tests, same
cold-start signature, same shape on every run.
Root cause: `npx vitest run --coverage` cold-start cost (v8 coverage
instrumentation init + JSDOM bootstrap + heavy @/components/* and @/lib/*
import + first React render) consumes 5-7 seconds for the first
synchronous test in a heavyweight test file. Empirically:
ActivityTab "renders all 7 filter options" 5230ms (FAIL)
CreateWorkspaceDialog "opens the dialog ..." 6453ms (FAIL)
ConfigTab.provider "PUTs the new provider on Save" 5605ms (FAIL)
vs subsequent tests in the same files at 100-1500ms each. The component
code is correct (e.g. ActivityTab.FILTERS has 7 entries matching the
test). 1407 tests pass locally with --coverage in 9-15s; CI runs at 200s
under the same flag — the gap is import/transform/environment overhead,
not test logic.
Fix: CI-conditional `testTimeout: process.env.CI ? 30000 : 5000` in
canvas/vitest.config.ts. Local-dev sensitivity to genuine waitFor races
preserved; CI gets ~5x headroom over the worst observed first-test
(6453ms). Same shape Vitest documents at
<https://vitest.dev/config/testtimeout> and
<https://vitest.dev/guide/coverage#profiling-test-performance>.
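The ternary, factored into a testable helper for illustration (the actual change is an inline expression in canvas/vitest.config.ts):

```typescript
// Any non-empty CI value (Actions sets CI=true) gets the 30s headroom;
// local runs keep the strict 5000ms default.
function resolveTestTimeout(ci: string | undefined): number {
  return ci ? 30000 : 5000;
}
```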
Verification:
- Local: 5x runs of the 3 failing test files, all 74 tests green
(process.env.CI unset → 5000ms applies).
- Local: 7s sleep probe FAILS at 5000ms default and PASSES under
CI=true → ternary takes effect as written.
- Local: full canvas suite under CI=true with --coverage:
"Test Files 98 passed (98) | Tests 1407 passed (1407)".
Closes #96.
Refs: #82, #81, #54, #53.
Hostile self-review (3 weakest spots):
1. 30000ms is a guess, not a measurement. Mitigation: vitest still
emits per-test duration; a real 25s+ test will surface as a
duration regression and we dial down.
2. Doesn't fix the Docker-runner-overhead root-root-cause. True. That
is a multi-week perf project. The right trade today is unblocking 4
PRs from this single class.
3. Local-default of 5000ms means a real 8s race that flies on CI's
30000ms could pass without local sensitivity. Mitigation: dev-time
waitFor races are caught at the per-test level; suite-level cold-
start is the only legitimate >5s case here.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stage 3 of #61 (final stage). Replaces the 5s setInterval poll with:
1. Initial bootstrap on mount + on filter-change + on workspaceId-
change (preserved from existing useEffect on loadActivities).
2. Manual Refresh button (preserved — still triggers loadActivities).
3. useSocketEvent subscription to ACTIVITY_LOGGED — every event
for THIS workspace prepends to the list, gated on the user's
autoRefresh toggle and current filter selection.
No interval poll. Steady-state HTTP traffic from this tab drops from
12 req/min (5s × 1 active workspace) to 0 outside of bootstraps and
manual refreshes. Live update latency drops from up to 5s to ~10ms.
The autoRefresh ("Live" / "Paused") toggle now gates LIVE updates
instead of polling cadence — semantically the same (paused = list
stays frozen) and simpler to implement.
The filter selection is honoured by the WS handler so a user
filtering to "Tasks" doesn't see live a2a_send rows trickle in. Same
shape the server-side `?type=<filter>` enforces on the bootstrap.
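The gate chain the WS handler applies before prepending a row can be sketched as (names illustrative):

```typescript
// Workspace match, Live toggle, active filter, then dedup-by-id,
// mirroring the server-side ?type= bootstrap filter.
type Activity = { id: string; workspace_id: string; activity_type: string };

function shouldPrepend(
  evt: Activity,
  ctx: { workspaceId: string; autoRefresh: boolean; filter: string | null; listIds: Set<string> },
): boolean {
  if (evt.workspace_id !== ctx.workspaceId) return false; // other workspace
  if (!ctx.autoRefresh) return false; // "Paused" freezes the list
  if (ctx.filter !== null && evt.activity_type !== ctx.filter) return false; // honour filter
  if (ctx.listIds.has(evt.id)) return false; // already rendered: dedup
  return true;
}
```

Each `false` branch corresponds to one of the mutation tests listed below it: dropping a branch makes the matching test fail.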
Test changes:
- 27 existing tests pass unchanged (filter / autoRefresh /
Refresh / loading / error / empty / count / row-content all
preserved)
- 7 new WS-subscription tests:
- WS push for matching workspace prepends with NO HTTP call
- WS push for different workspace ignored
- WS push respects active filter (non-matching ignored)
- WS push respects active filter (matching renders)
- WS push while autoRefresh paused ignored
- WS push for already-in-list row deduped (no double-render)
- NO 5s interval polling after mount
Mutation-tested:
- drop workspace_id filter → "different workspace" test fails
- drop autoRefresh gate → "paused" test fails
- drop filter gate → "non-matching activity_type" test fails
- drop dedup-by-id → "already in list deduped" test fails
Full canvas suite: 1396 passing, 0 failing. tsc clean.
No API or schema change. /workspaces/:id/activity HTTP endpoint
stays — used for bootstrap + manual refresh + filter-change reload.
ACTIVITY_LOGGED event shape unchanged.
Hostile self-review (three weakest spots):
1. Server-side activity_logs row UPDATES (status flips, etc.) are
not reflected post-#61 — the dedup-by-id check skips a re-fired
ACTIVITY_LOGGED for an existing row. Acceptable: activity_logs
is append-only by design (audit trail); status updates surface
as new task_update rows, not as in-place mutations. If a future
server change adds in-place updates, fire ACTIVITY_UPDATED as a
distinct event so this dedup logic stays simple.
2. WS handler is recreated on every render (filter / autoRefresh /
workspaceId state changes). useSocketEvent's ref-based pattern
keeps the bus subscription stable, but the handler closure
re-captures each render. Side effect: fine — handler call cost
is negligible.
3. The "error" filter matches activity_type === "error" (mirrors
server semantics). It does NOT match status === "error" rows
of other activity types — same as the polling version. Worth
re-evaluating in a separate PR if users expect the broader
semantic.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stage 2 of #61. Replaces the 60s setInterval poll that fanned out
across every visible workspace fetching `?type=delegation&limit=500`
with:
1. One bootstrap fan-out on mount (or on visible-ID-set change),
same shape as before — preserves the 60-min look-back history.
2. useSocketEvent subscription to ACTIVITY_LOGGED — every event
with activity_type=delegation + method=delegate from a visible
workspace appends to a local rolling buffer, edges are re-derived
via the existing buildA2AEdges helper.
3. showA2AEdges toggle off: clears edges + buffer.
No interval poll. The visibleIdsKey selector gate that fixed the
2026-05-04 render-loop incident is preserved — peer-discovery /
status-flip writes still don't trigger a wasteful re-bootstrap.
Steady-state HTTP traffic from this overlay drops from N req/min
(N visible workspaces × 1 cycle/min) to 0 outside of mount + visible-
ID-set-change bootstraps. Live update latency drops from up to 60s
to ~10ms.
Bootstrap race-aware: any WS arrivals that landed in the buffer
during the fetch await are preserved by id-dedup-with-fetched-first
ordering. No row is double-counted; no row is lost during in-flight
updates.
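The id-dedup-with-fetched-first merge can be sketched as (illustrative helper, not the actual code):

```typescript
// Rows that arrived over WS during the bootstrap await are kept, but a
// row present in both sources counts once, preferring the fetched copy.
function mergeFetchedFirst<T extends { id: string }>(fetched: T[], buffered: T[]): T[] {
  const seen = new Set(fetched.map((r) => r.id));
  return [...fetched, ...buffered.filter((r) => !seen.has(r.id))];
}
```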
Test changes:
- 27 existing tests pass unchanged (buildA2AEdges purity preserved,
component visibility/visibleIdsKey/error-swallow behaviour
preserved).
- 6 new WS-subscription tests:
- NO 60s polling after bootstrap (clock advance fires nothing)
- WS push for delegation updates edges with NO HTTP call
- WS push for non-delegation activity_type ignored
- WS push for delegate_result ignored (mirrors buildA2AEdges
method filter)
- WS push from hidden workspace ignored
- WS push while showA2AEdges=false ignored
Mutation-tested:
- drop activity_type filter → "non-delegation" test fails
- drop method===delegate filter → "delegate_result" test fails
- drop visible-ws membership filter → "hidden workspace" test fails
Full canvas suite: 1395 passing, 0 failing. tsc clean.
No API or schema change. ACTIVITY_LOGGED event shape unchanged.
The /workspaces/:id/activity HTTP endpoint stays — used for bootstrap.
Hostile self-review (three weakest spots):
1. Bootstrap fetches up to 500 rows × N workspaces. Worst-case
buffer ~3000 entries before window-prune. Acceptable: window-
prune runs on every recomputeAndPush, buildA2AEdges aggregates
to at most N² edges. Real-world usage stays well under both.
2. WS handler re-arms on every bootstrap dependency change
(visibleIds change). useSocketEvent's ref-based pattern means
the bus subscription stays stable across renders, but the
handler closure re-captures bootstrap each time. Side effect:
fine — handler invocation just calls recomputeAndPush which is
idempotent.
3. delegate_result rows arriving over WS are silently dropped.
Acceptable: the existing buildA2AEdges already filters them out
at aggregation time (avoids double-counting); pre-filtering at
the WS handler is the correct mirror — keeps the bus path and
the bootstrap path consistent.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stage 1 of #61. Replaces the 30s setInterval poll with:
1. One bootstrap fan-out on mount (cap of 3 retained from the
2026-05-04 fix), gives the initial recent-comms window without
waiting for live events.
2. useSocketEvent subscription to ACTIVITY_LOGGED — every event
with a comm-overlay-relevant activity_type from a visible online
workspace prepends to the rendered list.
3. Re-bootstrap on visibility-toggle re-open so the snapshot is
fresh after a long collapsed period.
No interval poll. Inherits the singleton ReconnectingSocket's
reconnect / backoff / health-check guarantees via useSocketEvent.
Steady-state HTTP traffic from this overlay drops from ~6 req/min
(3 ws × 2 cycles/min) to 0 outside of mount/visibility-toggle
bootstraps. Live updates arrive within ~10ms of the server insert
instead of after up to 30s.
Test changes:
- Bootstrap fan-out cap of 3 — kept (was the cadence test's role
pre-#61)
- 30s cadence test — replaced with "no interval polling" test
that pins the absence of any cadence-driven HTTP after bootstrap
- Visibility gate test — extended to verify both: no fetches while
closed, AND re-bootstrap on re-open
- WS subscription tests (new):
- WS push extends rendered list with NO HTTP call
- WS push for offline workspace ignored
- WS push for non-comm activity_type ignored
- WS push while collapsed ignored
- non-ACTIVITY_LOGGED events ignored
Mutation-tested:
- drop visibility gate → visibility test fails
- drop activity_type filter → "non-comm activity_type" test fails
- drop workspace online-set filter → "offline workspace" test fails
Full canvas suite: 1393 passing, 0 failing. tsc clean.
No API or schema change. ACTIVITY_LOGGED event shape pinned by
existing socket-events tests.
Hostile self-review (three weakest spots):
1. Sustained WS outage shows stale comms until visibility-toggle
re-bootstrap. Acceptable: the singleton socket already auto-
reconnects and the comm overlay isn't a critical-path surface.
2. Bootstrap on visibility-toggle costs another 3 HTTP calls each
re-open. Acceptable: visibility-toggle is a deliberate user
action, not a tight loop.
3. The WS handler reads the latest `nodes` via nodesRef rather
than re-subscribing on node changes. By design — the bus
listener stays bound for the component lifetime to avoid the
"tear-down storm" pattern A2ATopologyOverlay's comment warns
about (ref-based current-state lookup, stable subscription).
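Sketched outside React, the ref-based pattern looks like this (the real code uses useRef plus useSocketEvent):

```typescript
// The handler is bound once; re-renders only mutate nodesRef.current,
// never the closure, so the bus subscription stays stable.
function makeStableHandler<TNodes>(
  nodesRef: { current: TNodes },
  onEvent: (nodes: TNodes, evt: unknown) => void,
) {
  return (evt: unknown) => onEvent(nodesRef.current, evt);
}
```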
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the post-Task-#176 self-review gap: the bearer-token + tenant-
slug header construction was duplicated across 7 raw-fetch callsites
in the canvas (lib/api.ts request(), uploads.ts × 2, and 5 Attachment*
components). Each callsite read NEXT_PUBLIC_ADMIN_TOKEN, attached
Authorization: Bearer manually, computed getTenantSlug locally
(three of them inline-redefined it from /lib/tenant!), and attached
X-Molecule-Org-Slug. A new poller / raw-fetch added without going
through this exact recipe silently 401s against workspace-server when
ADMIN_TOKEN is set on the server side — the bug shape called out in
the original task.
Adds platformAuthHeaders() to lib/api.ts as the single source of truth
and routes all 7 raw-fetch callsites through it. Removes 5 duplicate
local getTenantSlug() copies (Image, Video, Audio, PDF, TextPreview)
that were inline-redefining what /lib/tenant.ts already exports.
Also preserves the AttachmentTextPreview off-platform branch — when
isPlatformAttachment() is false, headers is {} (no bearer leakage to
third-party URLs).
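The helper's shape, sketched (env and header names from the commit; internals are an assumption, not the real lib/api.ts code):

```typescript
// Single source of truth for bearer + tenant-slug headers.
// Empty string counts as unset; a fresh object is returned per call.
function platformAuthHeadersSketch(
  token: string | undefined, // NEXT_PUBLIC_ADMIN_TOKEN
  tenantSlug: string | undefined, // from /lib/tenant
): Record<string, string> {
  const headers: Record<string, string> = {};
  if (token) headers["Authorization"] = `Bearer ${token}`;
  if (tenantSlug) headers["X-Molecule-Org-Slug"] = tenantSlug;
  return headers;
}
```

Off-platform callers (the AttachmentTextPreview branch above) simply skip the helper and pass `{}`, so no bearer leaks to third-party URLs.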
Tests:
- 6 unit tests in platform-auth-headers.test.ts covering: empty,
bearer-only, slug-only, both, empty-string-as-unset, fresh-object-
per-call. Mutation-tested: removing the bearer attach inside the
helper fails 2 of 6 tests immediately.
- All 1389 existing canvas vitest tests pass unchanged.
- npx tsc --noEmit clean.
- npm run build succeeds (canvas Next.js build).
Per feedback_assert_exact_not_substring: tests use exact toEqual()
equality, not substring/contains, so an extra-header bug also fails
the assertion. Per feedback_oss_design_philosophy: this is the
"plugin/abstract/modular/SSOT" move applied to the auth-header
construction surface — one helper, seven call sites, no duplication.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Refs Task #165 (Class D AUTO_SYNC_TOKEN plumbing).
main and staging diverged after the 2026-05-06 GitHub-org suspension
because Class D / Class G / feature work landed on staging while
unrelated CI fixes (#34-47, ECR auth-inline, buildx→docker, pre-clone
manifest deps) landed straight on main. Both branches edited the
same workflow files, so every push to main triggered an Auto-sync
run that aborted at `git merge --no-ff origin/main` with 7 content
conflicts:
- .github/workflows/canary-verify.yml (URL: github.com → Gitea)
- .github/workflows/ci.yml (3 URL refs)
- .github/workflows/publish-runtime.yml (cascade: HTTP repo-dispatch → Gitea push)
- .github/workflows/publish-workspace-server-image.yml (drop AWS-action steps; ECR auth is inline)
- .github/workflows/retarget-main-to-staging.yml (URL)
- manifest.json (lowercase org slug + add mock-bigorg from main)
- scripts/clone-manifest.sh (keep main's MOLECULE_GITEA_TOKEN auth path + drop awk-tolower since manifest is now lowercase)
Resolution: union — staging's post-suspension Gitea/ECR migrations win
on URL/policy edits; main's additive work (mock-bigorg manifest entry,
inline ECR auth, MOLECULE_GITEA_TOKEN basic-auth) is preserved on top.
After this lands, staging is a strict superset of main, so the next
auto-sync run on a push to main will be a clean fast-forward / no-op.
The auto-sync workflow on main also picks up staging's AUTO_SYNC_TOKEN
swap (Class D #26) for free, fixing the latent layer-2 push-auth issue.
Verified locally:
- bash -n scripts/clone-manifest.sh
- python -c 'yaml.safe_load(...)' on each touched workflow
- python -c 'json.load(open(manifest.json))' (21 plugins, 9 templates,
7 org_templates)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The GitHub org Molecule-AI was suspended on 2026-05-06; canonical SCM
is now Gitea at https://git.moleculesai.app/molecule-ai/. Stale
github.com/Molecule-AI/... URLs return 404 and break tooling that
clones / pip-installs / curls them.
This bundles all non-Go-module URL fixes for this repo into a single PR.
Go module path references (in *.go, go.mod, go.sum) are out of scope
here -- tracked separately under Task #140.
Token-auth clone URLs also flip ${GITHUB_TOKEN} -> ${GITEA_TOKEN} since
the GitHub token does not auth against Gitea.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Funding-demo Mock #1: when the canvas loads with `?purchase_success=1`,
show a centred success modal in the warm-paper theme. Auto-dismisses
after 5s; Close button + Esc + backdrop click also dismiss; URL params
are stripped on first paint so a refresh after dismiss does not
re-trigger.
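The read-then-strip behaviour reduces to logic like this sketch (param name from the commit; how the clean URL is written back via the history API is an assumption):

```typescript
// Read ?purchase_success=1, then produce the URL with the flag
// removed so a refresh after dismiss does not re-trigger the modal.
function consumePurchaseFlag(href: string): { show: boolean; cleanUrl: string } {
  const url = new URL(href);
  const show = url.searchParams.get("purchase_success") === "1";
  url.searchParams.delete("purchase_success");
  return { show, cleanUrl: url.toString() };
}
```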
Mounted in `app/layout.tsx` (not `app/page.tsx`) so the modal persists
across the canvas page-state transitions (loading → hydrated → error)
without unmounting and losing its open-state.
No real billing logic — the marketplace "Purchase" button on the
landing page redirects here with the flag; this modal is the only
thing the user sees of the "transaction".
Local-verified end-to-end via playwright (5/5 tests pass): redirect
URL shape, modal visibility, URL cleanup, close button, refresh-after-
dismiss behaviour, 5s auto-dismiss.
Pairs with the Purchase button added to landingpage Marketplace
section.
## Symptom
Canvas detail-panel "config + filesystem load" took ~20s. Reported on
production hongming tenant, workspace c7c28c0b-... (Claude Code Agent T2).
## Two stacked latency sources
### 1. Server-side: per-call EIC tunnel setup (~80% of the win)
`workspace-server/internal/handlers/template_files_eic.go::realWithEICTunnel`
performed ssh-keygen + SendSSHPublicKey + open-tunnel + waitForPort PER call.
4 callers (read/write/list/delete) each paid the full ~3-5s setup cost even
when fired back-to-back on the same workspace EC2.
Fix: refcounted pool keyed on instanceID with TTL ≤ 50s (under the 60s
SendSSHPublicKey grant). One tunnel serves N file ops; concurrent acquires
for the same instance share the slot via a pendingSetups gate; LRU eviction
caps simultaneous tracked instances at 32. Poisons entries on tunnel-fatal
errors (connection refused, broken pipe, auth failed) so the next acquire
builds fresh. Cleanup on panic via defer-release pattern (added after
self-review caught a refcount-leak hazard).
Public API unchanged — `var withEICTunnel` rebinds to `pooledWithEICTunnel`
at package init, so all 4 callers inherit pooling for free.
10 unit tests pin: 4-ops-amortise (1 setup), different-instances-do-not-share,
TTL eviction, poison invalidates, concurrent-acquire-single-setup,
TTL=0 escape hatch, LRU eviction at cap, error classification heuristic,
refcount blocks expired eviction, panic poisons entry. All green.
### 2. Canvas-side: serial fan-out + duplicate fetch (~20% of the win)
`canvas/src/components/tabs/ConfigTab.tsx::loadConfig` awaited 3 independent
metadata GETs (`/workspaces/{id}`, `/model`, `/provider`) serially.
`AgentCardSection` fired a SECOND `/workspaces/{id}` from its own useEffect.
Fix: Promise.all over the 3 metadata GETs (each leg keeps its existing
.catch fallback semantics). AgentCardSection now reads `agentCard` from
the canvas store (`useCanvasStore`) instead of refetching — the canvas
already hydrates `node.data.agentCard` from the platform event stream.
Defensive selector handles test mocks without a `nodes` array.
## Verification
- `go test ./internal/handlers/` 5.07s green (full handlers package, including
10 new pool tests)
- `go vet ./internal/handlers/` clean
- `npx vitest run` — 1380/1380 canvas unit tests pass (2 test FILES fail on
a pre-existing xyflow CSS-load issue in vitest config, unrelated to this
change)
- `npx tsc --noEmit` clean
Live wall-time verification deferred to Phase 4 / E2E (canvas browser session
required; external probe blocked by 403 since the canvas auth chain is
session-cookie + Origin header, not a bearer token I can fabricate).
## Backwards compatibility
API surface unchanged. All 4 EIC handler callers use the rebound var; no
caller migration. Pool defaults to enabled (TTL=50s); tests can disable by
setting poolTTL=0 or by overwriting withEICTunnel directly (existing stub
pattern in template_files_eic_dispatch_test.go preserved).
## Hostile self-review (3 weakest spots)
1. `fnErrIndicatesTunnelFault` is a substring grep on err.Error() — the
   marker list is hand-curated, and ssh client error formats vary across
   OpenSSH versions. A future ssh client that reports a tunnel failure
   via a phrasing not in the list would NOT poison the entry → the next
   callers reuse a dead tunnel until TTL evicts it. Acceptable: TTL
   bounds the impact (≤50s of bad reuse), and the heuristic covers every
   tunnel-error shape that appears in the existing test fixtures and
   known incidents.
2. `acquire`'s for-loop has unbounded retry potential under pathological
   churn (signal closed → new acquirer → setup fails → repeat). There is
   no bounded retry counter, and no test today exercises a flaky setup
   that succeeds-then-fails-then-succeeds; if observability ever shows
   this shape, add a max-retry guard. Filed as a known limitation, not
   blocking.
3. The `strings.Contains`-style substring matching used for tunnel-fault
   classification could false-positive on app-level error messages that
   happen to contain "permission denied" or "broken pipe" verbatim. The
   classification test covers the discriminator, but only against the
   error shapes we know today. Acceptable: poisoning errs on the side of
   building fresh, which is correct-but-slightly-slow rather than
   incorrect.
## Phase 4 / E2E plan
- Live timing of the canvas detail-panel open against a real workspace
(browser session, not external probe).
- Target: perceived latency under 2s on warm pool. Cold open still pays
one tunnel setup (~3-5s) — the pool buys you the SECOND through Nth
panel-open within the TTL window.
- Memory `feedback_chase_verification_to_staging` applies — will not
declare done at PR-merge; will follow through to user-visible behavior
on staging.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the SSOT story shipped in PR-C/D: canvas now consumes the typed
/chat-history endpoint instead of /activity?type=a2a_receive, and the
server emits messages in display-ready chronological order so the
client doesn't have to re-order them.
## Canvas (consumer migration)
- loadMessagesFromDB swaps from /activity to /chat-history.
- Drops type=a2a_receive + source=canvas params (server applies the
filter centrally now).
- Drops [...activities].reverse() — wire is already display-ready.
- Drops the local INTERNAL_SELF_MESSAGE_PREFIXES constant +
isInternalSelfMessage helper. Server-side IsInternalSelfMessage
applies the same predicate before emitting rows.
- Drops the activityRowToMessages + ActivityRowForHydration imports
from historyHydration.ts. The TS parser stays in tree because
message-parser.ts is still load-bearing for live A2A WebSocket
messages (ChatTab.tsx:805, AgentCommsPanel.tsx, canvas-events.ts).
## Server (row-aware wire-order fix)
The pre-PR-C-2 client did `[...activities].reverse()` over ROWS, then
flattened each row into [user, agent] messages. The reversal was
ROW-aware. After PR-C/D, the server returned a flat ChatMessage slice
in `ORDER BY created_at DESC` order, with [user, agent] within each
row. A naive client-side flat reverse would FLIP each pair (agent
before user at same timestamp).
Two ways to fix it:
A) Server emits oldest-first within page; canvas does NOT reverse.
B) Canvas does row-aware reversal (group by timestamp, reverse).
Option A is cleaner — server owns the wire-order responsibility, every
client trusts `for m of messages` to render chronologically. Server
adds reverseRowChunks() that:
1. Groups consecutive same-Timestamp messages into row chunks
(1-2 messages per row).
2. Reverses the chunk order (newest-row-first → oldest-row-first).
3. Flattens. Within-chunk [user, agent] order is preserved.
Single-message rows (agent reply not yet recorded, attachments-only
user upload) collapse to 1-element chunks and reverse correctly too.
## Tests
Server: 3 new unit tests on reverseRowChunks (paired across rows,
single-message rows, empty input) + 1 sqlmock integration test on
List() that drives the full SQL → reverse → wire path. Mutation-tested:
removed `messages = reverseRowChunks(messages)` from List(), confirmed
the integration test fires red with all 4 misordered indices flagged.
Restored, all 25 messagestore tests + 9 chat-history handler tests
green.
Canvas: 8 lazyHistory pagination tests refactored to mock
/chat-history (not /activity) and assert against the new wire shape
({messages, reached_end} not raw activity rows). All 1389/1389 vitest
tests green; tsc --noEmit clean.
## Three weakest spots (hostile-reviewer self-pass)
1. reverseRowChunks groups by Timestamp string equality. If two
distinct rows had the SAME timestamp (legitimately possible at sub-
millisecond granularity), the algorithm would treat them as one
chunk and not reverse them relative to each other. Mitigated:
activity_logs.created_at uses microsecond resolution; concurrent
inserts at exact-same microsecond are vanishingly rare. If a
collision happens, the within-chunk order is whatever the SQL
returned — both rows render at the same timestamp, no user-visible
misordering.
2. The pre-existing TS parser files (historyHydration.ts +
message-parser.ts) stay in tree. historyHydration.ts is now dead
   code (no consumers post-migration); deletion is parked as a
   follow-up after a one-week observation window confirms no
   live-message consumer reaches it.
3. canvas's loadMessagesFromDB returns `resp.messages ?? []`. If the
   server were ever to return `null` instead of `[]` (it currently
   doesn't — the handler defensively coerces nil to []), the nullish
   coalescing keeps the canvas from crashing. A stricter wire schema
   would assert the never-null invariant; for today's pragmatic safety,
   the ?? is enough.
## Security review
- Untrusted input? Same as PR-C — agent JSON parsed defensively in
the messagestore parser. No new exposure.
- Trust boundary? Same. Canvas → /chat-history → wsAuth → messagestore.
- Output sanitization? Plain text + opaque attachment URIs as before.
No security-relevant changes beyond what /chat-history already
exposes via PR-C. Considered, not skipped.
## Versioning / backwards compat
- /activity endpoint unchanged.
- /chat-history endpoint shape unchanged (still {messages, reached_end});
only the wire ORDER within a page changed (newest-first row →
oldest-first row). Canvas is the only consumer in tree; no API
consumers depend on the previous order.
- canvas's loadMessagesFromDB call signature unchanged — internal
refactor.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
github-code-quality bot flagged this as the last unresolved review thread
blocking the merge queue. The function is referenced in comments but
never called from this file (download is dispatched via the lightbox /
AttachmentChip path). Removing the import resolves the bot thread and
clears the staging branch-protection 'all conversations resolved' gate.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>