Reviewer bot flagged: ChatTab.tsx imported extractResponseText but
no longer used it after the loop body moved to historyHydration.ts
(the helper imports it directly). Drop it from the named import to
unblock merge. extractFilesFromTask remains used at line 515 for the
WS A2A_RESPONSE handler's reply-files extraction.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reviewer follow-up to PR #2134 (Optional finding). The history loader
walked text on the user branch but never extracted file parts — so a
chat reload after a session where the user dragged in a file rendered
the text bubble but lost the download chip. Symmetric to the agent
branch which already handles this via extractFilesFromTask.
Wire shape from ChatTab's outbound POST:
request_body = {params: {message: {parts: [
{kind: "text", text: "..."},
{kind: "file", file: {uri, name, mimeType?, size?}}
]}}}
extractFilesFromTask walks `task.parts`, so we feed it `params.message`
(the inner object that has the parts array; sketched after the test
list). Three new tests:
- hydrates file attachments from request_body
- emits an attachments-only bubble when text is empty (drag-drop
  without caption; before the fix, the empty userText short-circuited
  and the row was dropped entirely)
- internal-self predicate suppresses the row even with attachments
(defence-in-depth for future internal triggers)
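A minimal sketch of the user-branch extraction, with illustrative types
and helper signatures (the real ones live in historyHydration.ts):

  interface TextPart { kind: "text"; text: string }
  interface FilePart { kind: "file"; file: { uri: string; name: string } }
  type Part = TextPart | FilePart;

  // extractFilesFromTask walks `task.parts`, so we hand it params.message,
  // the inner object that actually carries the parts array.
  function extractFilesFromTask(task: { parts?: Part[] } | null) {
    return (task?.parts ?? [])
      .filter((p): p is FilePart => p.kind === "file")
      .map((p) => p.file);
  }

  function hydrateUserRow(request_body: { params?: { message?: { parts?: Part[] } } }) {
    const message = request_body.params?.message ?? null;
    const text = message?.parts?.find((p): p is TextPart => p.kind === "text")?.text ?? "";
    const files = extractFilesFromTask(message);
    // Post-fix: emit when EITHER text or attachments exist. Before, an empty
    // userText short-circuited and dropped attachment-only rows entirely.
    if (!text && files.length === 0) return null;
    return { text, files };
  }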
Stacked on #2134; this branch's parent commit is its tip.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User pushed back: the timestamp bug should have been caught by E2E.
Right — my earlier coverage tested the server contract (notify endpoint,
WS broadcast filter) but never the chat-history HYDRATION path. Without
a unit test that froze the wall clock and asserted timestamps came from
created_at, a future refactor could re-introduce the same bug.
This commit:
1. Extracts the per-row → ChatMessage[] mapping out of the closure
inside loadMessagesFromDB into chat/historyHydration.ts. Pure
function, no React dependency, easy to test.
2. Adds 12 vitest cases in __tests__/historyHydration.test.ts covering:
- Timestamp regression (3 tests, with system time frozen to 2030 so
a regression starts producing "2030-…" timestamps and the assertion
fails unmistakably). The third test mirrors the user's screenshot:
two rows with distinct created_at must produce distinct timestamps.
- User-message extraction (text, internal-self filter, null body)
- Agent-message extraction (text, error→system role, file attachments,
null body, body with neither text nor files)
- End-to-end: a single row with both request and response emits
two messages with the same timestamp (the canonical canvas-source
row pattern)
3. The new file-attachment test caught a SECOND latent bug: the helper
   passed `response_body.result ?? response_body` to
   extractFilesFromTask, which for the notify-with-attachments shape
   `{result: "<text>", parts: [...]}` hands it the STRING "<text>" and
   silently returns []. So a chat reload after an agent attached a file
   would lose the chips. Fixed by only unwrapping `result` when it's an
   object (the task shape) and falling through to response_body
   otherwise (the notify shape; sketched below).
ChatTab now imports the helper and the loop body becomes one line:
`messages.push(...activityRowToMessages(a, isInternalSelfMessage))`.
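The unwrap fix, as a sketch (the real helper lives in
chat/historyHydration.ts; the function name here is illustrative):

  // Notify shape: { result: "<text>", parts: [...] }  -> result is a STRING
  // Task shape:   { result: { parts: [...] } }        -> result is an object
  function taskForFileExtraction(response_body: Record<string, unknown>): unknown {
    const result = response_body["result"];
    // Unwrap only when `result` is a non-null object (the task shape); a
    // string result means the notify shape, whose parts[] sit on the body itself.
    return typeof result === "object" && result !== null ? result : response_body;
  }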
Verification:
- 12/12 historyHydration tests pass
- 1072/1072 full canvas vitest pass (was 1060 before, +12)
- tsc --noEmit clean
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User flagged that all historical user bubbles render with the same
"now" clock after a chat reload — both messages in the screenshot
showed 9:01:58 PM despite being sent hours apart.
ChatTab.tsx:142 minted user messages with createMessage(...) which
calls new Date().toISOString() — fine for a freshly-typed message,
wrong for hydrated history. Every reload re-stamped all user bubbles
to the render moment, collapsing the visible chronology. The agent
path on line 157 already overrides with a.created_at; mirror that.
One-line fix (spread + override timestamp) plus a comment explaining
why the override is load-bearing so the next refactor doesn't drop it.
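Sketch of the fix (createMessage stubbed here to show why the override
matters; the real shape lives in ChatTab.tsx):

  interface ChatMessage { role: "user" | "agent"; text: string; timestamp: string }

  function createMessage(role: ChatMessage["role"], text: string): ChatMessage {
    return { role, text, timestamp: new Date().toISOString() }; // stamps "now"
  }

  function hydrateUserMessage(a: { created_at: string; text: string }): ChatMessage {
    // Load-bearing override: without it, every reload re-stamps historical
    // bubbles to the render moment and collapses the visible chronology.
    return { ...createMessage("user", a.text), timestamp: a.created_at };
  }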
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User asked to "keep optimizing and comprehensive e2e testings to prove all
works as expected" for the communication path. Adds three layers of coverage
for PR #2130 (agent → user file attachments via send_message_to_user) since
that path has the most user-visible blast radius:
1. Shell E2E (tests/e2e/test_notify_attachments_e2e.sh) — pure platform test,
no workspace container needed. 14 assertions covering: notify text-only
round-trip, notify-with-attachments persists parts[].kind=file in the
shape extractFilesFromTask reads, per-element validation rejects empty
uri/name (regression for the missing gin `dive` bug), and a real
/chat/uploads → /notify URI round-trip when a container is up.
2. Canvas AGENT_MESSAGE handler tests (canvas-events.test.ts +5) — pin the
WebSocket-side filtering that drops malformed attachments, allows
attachments-only bubbles, ignores non-array payloads, and no-ops on
pure-empty events.
3. Persisted response_body shape test (message-parser.test.ts +1) — pins
the {result, parts} contract the chat history loader hydrates on
reload, so refreshing after an agent attachment restores both caption
and download chips.
Also wires the new shell E2E into e2e-api.yml so the contract regresses
in CI rather than only in manual runs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User flagged two paper cuts in Agent Comms after the grouping PR:
"Delegating to f6f3a023-ab3c-4a69-b101-976028a4a7ec" reads as gibberish
because it's a UUID, and the chat is "one way" with only outbound bubbles
even though peers are clearly responding.
Both fixes are in toCommMessage's delegation branch:
1. Pull text from the actual payload, not the platform's audit-log summary.
- delegate row → request_body.task (the task text the agent sent).
Fallback when missing: "Delegating to <resolved-peer-name>" — never
the raw UUID.
- delegate_result row → response_body.response_preview / .text (the
peer's actual reply). Fallback paths render human-readable status
for queued / failed cases ("Queued — Peer Agent is busy on a prior
task...") instead of platform jargon.
2. delegate_result rows render flow="in" — even though source_id=us
(the platform writes the row on our side), the conversational
direction is peer → us. The chat now shows alternating bubbles
(out: "Build me 10 landing pages" → in: "Done — ZIP at /tmp/...")
instead of one-sided "→ To X" wall.
The WS push handler in this same file now populates request_body /
response_body from the DELEGATION_SENT / DELEGATION_COMPLETE event
payloads (task_preview, response_preview), so live-pushed bubbles use
the same text-extraction path as the GET-on-mount.
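Both changes, sketched with abbreviated row types (field names follow
the commit text; the fallback copy is illustrative):

  interface DelegationRow {
    method: "delegate" | "delegate_result";
    request_body?: { task?: string };
    response_body?: { response_preview?: string; text?: string; status?: string };
  }

  function delegationText(row: DelegationRow, peerName: string): string {
    if (row.method === "delegate") {
      // Prefer the task text the agent actually sent; never show the raw UUID.
      return row.request_body?.task ?? `Delegating to ${peerName}`;
    }
    // delegate_result: prefer the peer's actual reply, then readable status.
    return (
      row.response_body?.response_preview ??
      row.response_body?.text ??
      (row.response_body?.status === "queued"
        ? `Queued: ${peerName} is busy on a prior task...`
        : "Delegation failed")
    );
  }

  // Conversational direction: delegate_result is peer -> us, even though
  // the platform writes the row on our side (source_id = us).
  const flowFor = (method: DelegationRow["method"]) =>
    method === "delegate" ? "out" : "in";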
Tests:
- 4 new in toCommMessage's delegation branch:
- delegate row prefers request_body.task over summary
- delegate row falls back to name-resolved label when task missing
- delegate_result row is INBOUND (flow="in")
- delegate_result queued shows human-readable wait message including
the resolved peer name
- Replaces the previous "delegate row maps text from summary" tests
which encoded the (now-undesirable) platform-summary-as-text behavior.
- All 15 tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The chronological-only view turned into a noodle once the Director +
N peers exchanged more than a few rounds. New layout: a sub-tab bar at the
top of the panel, with "All" pinned leftmost and one tab per peer
(name + count). Selecting a peer filters the thread to that one
DD↔X conversation; "All" preserves the previous chronological view
as the default.
Tab ordering follows Slack/Linear DM-list convention: most-recent
activity descending, so active conversations rise to the top
without the user scrolling. Counts in parens match Slack's unread
hint pattern (no separate read/unread state — the count is total
in this conversation, computed from the same in-memory message
list the panel already maintains).
Pure-helper extraction: peer-summary derivation lives in
`buildPeerSummary(messages)` so the sort + count logic is unit-
testable without rendering the panel. 5 new tests cover: count
aggregation, most-recent-first ordering, lastTs as max-not-last,
empty input, name-stability when the same peerId carries different
names across messages.
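A sketch of buildPeerSummary under those conventions (CommMessage
fields abbreviated; which name wins on a mid-stream rename is pinned
by the name-stability test, shown here as first-non-empty-wins):

  interface CommMessage { peerId: string; peerName: string; ts: number }
  interface PeerSummary { peerId: string; name: string; count: number; lastTs: number }

  function buildPeerSummary(messages: CommMessage[]): PeerSummary[] {
    const byPeer = new Map<string, PeerSummary>();
    for (const m of messages) {
      const s = byPeer.get(m.peerId);
      if (!s) {
        byPeer.set(m.peerId, { peerId: m.peerId, name: m.peerName, count: 1, lastTs: m.ts });
      } else {
        s.count += 1;
        s.lastTs = Math.max(s.lastTs, m.ts); // max, not last-in-list
        if (!s.name && m.peerName) s.name = m.peerName; // name stability
      }
    }
    // Slack/Linear DM convention: most-recent activity first.
    return [...byPeer.values()].sort((a, b) => b.lastTs - a.lastTs);
  }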
Keyboard: ArrowLeft/Right cycle peer tabs (matches the existing
My Chat / Agent Comms tab pattern in ChatTab). Auto-prune: if the
selected peer has zero messages after a setMessages update (rare,
e.g. dedupe drops the last bubble), fall back to "All" so the
viewer doesn't see an empty thread.
Frontend-only — no platform / runtime / DB changes. The existing
`peerId` / `peerName` fields on CommMessage already carry every
piece of data the new UI needs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two Critical bugs caught in code review of the agent→user attachments PR:
1. **Empty-URI attachments slipped past validation.** Gin's
go-playground/validator does NOT iterate slice elements without
`dive` — verified zero `dive` usage anywhere in workspace-server —
so the inner `binding:"required"` tags on NotifyAttachment.URI/Name
were never enforced. `attachments: [{"uri":"","name":""}]` would
pass validation, broadcast empty-URI chips that render blank in
canvas, AND persist them in activity_logs for every page reload to
re-render. Added explicit per-element validation in Notify (returns
400 with `attachment[i]: uri and name are required`) plus
   defence-in-depth in the canvas filter (rejects empty strings, not
   just non-strings; the predicate is sketched after this list).
   A 3-case regression test pins the rejection.
2. **Hardcoded application/octet-stream stripped real mime types.**
`_upload_chat_files` always passed octet-stream as the multipart
Content-Type. chat_files.go:Upload reads `fh.Header.Get("Content-Type")`
FIRST and only falls back to extension-sniffing when the header is
empty, so every agent-attached file lost its real type forever —
broke the canvas's MIME-based icon/preview logic. Now sniff via
`mimetypes.guess_type(path)` and only fall back to octet-stream
when sniffing returns None.
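The canvas-side predicate, sketched (the Go per-element validation is
the primary fix; this shows the empty-string tightening):

  interface Attachment { uri: string; name: string; mimeType?: string; size?: number }

  // Reject empty strings, not just non-strings: "" passed the old typeof
  // check and rendered a blank chip.
  function isRenderableAttachment(x: unknown): x is Attachment {
    if (typeof x !== "object" || x === null) return false;
    const a = x as Record<string, unknown>;
    return typeof a.uri === "string" && a.uri.length > 0 &&
           typeof a.name === "string" && a.name.length > 0;
  }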
Plus three Required nits:
- `sqlmockArgMatcher` was misleading — the closure always returned
true after capture, identical to `sqlmock.AnyArg()` semantics, but
named like a custom matcher. Renamed to `sqlmockCaptureArg(*string)`
so the intent (capture for post-call inspection, not validate via
driver-callback) is unambiguous.
- Test asserted notify call by `await_args_list[1]` index — fragile
to any future _upload_chat_files refactor that adds a pre-flight
POST. Now filter call list by URL suffix `/notify` and assert
exactly one match.
- Added `TestNotify_RejectsAttachmentWithEmptyURIOrName` (3 cases)
covering empty-uri, empty-name, both-empty so the Critical fix
stays defended.
Deferred to follow-up:
- ORDER BY tiebreaker for same-millisecond notifies — pre-existing
risk, not regression.
- Streaming multipart upload — bounded by the platform's 50MB total
cap so RAM ceiling is fixed; switch to streaming if cap rises.
- Symlink rejection — agent UID can already read whatever its
filesystem perms allow via the shell tool; rejecting symlinks
doesn't materially shrink the attack surface.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the gap where the Director would say "ZIP is ready at /tmp/foo.zip"
in plain text instead of attaching a download chip — the runtime literally
had no API for outbound file attachments. The canvas + platform's
chat-uploads infrastructure already supported the inbound (user → agent)
direction (commit 94d9331c); this PR wires the outbound side.
End-to-end shape:
agent: send_message_to_user("Done!", attachments=["/tmp/build.zip"])
↓ runtime
POST /workspaces/<self>/chat/uploads (multipart)
↓ platform
/workspace/.molecule/chat-uploads/<uuid>-build.zip
→ returns {uri: workspace:/...build.zip, name, mimeType, size}
↓ runtime
POST /workspaces/<self>/notify
{message: "Done!", attachments: [{uri, name, mimeType, size}]}
↓ platform
Broadcasts AGENT_MESSAGE with attachments + persists to activity_logs
with response_body = {result: "Done!", parts: [{kind:file, file:{...}}]}
↓ canvas
WS push: canvas-events.ts adds attachments to agentMessages queue
Reload: ChatTab.loadMessagesFromDB → extractFilesFromTask sees parts[]
Either path → ChatTab renders download chip via existing path
Files changed:
workspace-server/internal/handlers/activity.go
- NotifyAttachment struct {URI, Name, MimeType, Size}
- Notify body accepts attachments[], broadcasts in payload,
persists as response_body.parts[].kind="file"
canvas/src/store/canvas-events.ts
- AGENT_MESSAGE handler reads payload.attachments, type-validates
each entry, attaches to agentMessages queue
- Skips empty events (was: skipped only when content empty)
workspace/a2a_tools.py
- tool_send_message_to_user(message, attachments=[paths])
- New _upload_chat_files helper: opens each path, multipart POSTs
to /chat/uploads, returns the platform's metadata
- Fail-fast on missing file / upload error — never sends a notify
with a half-rendered attachment chip
workspace/a2a_mcp_server.py
- inputSchema declares attachments param so claude-code SDK
surfaces it to the model
- Defensive filter on the dispatch path (drops non-string entries
if the model sends a malformed payload)
Tests:
- 4 new Python: success path, missing file, upload 5xx, no-attach
backwards compat
- 1 new Go: Notify-with-attachments persists parts[] in
response_body so chat reload reconstructs the chip
Why /tmp paths work even though they're outside the canvas's allowed
roots: the runtime tool reads the bytes locally and re-uploads through
/chat/uploads, which lands the file under /workspace (an allowed root).
The agent can specify any readable path.
Does NOT include: agent → agent file transfer. Different design problem
(cross-workspace download auth: peer would need a credential to call
sender's /chat/download). Tracked as a follow-up under task #114.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[Molecule-Platform-Evolvement-Manager]
Addresses github-code-quality finding on PR #2064:
> Comparison between inconvertible types
> Variable 'info' cannot be of type null, but it is compared to
> an expression of type null.
By line 75, `info` has been narrowed to non-null via the
`if (!info) return null;` guard at line 56 — so `open={info !== null}`
always evaluates to `true`. Switch to JSX shorthand `open` for
clarity and to silence the static check.
Behaviorally identical; the modal still opens whenever the parent
renders this component (which only happens with non-null info).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Critical follow-up to PR #2126's review. Two real bugs:
1. **Runtime QUEUED never resolved.** Platform's drain stitch updates
the platform's delegate_result row when a queued delegation finally
completes, but never pushes back to the runtime. The LLM polling
check_delegation_status saw status="queued" forever — combined with
the new docstring guidance ("queued → wait, peer will reply"), the
model would wait indefinitely on a state that never resolves.
Strictly worse than pre-PR behavior where it would have at least
bypassed.
2. **Live updates dead code.** delegation.go writes activity rows by
direct INSERT INTO activity_logs, bypassing the LogActivity helper
that fires ACTIVITY_LOGGED. Adding "delegation" to the canvas's
ACTIVITY_LOGGED filter (PR #2126 first cut) was inert — initial
GET worked, live updates did not.
Fix:
(1) Runtime side, workspace/builtin_tools/delegation.py:
- New `_refresh_queued_from_platform(task_id)` async helper that
pulls /workspaces/<self>/delegations and finds the platform-side
delegate_result row for our task_id.
- check_delegation_status calls _refresh when local status is
QUEUED, so the LLM's poll itself drives state convergence.
- Best-effort: GET failure leaves local state untouched, next
poll retries.
- Docstring updated to reflect the actual behavior ("polls
transparently — keep polling and you'll see the flip").
- 4 new tests cover: QUEUED → completed via refresh; QUEUED →
failed via refresh; refresh keeps QUEUED when platform hasn't
resolved; refresh swallows network errors safely.
(2) Canvas side, AgentCommsPanel.tsx WS push handler:
- Listens for DELEGATION_SENT / DELEGATION_STATUS / DELEGATION_COMPLETE
/ DELEGATION_FAILED in addition to ACTIVITY_LOGGED.
- Each event's payload is synthesized into an ActivityEntry shape
  so toCommMessage's existing delegation branch maps it (sketched
  below). Status
derived: STATUS uses payload.status, COMPLETE → "completed",
FAILED → "failed", SENT → "pending".
- The ACTIVITY_LOGGED branch keeps the "delegation" type accepted
as a no-op-today / future-proof path: if delegation handlers
are ever refactored to call LogActivity, this lights up
automatically without another canvas change.
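The status derivation, sketched (event names per this commit; payload
fields abbreviated):

  type DelegationEvent =
    | { type: "DELEGATION_SENT"; payload: { task_preview?: string } }
    | { type: "DELEGATION_STATUS"; payload: { status: string } }
    | { type: "DELEGATION_COMPLETE"; payload: { response_preview?: string } }
    | { type: "DELEGATION_FAILED"; payload: { error?: string } };

  // Each push event maps onto the same ActivityEntry shape the GET-on-mount
  // returns, so toCommMessage's delegation branch serves both paths.
  function statusFor(e: DelegationEvent): string {
    switch (e.type) {
      case "DELEGATION_STATUS":   return e.payload.status;
      case "DELEGATION_COMPLETE": return "completed";
      case "DELEGATION_FAILED":   return "failed";
      case "DELEGATION_SENT":     return "pending";
    }
  }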
Doesn't change: the docstring guidance ("queued → wait, don't bypass")
is now actually load-bearing because the refresh path will deliver
the eventual outcome. Without the refresh, the guidance was a trap.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two bugs that compounded into the "Director does the work itself" UX:
1. workspace/builtin_tools/delegation.py: _execute_delegation only
handled HTTP 200 in the response branch. When the peer's a2a-proxy
returned HTTP 202 + {queued: true} (single-SDK-session bottleneck
on the peer), the loop fell through. Two iterations later the
`if "error" in result` check tried to access an unbound `result`,
the coroutine ended quietly, and the delegation stayed at FAILED
with error="None". The LLM checking status saw "failed" + the
platform's "Delegation queued — target at capacity" log line in
chat context, concluded the peer was permanently unavailable, and
bypassed delegation to do the work itself.
Fix: explicit 202+queued branch. Adds DelegationStatus.QUEUED,
marks the local delegation as QUEUED, mirrors to the platform,
and returns cleanly without retrying. The retry loop is for
transient transport errors — queueing is a real ack, not a failure
to retry against (retrying would just re-queue the same task).
check_delegation_status docstring extended with explicit per-status
guidance: pending/in_progress → wait, queued → wait (peer busy on
prior task, reply WILL arrive), completed → use result, failed →
real error in error field; only fall back on failed, never queued.
2. canvas/src/components/tabs/chat/AgentCommsPanel.tsx: filter dropped
every delegation row because it whitelisted only a2a_send /
a2a_receive. activity_type='delegation' rows (written by the
platform's /delegate handler with method='delegate' or
'delegate_result') never reached toCommMessage. User saw "No
agent-to-agent communications yet" while 6+ delegations existed
in the DB.
Fix: include "delegation" in both the initial filter and the
WS push filter, plus a delegation branch in toCommMessage that
maps the row as outbound (always — platform proxies on our behalf)
and uses summary as the primary text source.
Tests:
- 3 new Python tests cover the 202+queued path: status becomes
QUEUED not FAILED; no retry on queued (counted by URL match
against the A2A target since the mock is shared across all
AsyncClient calls); bare 202 without {queued:true} still
falls through to the existing retry-then-FAILED path.
- 3 new TS tests cover the delegation mapper: 'delegate' row
maps as outbound to target with summary text; queued
'delegate_result' preserves status='queued' (load-bearing for
the LLM's wait-vs-bypass decision); missing target_id returns
null instead of rendering a ghost.
Does NOT solve: the underlying single-SDK-session bottleneck that
causes peers to queue in the first place. Tracked as task #102
(parallel SDK sessions per workspace) — real architectural work.
This PR makes the runtime handle the queueing correctly so the LLM
doesn't bail out, and makes the delegations visible in Agent Comms
so operators can see what's happening.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[Molecule-Platform-Evolvement-Manager]
Closes the first item from #2071 (Canvas test gaps follow-up):
adds behavioural coverage for the shared template-deploy hook that
both TemplatePalette (sidebar) and EmptyState (welcome grid) drive.
10 cases across 4 buckets:
**Happy path (4):**
- preflight ok → POST /workspaces → onDeployed fires with new id
- caller-supplied canvasCoords flows into the POST body
- default coords fall in [100,500) × [100,400) when canvasCoords omitted
- template.runtime is preferred over the resolveRuntime fallback
(locks the deduped-fallback table contract added in #2061)
**Preflight failures (2):**
- network throw sets error AND clears `deploying` (regression test
for the "stranded button" bug called out in the SUT's inline
comment — drop the try block and you'll fail this test)
- not-ok-with-missing-keys opens the modal without firing POST
**Modal lifecycle (2):**
- 'keys added' click retries POST without re-running preflight
(verifies the executeDeploy / deploy split — preflight call count
stays at 1, POST count goes to 1)
- 'cancel' click closes modal without firing POST
**POST failures (2):**
- Error rejection surfaces the message
- non-Error rejection surfaces the "Deploy failed" fallback
Mocks `@/lib/api`, `@/lib/deploy-preflight`, and `@/components/MissingKeysModal`
(stand-in component exposes the two callbacks as test-id buttons —
the real radix modal is irrelevant to this hook's behavior). Test
file follows the `vi.hoisted` + import-after-mocks pattern from
`canvas/src/app/__tests__/orgs-page.test.tsx`.
## Test plan
- [x] All 10 cases pass locally (`vitest run useTemplateDeploy.test.tsx`)
- [x] No changes to the SUT — pure additive coverage
- [ ] CI green
Follow-ups for the rest of #2071 (separate PRs):
- A2AEdge rendering + click-to-select-source
- OrgCancelButton cancel flow + optimistic state
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[Molecule-Platform-Evolvement-Manager]
## What was breaking
Two distinct failure modes in `.github/workflows/secret-scan.yml`,
both visible after PR #2115 / #2117 hit the merge queue:
1. **`merge_group` events**: the script reads `github.event.before /
after` to determine BASE/HEAD. Those properties only exist on
`push` events. On `merge_group` events both came back empty, the
script fell through to "no BASE → scan entire tree" mode, and
false-positived on `canvas/src/lib/validation/__tests__/secret-formats.test.ts`
which contains a `ghp_xxxx…` literal as a masking-function fixture.
(Run 24966890424 — exit 1, "matched: ghp_[A-Za-z0-9]{36,}".)
2. **`push` events with shallow clone**: `fetch-depth: 2` doesn't
always cover BASE across true merge commits. When BASE is in the
payload but absent from the local object DB, `git diff` errors
out with `fatal: bad object <sha>` and the job exits 128.
(Run 24966796278 — push at 20:53Z merging #2115.)
## Fixes
- Add a dedicated fetch step for `merge_group.base_sha` (mirrors
the existing pull_request base fetch) so the diff base is in the
object DB before `git diff` runs.
- Move event-specific SHAs into a step `env:` block so the script
uses a clean `case` over `${{ github.event_name }}` instead of
a single `if pull_request / else push` that left merge_group on
the empty branch.
- Add an on-demand fetch for the push-event BASE when it isn't in
the shallow clone, plus a `git cat-file -e` guard before the
diff so we fall through cleanly to the "scan entire tree" path
if the fetch fails (correct, just slower) instead of exiting 128.
## Defense-in-depth
`secret-formats.test.ts` had two literal continuous-string fixtures
(`'ghp_xxxx…'`, `'github_pat_xxxx…'`). The ghp_ one matched the
secret-scan regex. Switched both to the `'prefix_' + 'x'.repeat(N)`
pattern already used elsewhere in the same file — runtime value is
the same, but the literal source text no longer matches the regex
even if the BASE detection ever falls back to tree-scan mode again.
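The fixture pattern, for reference (lengths illustrative; the runtime
strings are unchanged):

  // Source text no longer contains a contiguous ghp_/github_pat_ token
  // for the scanner's regex to match, but the tested value is identical.
  const GHP_FIXTURE = "ghp_" + "x".repeat(36);
  const PAT_FIXTURE = "github_pat_" + "x".repeat(36);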
## Test plan
- [x] No remaining regex matches in the secret-formats.test.ts source
- [x] YAML structure preserved
- [ ] CI passes on this PR's pull_request scan (was already passing)
- [ ] CI passes on this PR's merge_group scan (the new path)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Follow-up to #2110 (which generalised pruneStaleKeys to Map<string, T>).
Identified by the simplify reviewer on that PR as the only other
in-tree caller of the same shape: `for (const id of map.keys()) { if
(!liveIds.has(id)) map.delete(id); }`.
Net: -3 lines, one less hand-rolled GC loop. No behaviour change —
the helper does exactly what the inline block did.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Simplify pass on top of #2069 fix:
- Export FALLBACK_POLL_MS from canvas/src/store/socket.ts and import
it as TOMBSTONE_TTL_MS in deleteTombstones.ts. Single source of
truth — tuning one without the other would silently re-open the
hydrate-races-delete window. Required-fix per simplify reviewer.
- Compress deleteTombstones.ts docstring from 30 lines to 10 — keep
the "what + why module-level"; drop the long-form problem
description (issue #2069 carries it).
- Compress canvas.ts call-site comments at removeSubtree (4 lines →
2) and hydrate (2 lines → 2 but tighter).
- Don't reassign the workspaces parameter inside hydrate — use a
const `live` and thread it through the two downstream calls
(computeAutoLayout, buildNodesAndEdges). Same effect, no lint
smell.
- Trim the canvas.test.ts integration-test preamble.
No behaviour change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #2069. removeSubtree dropped a parent + descendants locally
after DELETE returned 200, but a GET /workspaces request that was
IN-FLIGHT before the DELETE completed could land AFTER and hydrate
the store with a stale snapshot — re-introducing the deleted nodes
on the canvas until the next 10s fallback poll corrected it.
New module canvas/src/store/deleteTombstones.ts holds a transient
process-lifetime Map<id, deletedAt>. removeSubtree calls
markDeleted(removedIds); hydrate calls wasRecentlyDeleted(id) to
filter the incoming workspaces. TTL is 10s — matches the WS-fallback
poll cadence so a single round-trip is covered, after which a
legitimately re-imported id flows through normally.
GC happens lazily at every read AND at write time so the map stays
bounded — no separate timer / interval / unmount plumbing.
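Condensed sketch of the module (names per this commit; the TTL constant
is later imported from socket.ts per the simplify pass above):

  const TOMBSTONE_TTL_MS = 10_000; // matches the WS-fallback poll cadence

  const tombstones = new Map<string, number>(); // id -> deletedAt (ms epoch)

  function gc(now: number): void {
    for (const [id, deletedAt] of tombstones) {
      if (now - deletedAt > TOMBSTONE_TTL_MS) tombstones.delete(id);
    }
  }

  export function markDeleted(ids: Iterable<string>, now = Date.now()): void {
    gc(now); // GC at write time keeps the map bounded without a timer
    for (const id of ids) tombstones.set(id, now); // re-mark resets the stamp
  }

  export function wasRecentlyDeleted(id: string, now = Date.now()): boolean {
    gc(now); // lazy GC at every read
    const deletedAt = tombstones.get(id);
    return deletedAt !== undefined && now - deletedAt <= TOMBSTONE_TTL_MS;
  }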
Tests:
- canvas/src/store/__tests__/deleteTombstones.test.ts: 7 cases
covering immediate flag, never-marked, TTL boundary (9999ms vs
10001ms), GC-on-read, GC-on-write, re-mark resets timestamp,
iterable input.
- canvas/src/store/__tests__/canvas.test.ts: end-to-end "hydrate
cannot resurrect ids that removeSubtree just dropped (#2069)"
exercises the full chain at the store level.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Simplify pass on top of #2070 fix:
- Rename pruneStaleSubtreeIds → pruneStaleKeys, generalize to
Map<string, T> so the same shape can absorb other keyed-by-node-id
caches (ProvisioningTimeout.tsx tracking map is the obvious next
caller — left as a follow-up to keep this PR scoped).
- Trim the helper docstring to remove implementation-detail rot
(O(map_size), cadence claims). The ref-block comment carries the
rationale where it actually matters (at the call site).
- Add identity-preservation test: survivors must keep their original
Set reference. Guards against a future "rebuild instead of delete"
regression that would silently invalidate downstream === checks.
No behaviour change.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #2070. The Map<rootId, Set<nodeId>> in useCanvasViewport.ts
accumulated entries indefinitely — adds on every successful auto-fit,
never deletes when a root left state.nodes (cascade delete or manual
remove). Operationally invisible until thousands of imports, but the
fix is cheap.
Adds pruneStaleSubtreeIds(map, liveNodeIds) — a pure helper exported
alongside the existing shouldFitGrowing helper, called at the top of
runFit before any read or write to the map. Bounds the map to "roots
present right now" instead of "every root ever auto-fitted in this
session." O(map_size) per fit; runs only at user-driven cadence.
Tests in __tests__/useCanvasViewport.test.ts cover the four cases:
delete-some / no-op / clear-all / never-add.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three files conflicted with staging changes that landed while this PR
sat open. Resolved each by combining both intents (not picking one side):
- a2a_proxy.go: keep the branch's idle-timeout signature
(workspaceID parameter + comment) AND apply staging's #1483 SSRF
defense-in-depth check at the top of dispatchA2A. Type-assert
h.broadcaster (now an EventEmitter interface per staging) back to
*Broadcaster for applyIdleTimeout's SubscribeSSE call; falls through
to no-op when the assertion fails (test-mock case).
- a2a_proxy_test.go: keep both new test suites — branch's
TestApplyIdleTimeout_* (3 cases for the idle-timeout helper) AND
staging's TestDispatchA2A_RejectsUnsafeURL (#1483 regression). Updated
the staging test's dispatchA2A call to pass the workspaceID arg
introduced by the branch's signature change.
- workspace_crud.go: combine both Delete-cleanup intents:
* Branch's cleanupCtx detachment (WithoutCancel + 30s) so canvas
hang-up doesn't cancel mid-Docker-call (the container-leak fix)
* Branch's stopAndRemove helper that skips RemoveVolume when Stop
fails (orphan sweeper handles)
* Staging's #1843 stopErrs aggregation so Stop failures bubble up
as 500 to the client (the EC2 orphan-instance prevention)
Both concerns satisfied: cleanup runs to completion past canvas
hangup AND failed Stop calls surface to caller.
Build clean, all platform tests pass.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Closes the canvas-side loop on #2054. Phases 1+2 plumbed
provision_timeout_ms from template manifest → workspace API →
canvas socket → node-data → ProvisioningTimeout resolver. The
template-hermes manifest declares provision_timeout_seconds: 720
(filed as a separate template-repo PR). With that flow live, the
canvas-side hardcoded RUNTIME_PROFILES.hermes entry is redundant.
Removed:
- RUNTIME_PROFILES.hermes (was 720000ms hardcoded in canvas/src/lib/runtimeProfiles.ts)
Doc updates:
- RUNTIME_PROFILES jsdoc explains the map is now empty by design —
new runtimes that need a non-default cold-boot threshold should
declare runtime_config.provision_timeout_seconds in their template
manifest, NOT add an entry here.
Tests updated (3):
- "returns hermes override when runtime = hermes" → "hermes returns
default — value moved server-side post-#2054 phase 3". Asserts
RUNTIME_PROFILES.hermes is undefined.
- The two server-override tests now compare against
DEFAULT_RUNTIME_PROFILE since hermes no longer has a profile entry.
19/19 pass locally. The end-state for hermes:
workspace-server reads template manifest at request time →
workspace API includes provision_timeout_ms: 720000 →
canvas hydrate populates node.data.provisionTimeoutMs →
ProvisioningTimeout resolver picks it up via overrides.
Same effective threshold (720s), now declarative and one-edit-point
per runtime.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
simplify-review note: the |/,-delimited node string is brittle if a
future string-typed field is added without sanitization. Document
which fields are user-typed (name — already sanitized) vs primitive
(id is UUID, runtime is a slug, provisionTimeoutMs is numeric) so
the next field-add doesn't accidentally introduce an injection
vector for the splitter.
Skipped (false-positive review finding): the agent flagged the
prop > runtime-profile order as inconsistent with the docstring,
but the docstring explicitly lists the prop at #2 (between node and
runtime-profile) — matches both the implementation AND the original
behavior pre-#2054 (the prop was 'timeoutMs ?? runtime-profile').
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Phase 1 of moving runtime UX knobs server-side. Builds the canvas
foundation: a workspace can carry its own provision_timeout_ms
(sourced server-side from a template manifest in a follow-up PR),
and ProvisioningTimeout's resolver respects it per-node.
Until now the resolver had only a Props-level timeoutMs that applied to ALL
nodes — fine for tests but wrong for production where one batch
could mix runtimes (hermes 12-min cold boot alongside docker 2-min).
The runtime profile fallback already handles per-runtime defaults;
this PR adds the per-WORKSPACE override layer above that.
Resolution priority (most specific wins):
1. node.provisionTimeoutMs — server-declared per-workspace
override (this PR's new field)
2. timeoutMs prop — single-threshold test override
3. runtime profile in @/lib/runtimeProfiles
4. DEFAULT_RUNTIME_PROFILE
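The priority chain, sketched (profile values are placeholders; the real
resolver is provisionTimeoutForRuntime):

  interface RuntimeProfile { provisionTimeoutMs: number }

  const DEFAULT_RUNTIME_PROFILE: RuntimeProfile = { provisionTimeoutMs: 120_000 };
  const RUNTIME_PROFILES: Record<string, RuntimeProfile | undefined> = {
    hermes: { provisionTimeoutMs: 720_000 }, // removed in phase 3 (see above)
  };

  function resolveProvisionTimeout(
    nodeOverrideMs: number | undefined, // 1. server-declared per-workspace
    propMs: number | undefined,         // 2. single-threshold test override
    runtime: string,                    // 3. runtime profile, else 4. default
  ): number {
    return (
      nodeOverrideMs ??
      propMs ??
      (RUNTIME_PROFILES[runtime] ?? DEFAULT_RUNTIME_PROFILE).provisionTimeoutMs
    );
  }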
Changes:
- WorkspaceData (socket): add optional provision_timeout_ms
- WorkspaceNodeData: add optional provisionTimeoutMs
- canvas-topology hydrate: thread the field through to node.data
- ProvisioningTimeout: extend the serialized-string node iteration
to carry provisionTimeoutMs (4-field positional split); pass as
the second arg to provisionTimeoutForRuntime
- 3 new tests in ProvisioningTimeout.test.tsx covering hydrate
threading, null fall-through, and resolver priority
Phase 2 (separate PR, blocked on workspace-server template-config
loader): workspace-server reads provision_timeout_seconds from
template config.yaml at provision time, includes
provision_timeout_ms in the workspace API/socket response. Phase 3
(template-repo PR): template-hermes config.yaml declares
provision_timeout_seconds: 720; canvas RUNTIME_PROFILES.hermes
becomes redundant and can be removed.
19/19 tests pass (3 new + 16 existing).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User reported the canvas threw a generic "API GET /workspaces: 500
{auth check failed}" error when local Postgres + Redis were both
down. Two problems:
1. The error code (500) and message ("auth check failed") said
nothing useful. The actual condition was "platform can't reach
its datastore to validate your token" — a Service Unavailable
class, not Internal Server Error.
2. The canvas had no way to distinguish infra-down from a real
auth bug, so it rendered the raw API string in the same
generic-error overlay it uses for everything.
Fix in two layers:
Server (wsauth_middleware.go):
- New abortAuthLookupError helper centralises all three sites
that previously returned `500 {"error":"auth check failed"}`
when HasAnyLiveTokenGlobal or orgtoken.Validate hit a DB error.
- Now returns 503 + structured body
`{"error": "...", "code": "platform_unavailable"}`. 503 is
the correct semantic ("retry shortly, infra is unavailable")
and the code field is the contract the canvas reads.
- Body deliberately excludes the underlying DB error string —
production hostnames / connection-string fragments must not
leak into a user-visible error toast.
Canvas (api.ts):
- New PlatformUnavailableError class. api.ts inspects 503
responses for the platform_unavailable code and throws the
typed error instead of the generic "API GET /…: 503 …"
message. Generic 503s (upstream-busy, etc.) keep the legacy
path so existing busy-retry UX isn't disrupted.
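Classification sketch (the 503 + code contract is this commit's; the
surrounding api.ts plumbing is elided):

  export class PlatformUnavailableError extends Error {}

  async function classify503(res: Response, method: string, path: string): Promise<Error> {
    try {
      const body = await res.json();
      if (body?.code === "platform_unavailable") {
        // Typed error: page.tsx renders the diagnostic instead of the overlay.
        return new PlatformUnavailableError(body.error ?? "platform unavailable");
      }
    } catch {
      // non-JSON body: fall through to the generic error
    }
    // Generic 503 (upstream busy, etc.) keeps the legacy busy-retry path.
    return new Error(`API ${method} ${path}: 503`);
  }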
Canvas (page.tsx):
- New PlatformDownDiagnostic component renders when the
initial hydration catches PlatformUnavailableError.
Surfaces the actual condition with operator-actionable
copy ("brew services start postgresql@14 / redis") +
pointer to the platform log + a Reload button.
Tests:
- Go: TestAdminAuth_DatastoreError_Returns503PlatformUnavailable
pins the response shape (status, code field, no DB-error leak)
- Canvas: 5 tests for PlatformUnavailableError classification —
typed throw on 503+code match, generic-Error fallback for
503-without-code (upstream busy), 500 stays generic, non-JSON
body falls back to generic.
1015 canvas tests + full Go middleware suite pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The actual root-cause fix for the staging-tabs E2E saga (#2073/#2074/#2075).
Old behaviour: ANY 401 from any fetch on a SaaS tenant subdomain
called redirectToLogin → window.location.href = AuthKit. This is
wrong. Plenty of 401s don't mean "session is dead":
- workspace-scoped endpoints (/workspaces/:id/peers, /plugins)
require a workspace-scoped token, not the tenant admin bearer
- resource-permission mismatches (user has tenant access but not
this specific workspace)
- misconfigured proxies returning 401 spuriously
A single transient one of those yanked authenticated users back to
AuthKit. Same bug yanked the staging-tabs E2E off the tenant origin
mid-test for 6+ hours tonight, leading to the cascade of test-side
mocks (#2073/#2074/#2075) that worked around the symptom without
fixing the cause.
This PR fixes it at the source. The new logic:
- 401 on /cp/auth/* path → that IS the canonical session-dead
signal → redirect (unchanged)
- 401 on any other path with slug present → probe /cp/auth/me:
probe 401 → session genuinely dead → redirect
probe 200 → session fine, endpoint refused this token →
throw a real Error, caller renders error state
probe network err → assume session-fine (conservative) →
throw real Error
- slug empty (localhost / LAN / reserved subdomain) → throw
without redirect (unchanged)
The probe adds one extra fetch on a 401, only when slug is set
and the path isn't already auth-scoped. That's rare and
worthwhile — a transient probe round-trip is cheap; an unwanted
auth redirect is a UX disaster.
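The decision path, sketched (redirectToLogin stands in for the AuthKit
redirect; error copy is illustrative):

  function redirectToLogin(): void {
    window.location.href = "https://authkit.example/login"; // stand-in URL
  }

  async function handle401(path: string, slug: string): Promise<never> {
    if (path.startsWith("/cp/auth/")) {
      redirectToLogin(); // a 401 here IS the canonical session-dead signal
      throw new Error("session expired");
    }
    if (!slug) throw new Error(`API 401 on ${path}`); // localhost/LAN: never redirect

    let probeStatus: number | null = null;
    try {
      probeStatus = (await fetch("/cp/auth/me")).status;
    } catch {
      probeStatus = null; // network error: conservatively assume session is fine
    }
    if (probeStatus === 401) {
      redirectToLogin(); // probe confirms the session is genuinely dead
      throw new Error("session expired");
    }
    // Session alive (or unverifiable): this endpoint refused this token.
    throw new Error(`API 401 on ${path} (session still valid)`);
  }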
Tests:
- api-401.test.ts rewritten with the full matrix:
* /cp/auth/me 401 → redirect (no probe, that IS the signal)
* non-auth 401 + probe 401 → redirect
* non-auth 401 + probe 200 → throw, no redirect ← the fix
* non-auth 401 + probe network err → throw, no redirect
* empty slug paths (localhost/LAN/reserved) → throw, no probe
- 43 tests in canvas/src/lib/__tests__/api*.test.ts all pass
- tsc clean
The staging-tabs E2E spec's universal-401 route handler stays as
defense-in-depth (silences resource-load console noise + guards
against panels without try/catch), but the comment now describes
its role honestly: api.ts is the primary fix, the route is the
safety net.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Bundle review of pieces 1/2/3 surfaced two critical issues plus a
handful of required + optional fixes. All addressed.
Critical:
1. Migration 043 was missing 'paused' and 'hibernated' from the
workspace_status enum. Both are real production statuses written
by workspace_restart.go (lines 283 and 406), introduced by
migration 029_workspace_hibernation. The original `USING
status::workspace_status` cast would have errored mid-transaction
on any production DB containing those values. Added both. Also
added `SET LOCAL lock_timeout = '5s'` so the migration aborts
instead of stalling the workspace fleet behind a slow SELECT.
2. The chat activity-feed window kept only 8 lines, and a single
multi-tool turn (Read 5 files + Grep + Bash + Edit + delegate)
easily flushed older context before the user could read it.
Extracted appendActivityLine to chat/activityLog.ts with a
20-line window AND consecutive-duplicate collapse (same tool
on the same target twice in a row is noise, not new progress).
5 unit tests pin the behavior.
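The window + collapse rule, sketched (line rendering elided; the real
helper lives in chat/activityLog.ts):

  const ACTIVITY_WINDOW = 20;

  // Consecutive-duplicate collapse: the same tool on the same target twice
  // in a row is noise, not new progress. Duplicates compare equal here
  // because the summary string embeds tool + target.
  export function appendActivityLine(lines: string[], next: string): string[] {
    if (lines[lines.length - 1] === next) return lines;
    return [...lines, next].slice(-ACTIVITY_WINDOW);
  }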
Required:
3. The SDK wedge flag was sticky-only — a single transient
Control-request-timeout from a flaky network blip locked the
workspace into degraded for the whole process lifetime, even
when the next query() would have succeeded. Added
_clear_sdk_wedge_on_success(), called from _run_query's success
path. The next heartbeat after a working query reports
runtime_state empty and the platform recovers the workspace to
online without a manual restart. New regression test.
4. _report_tool_use now sets target_id = WORKSPACE_ID for self-
actions, matching the convention other self-logged activity
rows use. DB consumers joining on target_id see a well-defined
value instead of NULL.
Optional taken:
5. Tightened _WEDGE_ERROR_PATTERNS from "control request timeout"
to "control request timeout: initialize" — suffix-anchored so a
future SDK error on an in-flight tool-call control message
doesn't get misclassified as the unrecoverable post-init wedge.
6. Dropped the redundant "context canceled" substring fallback in
isUpstreamBusyError. errors.Is(err, context.Canceled) is the
typed check; the substring would also match healthy client-side
aborts, which we don't want classified as upstream-busy.
Verified: 1010 canvas tests + 64 Python tests + full Go suite pass;
migration applies cleanly on dev DB with all 8 enum values; reverse
migration restores TEXT.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two halves of the same UX win — the user wants to see what Claude is
doing while a chat reply is in flight instead of staring at "0s" for
minutes.
Workspace side (claude_sdk_executor.py):
- The executor's _run_query message loop already iterated the SDK
stream for AssistantMessage.TextBlock content. Now also detects
ToolUseBlock / ServerToolUseBlock entries (by class name, since
the conftest stub doesn't define them) and fires-and-forgets a
POST /workspaces/:id/activity row of type agent_log per tool use.
- _summarize_tool_use maps the common tools (Read, Write, Edit,
Bash, Glob, Grep, WebFetch, WebSearch, Task, TodoWrite) to a
one-line summary with the file path / pattern / command, falling
back to "🛠 <tool>(…)" for anything else. Truncated at 200 chars.
- Posts directly to /workspaces/:id/activity rather than going
through a2a_tools.report_activity, which would also push a
/registry/heartbeat current_task and double-log as a TASK_UPDATED
line in the same chat feed.
- All failures swallowed silently — telemetry must not break
the conversation.
Canvas side (ChatTab.tsx):
- The existing ACTIVITY_LOGGED handler streams a2a_send /
a2a_receive / task_update events into a sliding-window
activityLog state. Two issues fixed:
1. No `msg.workspace_id === workspaceId` filter — a sibling
workspace's a2a_send was leaking into the wrong chat
panel as "→ Delegating to X...". Added an early return.
2. No agent_log render branch. Added one that renders the
summary verbatim (the workspace already prefixed its
own emoji icon, so no double-icon).
- Existing 8-line sliding window keeps the UI scoped; older
progress lines naturally roll off as new ones arrive.
Result: when DD is delegating to Visual Designer + reading
config files + running Bash to lint, the spinner area shows:
📄 Read /configs/system-prompt.md
⚡ Bash: pnpm test
→ Delegating to Visual Designer...
← Visual Designer responded (47s)
instead of bare "0s · Processing with Claude Code..." for minutes.
63 Python tests + 58 canvas chat tests pass; tsc clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three required fixes from the bundle review of 391e1872:
1. workspace/a2a_client.py: substring `type_name in msg` could miss
the diagnostic prefix when an exception's message embedded a
different class name mid-string (e.g. `OSError("see ConnectionError
below")` → printed as plain msg, type lost). Switched to a
prefix-anchored check (`msg.startswith(f"{type_name}:")` etc.) so
the type label is always added when not already at the start of
the message.
2. workspace/a2a_tools.py: `activity_logs.error_detail` is unbounded
TEXT on the platform (handlers/activity.go does not validate
length). A buggy or hostile peer could stream arbitrarily large
error messages into the caller's activity log. Cap at 4096 chars
at the producer — comfortably above any real exception traceback,
well below an obvious-DoS threshold.
3. New regression test for JSON-RPC `code=0` — pins the
`code is not None` semantics so the code is preserved in the
detail rather than collapsing into the no-code path. Code=0 is
not valid per the spec, but a malformed peer can still emit it
and we want it visible for diagnosis.
Plus one optional taken: extracted the A2A-error → hint mapping into
canvas/src/components/tabs/chat/a2aErrorHint.ts. The two prior copies
(AgentCommsPanel.inferCauseHint + ActivityTab.inferA2AErrorHint) had
already drifted — Activity tab gained `not found`/`offline` cases the
chat panel never picked up, AgentCommsPanel handled empty-input
explicitly while Activity didn't. The shared module is the merged
superset, with 10 unit tests pinning each named pattern + the
"most specific first" ordering (Claude SDK wedge wins over generic
timeout).
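Shape of the merged table, sketched (patterns named in this commit and
its parent; hint copy is illustrative). Most-specific-first is load-bearing:

  const HINTS: Array<[RegExp, string]> = [
    // the SDK init wedge also contains "timeout", so it must come first
    [/control request timeout: initialize/i, "Claude SDK wedged at init; restart the workspace"],
    [/timeout/i, "Peer is busy or stuck; check its activity feed"],
    [/connection.*reset/i, "Transient network blip; retry, then check logs"],
    [/not found/i, "Target workspace does not exist or was deleted"],
    [/offline/i, "Target workspace is offline; start it first"],
  ];

  export function inferA2AErrorHint(detail: string): string | null {
    if (!detail) return null; // the empty-input case handled explicitly
    for (const [pattern, hint] of HINTS) if (pattern.test(detail)) return hint;
    return null;
  }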
Skipped (per analysis):
- Unicode-naive 120-char slice — Python str[:N] slices on code
points, not bytes. Safe.
- Nested [A2A_ERROR] confusion — non-issue per reviewer; outer
prefix winning still produces a structured render.
- MessagePreview + JsonBlock dual render on errors — intentional
drilldown; raw JSON is below the fold for operators who need it.
- console.warn dedup — refetches don't happen per-event so spam
risk is low.
- str(data)[:200] materialization — A2A response bodies aren't
typically MB-sized.
Verified: 1005 canvas tests pass (10 new hint tests); 10 Python
send_a2a_message tests pass (1 new for code=0); tsc clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Symptom: Activity tab and Agent Comms surfaced bare "[A2A_ERROR] "
(prefix + nothing) for failed delegations. Operator had no signal
to act on — no exception type, no target, no hint about what went
wrong, no next step. Fix is in three layers.
1. workspace/a2a_client.py — every error path now produces an
actionable detail string:
- except branch: some httpx exceptions (RemoteProtocolError,
ConnectionReset variants) stringify to "". Pre-fix the catch
was `f"{_A2A_ERROR_PREFIX}{e}"` → bare prefix. Now falls back
to `<TypeName> (no message — likely connection reset or silent
timeout)` and always appends `[target=<url>]` for traceability
in chained delegations.
- JSON-RPC error branch: previously dropped error.code on the
floor and printed "unknown" when message was missing. Now
surfaces both, including the well-defined "JSON-RPC error
with no message (code=N)" path.
- "neither result nor error" branch: pre-fix returned
str(payload) which the canvas rendered as a successful
response block. Now tagged as A2A_ERROR with a payload
snippet so downstream UI routes through the error path.
2. workspace/a2a_tools.py — tool_delegate_task now passes
error_detail (the stripped error message) through to the
activity-log POST. The platform's activity_logs.error_detail
column is the canvas's red error chip source; populating it
makes the failure visible in the row header without the user
having to expand into raw response_body JSON. The summary line
also gets a 120-char prefix of the cause so the collapsed row
reads "React Engineer failed: ConnectionResetError: ... [target=...]"
instead of "React Engineer failed".
3. canvas/src/components/tabs/ActivityTab.tsx — MessagePreview
now detects [A2A_ERROR]-prefixed bodies and renders a
structured error block (red chip, stripped detail, cause hint)
instead of the previous gray text-block that showed the literal
"[A2A_ERROR]" string. inferA2AErrorHint mirrors the patterns
from AgentCommsPanel.inferCauseHint so the same symptom reads
the same way in both surfaces (Claude SDK init wedge → restart
workspace; timeout → busy/stuck; connection-reset → transient
blip then check logs).
Tests: 9 send_a2a_message tests pass (including a new regression
test for the empty-stringifying-exception case that the user
reported); 995 canvas tests pass; tsc clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reported symptom: canvas edges show "1 call · just now" between two
agents, but the Agent Comms tab for the source workspace renders
"No agent-to-agent communications yet" — even though
GET /workspaces/<id>/activity?source=agent&limit=50 returns a2a_send
+ a2a_receive rows.
Confirmed via curl that the API does return the rows the panel
should map. The panel's load handler was the suspect, but it had:
.catch(() => setLoading(false))
which swallowed every failure path — network errors, JSON parse,
ANY throw inside the .then body — without leaving a single trace in
the console. The panel just sat on its empty state and gave the user
zero signal to act on. (And by extension, gave us nothing to debug
remotely either.)
Two changes:
1. Wrap the per-row `toCommMessage` call in a try/catch so one
malformed activity row (unexpected request_body shape, etc.)
doesn't throw out of the for-loop and skip the
setMessages(msgs) line. Previously the panel would silently
drop the entire batch when ANY row failed to parse.
2. Replace the bare `.catch(() => setLoading(false))` with a
logging variant. Now a future "panel stuck empty" report comes
with `AgentCommsPanel: load activity failed <err>` or
`AgentCommsPanel: failed to map activity row {...}` in the
console — diagnosable instead of opaque.
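Both changes, sketched (row and message types elided to unknown):

  async function loadActivity(
    fetchRows: () => Promise<unknown[]>,
    toCommMessage: (row: unknown) => unknown | null,
    setMessages: (msgs: unknown[]) => void,
  ): Promise<void> {
    try {
      const msgs: unknown[] = [];
      for (const row of await fetchRows()) {
        try {
          const m = toCommMessage(row); // one bad row must not drop the batch
          if (m) msgs.push(m);
        } catch (err) {
          console.warn("AgentCommsPanel: failed to map activity row", row, err);
        }
      }
      setMessages(msgs);
    } catch (err) {
      console.warn("AgentCommsPanel: load activity failed", err); // was silent
    }
  }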
Behavior on the happy path is unchanged (5 existing tests still
pass; tsc clean). This is purely defensive: it makes the failure
path visible so the next stuck-empty report can be root-caused
instead of guessed at.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Bundle-level review caught an implicit coupling in useCanvasViewport
between two distinct fit effects:
- settle fit: 1200ms one-shot when provisioning transitions to zero
(deploy just finished — settle on the whole org once)
- tracking fit: 500ms debounced per molecule:fit-deploying-org event
(track the org's bounds as children land during the deploy)
Both effects shared a single autoFitTimerRef, so each one's
clearTimeout call could silently cancel the other's pending fit.
Today's behavior happened to land in the right order out of luck —
the tracking handler fires per-arrival during the deploy, then the
settle effect arms after the last child completes. But nothing in
the code enforces that ordering; a future refactor that, say,
fires the settle effect from the same event sequence as the
tracking timer (mid-deploy status flicker) would silently drop the
settle fit because the tracking timer's clearTimeout ran last.
Splitting into settleFitTimerRef + trackingFitTimerRef makes the
two effects fully independent. Cleanup clears both. Tests still pass
(995/995); the refactor is mechanical.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three review-driven fixes plus regression coverage for the bugs
landed in 176b703d / deedb5ef:
1. clearTimeout the prior reload handle before scheduling a new one in
both installFromSource and handleUninstall. Two installs within the
PLUGIN_RELOAD_DELAY_MS window (15s) used to queue two
loadInstalled() calls; the unmount cleanup only cleared the latest
handle, and the second reconciliation could overwrite a still-
correct optimistic state with a stale snapshot mid-restart.
2. Drop `setInstalledLoaded(true)` from the optimistic block. That
flag's contract is "the initial GET has succeeded at least once" —
it gates the auto-expand-registry effect. A user installing a
custom-source plugin BEFORE the initial fetch returned would flip
the gate prematurely, the auto-expand would never fire, and a
followup loadInstalled racing with the optimistic write could
overwrite our entry with [] mid-restart.
3. Don't force `supported_on_runtime: true` on the optimistic record.
The "inert on this runtime" badge in the row renders on the value
`=== false`. Forcing true would hide the badge for 15s if the user
installed a plugin that doesn't actually support the workspace's
runtime; the real value lands at refetch. Leaving the field
undefined keeps the badge neutral until reconciliation arrives.
Plus a behavioral test (SkillsTab.install.test.tsx) that asserts:
- the install POST URL contains the workspaceId (not "undefined")
- the row's "Install" button is replaced by the green "Installed"
tag synchronously after POST resolves, without advancing any
timer — locks in the optimistic-update contract so a future
refactor can't silently regress it.
995 canvas tests pass (2 new); tsc clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
After clicking Install, the button reverted from "Installing..." → "Install"
the moment the POST returned, then sat there for ~15s before the green
"Installed" tag appeared. The 15s gap is PLUGIN_RELOAD_DELAY_MS — we
delay the GET /workspaces/:id/plugins refetch to wait for the workspace
to restart (the listing handler returns [] while the container is
restarting because findRunningContainer comes up empty).
Uninstall already does optimistic local-state mutation (line 244 prior
to this commit) so the green tag → install button transition is
instant. Install was the inconsistent half — push the registry entry
into `installed` immediately after POST returns 200 and let the
delayed refetch reconcile.
The optimistic record uses the registry entry's metadata (name,
version, description, tags, runtimes, skills) and sets
supported_on_runtime=true. If reconciliation later disagrees (server
filter, install actually failed at the runtime layer), the refetch
overwrites the local record. Worst case is a brief 15s window where
we show "Installed" for a plugin that won't load — same window the
user previously experienced as "stuck on Install button" — but flipped
to the correct expected state.
Custom-source installs (github://, etc.) don't have a registry entry
to use, so they keep the old behavior of waiting for the refetch. Most
users install from the registry list in the UI.
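The optimistic write, sketched (entry fields abbreviated; note the
review follow-up above drops the forced supported_on_runtime flag):

  interface RegistryEntry { name: string; version: string; description: string }
  interface InstalledPlugin extends RegistryEntry { supported_on_runtime?: boolean }

  // After the install POST returns 200: surface the row immediately; the
  // refetch after PLUGIN_RELOAD_DELAY_MS reconciles with the server truth.
  function optimisticInstall(installed: InstalledPlugin[], entry: RegistryEntry): InstalledPlugin[] {
    return [...installed, { ...entry, supported_on_runtime: true }];
  }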
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
SkillsTab read `data.id` from its props and used the value to build
two API URLs:
  POST /workspaces/${data.id}/plugins
  DELETE /workspaces/${data.id}/plugins/${pluginName}
But `data` is the React Flow node.data blob (WorkspaceNodeData) —
the workspace id lives on `node.id`, NOT on `node.data`. WorkspaceNodeData
extends `Record<string, unknown>`, which makes `data.id` type-check
silently as `unknown` instead of erroring. So every install/uninstall
hit `/workspaces/undefined/plugins`, the server's not-found path
returned 503 "workspace container not running" (misleading — the real
issue was the bogus URL), and the user got a confusing toast.
Every other tab in SidePanel takes `workspaceId={selectedNodeId}` as
an explicit prop. SkillsTab was the lone outlier, presumably because
"data has all the fields I need" is the obvious-looking shortcut that
TypeScript can't catch through the index-signature interface.
Fix: make `workspaceId` an explicit prop on SkillsTab, drop the
`data.id` reads, thread the prop from SidePanel like the other tabs.
Test fixture updated to pass it.
Verified: 993 canvas tests pass; tsc clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Independent code review surfaced two required documentation fixes and
one growth-correctness gap. All addressed here.
Auto-fit gate (useCanvasViewport):
The previous "subtree-grew-by-count" check missed the delete-then-add
case: subtree of 6 → delete one → 5 → a different child arrives → 6
again. A length-only comparison reads no growth and the fit is
skipped, leaving the new node off-screen. Switched to an id-set
membership snapshot so any brand-new id forces the fit even when the
count is unchanged.
The gate logic is now extracted as a pure exported function
`shouldFitGrowing(currentIds, prevIds, userPannedAt, lastAutoFitAt)`
so the regression-prone decision can be unit-tested in isolation
without standing up React Flow + DOM event refs. 8 cases cover:
first-fit, empty-prior, brand-new id, status-update with user pan,
no-pan-ever, pan-before-last-fit, delete-then-add same length, and
shrink-only with user pan.
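The gate, sketched (parameter names from the commit; the pan tie-break
shown is one consistent reading of the eight cases):

  export function shouldFitGrowing(
    currentIds: Set<string>,
    prevIds: Set<string> | undefined,
    userPannedAt: number | null,
    lastAutoFitAt: number | null,
  ): boolean {
    if (!prevIds) return currentIds.size > 0; // first fit
    // Id-set membership, not length: any brand-new id forces the fit, even
    // after a user pan (growth overrides pan per the deploy-watch use case).
    for (const id of currentIds) if (!prevIds.has(id)) return true;
    // No growth: fit only if the user hasn't panned since our last auto-fit.
    return !(userPannedAt !== null && lastAutoFitAt !== null && userPannedAt > lastAutoFitAt);
  }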
Parser parity (dotenv.go + next.config.ts):
Existing-env semantics were undocumented in both parsers. Both now
explicitly note that an explicitly-set empty string (`KEY=` from the
parent shell) counts as "set" — the file value does NOT backfill —
matching the Go (os.LookupEnv) and Node (`process.env[k] !==
undefined`) primitives.
`export ` prefix uses a literal space; `export\tFOO=bar` is
intentionally rejected. Added the same comment in both parsers
to lock in this parity invariant since the commit message claims
"if one parser changes, the other has to."
Skipped (per analysis):
- Respecting left-click drag-pan during deploy. The
growth-check safety net means any pan gets overridden on the
next arrival anyway, which is the desired behavior for the
"watch the org deploy" use case. After deploy completes, no
more fit-deploying-org events fire so drag-pan works freely.
- Map cleanup for lastFitSubtreeIdsRef. Per-tab session, UUID
keys, tiny entries — not worth the cleanup hook.
993 canvas tests pass (8 new); Go dotenv tests pass; tsc clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>