[Molecule-Platform-Evolvement-Manager]
Closes the fourth and final item from #2071 — but at a slightly
different layer than the issue listed: tests `dragUtils.ts` (the
74-LOC pure-ish geometry helpers) instead of the full 296-LOC
`useDragHandlers` hook. Rationale below.
15 cases across 2 buckets (the geometry both helpers encode is sketched after the list):
**shouldDetach (8):**
- child fully inside parent → false
- child drifted slightly past edge but under DETACH_FRACTION → false
- child past 20% threshold on X → true (un-nest)
- child past 20% threshold on Y → true (un-nest)
- missing child node → true (conservative fallback per source comment)
- missing parent node → true (same)
- measured size absent → falls back to React Flow's 220x120 defaults
(mirrors initial-mount race where measurement hasn't run yet)
- DETACH_FRACTION constant pinned at 0.2 (Miro/tldraw convention)
**clampChildIntoParent (7):**
- child already inside bounds → no-op (no setState — proven by
reference equality on mockState.nodes)
- drifted past top-left → clamps to (0, 0)
- drifted past bottom-right → clamps to (parentW - childW, parentH - childH)
- per-axis independence: X past edge + Y inside → only X clamps
- child not in store → early return, no setState
- child internalNode missing → early return, no setState
- multi-node store: clamping one node MUST NOT touch siblings
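As promised above, a minimal TypeScript sketch of the geometry these cases pin. Everything below is reconstructed from the test descriptions, not copied from `dragUtils.ts`; exact names, signatures, and the fallback constants may differ:

```typescript
// Hypothetical reconstruction of the two helpers' geometry (not the real dragUtils.ts).
const DETACH_FRACTION = 0.2;   // assumed: Miro/tldraw-style 20% threshold
const FALLBACK_W = 220;        // assumed React Flow measurement fallback
const FALLBACK_H = 120;

interface Rect { x: number; y: number; width?: number; height?: number }

// True when the child has drifted more than DETACH_FRACTION of its own size
// past the parent's bounds on either axis. Missing nodes detach conservatively.
function shouldDetach(child?: Rect, parent?: Rect): boolean {
  if (!child || !parent) return true;
  const cw = child.width ?? FALLBACK_W;
  const ch = child.height ?? FALLBACK_H;
  const pw = parent.width ?? FALLBACK_W;
  const ph = parent.height ?? FALLBACK_H;
  const overX = Math.max(-child.x, child.x + cw - pw, 0);   // child.x is parent-relative
  const overY = Math.max(-child.y, child.y + ch - ph, 0);
  return overX > cw * DETACH_FRACTION || overY > ch * DETACH_FRACTION;
}

// Clamp each axis independently; a child already inside comes back unchanged
// (same object reference, so callers can skip setState on a no-op).
function clampChildIntoParent(child: Rect, parentW: number, parentH: number): Rect {
  const cw = child.width ?? FALLBACK_W;
  const ch = child.height ?? FALLBACK_H;
  const x = Math.min(Math.max(child.x, 0), parentW - cw);
  const y = Math.min(Math.max(child.y, 0), parentH - ch);
  return x === child.x && y === child.y ? child : { ...child, x, y };
}
```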
## Why dragUtils, not the full useDragHandlers hook
The hook (296 LOC) orchestrates React Flow drag events + Zustand
mutations. Testing it would need heavyweight `useReactFlow` +
internal-node + `setDragOverNode` / `nestNode` / `batchNest` /
`isDescendant` mocks just to drive event handlers — and the
*decisions* the hook makes all delegate to these two helpers:
- `shouldDetach` decides "is this a real un-nest?"
- `clampChildIntoParent` snaps the child back when the user drifted
slightly past the edge without holding Alt/Cmd
Pinning these locks the hot path the user feels. The hook's
remaining surface (modifier-key snapshotting, drop-target
broadcasting, commit-on-release grow pass) is plumbing — worth
testing as a follow-up if it ever regresses, but lower
correctness leverage per LOC of test setup.
## #2071 status after this PR
- [x] useTemplateDeploy (#2121)
- [x] A2AEdge (#2143)
- [x] OrgCancelButton (#2145)
- [x] dragUtils geometry helpers (this PR)
- [ ] Full useDragHandlers hook orchestration — explicit deferral
with rationale above
## Test plan
- [x] All 15 cases pass locally (`vitest run dragUtils.test.ts` — 131ms)
- [x] No changes to the SUT — pure additive coverage
- [ ] CI green
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[Molecule-Platform-Evolvement-Manager]
Closes the third item from #2071 (Canvas test gaps follow-up). Builds
on the A2AEdge tests in PR #2143.
10 cases across 4 buckets:
**Render (2):**
- Default pill with `Cancel (N)` text + correct ARIA label
- Confirm dialog NOT visible until pill click
**Pill click (3):**
- Click flips to confirming view + stops propagation (so React Flow
doesn't interpret the click as a node selection)
- Confirm copy pluralizes correctly: count=1 → "Delete 1 workspace?",
count>1 → "Delete N workspaces?". Negative assertion guards against
  the wrong form regressing in either direction.
**No / cancel-confirm (1):**
- Click No → returns to pill, no API call, no store mutation
**Yes / cascade-delete (4):**
- Happy path: beginDelete locks the WHOLE subtree (root + children,
NOT unrelated workspace) → api.del("/workspaces/<id>?confirm=true")
→ optimistic store filter strips subtree, keeps unrelated → success
toast → endDelete in finally
- WS-event race: WS_REMOVED handler clears the root mid-flight. The
bail-out branch (`!postDeleteState.nodes.some(n => n.id === rootId)`)
  must NOT then run a second optimistic filter. Pre-fix, the post-await
subtree walk would miss any orphaned descendants whose parentId got
reparented upward by handleCanvasEvent — pinned now.
- Error path: api.del rejects → endDelete UNDOes the lock + error
toast surfaces the message → subtree STAYS in the store so the user
can retry / interact with the still-deploying nodes
- Non-Error rejection (e.g. string thrown directly): toast surfaces
the canned "Cancel failed" fallback instead of attempting `.message`
## Mocking
- `@/lib/api`, `@/components/Toaster`: simple spy mocks
- `@/store/canvas`: object that satisfies BOTH the selector pattern
(`useCanvasStore(s => s.x)`) AND `getState()` / `setState()` since
the cascade-delete handler walks the subtree via `getState()` and
mutates via `setState()` for the optimistic removal. `vi.hoisted`
preserves referential identity so the mock fns wired into the
state object are observed by every consumer.
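A minimal sketch of that dual-mode store mock, with illustrative state fields (the real mock lives in the test file; shapes here are assumptions):

```typescript
import { vi } from "vitest";

// Illustrative only: one state object serves both the selector hook and the
// getState()/setState() singleton. vi.hoisted keeps referential identity so
// the same spies are seen by every consumer.
const { mockState } = vi.hoisted(() => ({
  mockState: {
    nodes: [] as Array<{ id: string; parentId?: string }>,
    beginDelete: vi.fn(),
    endDelete: vi.fn(),
  },
}));

vi.mock("@/store/canvas", () => {
  type State = typeof mockState;
  // Selector form: useCanvasStore(s => s.x)
  const hook = (selector?: (s: State) => unknown) =>
    selector ? selector(mockState) : mockState;
  // Imperative form the cascade-delete handler uses
  const useCanvasStore = Object.assign(hook, {
    getState: () => mockState,
    setState: (partial: Partial<State> | ((s: State) => Partial<State>)) =>
      Object.assign(mockState, typeof partial === "function" ? partial(mockState) : partial),
  });
  return { useCanvasStore };
});
```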
## Test plan
- [x] All 10 cases pass locally (`vitest run OrgCancelButton.test.tsx` — ~990ms)
- [x] No changes to the SUT — pure additive coverage
- [ ] CI green
## #2071 progress after this PR
- [x] useTemplateDeploy (PR #2121)
- [x] A2AEdge (PR #2143)
- [x] OrgCancelButton (this PR)
- [ ] useDragHandlers — separate PR
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When a target workspace's adapter has declared
provides_native_session=True (claude-code SDK's streaming session,
hermes-agent's in-container event log), the SDK owns its own queue/
session state. Adding the platform's a2a_queue layer on top would
double-buffer the same in-flight state — and worse, the platform
queue's drain timing has no relationship to the SDK's actual readiness,
so the queued request might dispatch while the SDK is STILL busy.
Behavior change: in handleA2ADispatchError, when isUpstreamBusyError(err)
fires and the target declared native_session, return 503 + Retry-After
directly without enqueueing. The caller's adapter handles retry on
its own schedule, and the SDK's own queue absorbs the request when
ready. Response body carries native_session=true so callers can
distinguish this from queue-failure 503s.
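For illustration only, the branch order sketched in TypeScript (the actual handler is Go; the Retry-After value and helper names below are assumptions, only the native_session marker is from the change itself):

```typescript
// Illustrative sketch of handleA2ADispatchError's busy-path ordering.
type BusyReply = { status: 503; retryAfter: number; body: { native_session?: true } };

const isUpstreamBusy = (err: unknown) => err instanceof Error && /busy/i.test(err.message);

function onDispatchError(err: unknown, targetHasNativeSession: boolean,
                         enqueue: () => boolean): BusyReply | "queued" {
  if (!isUpstreamBusy(err)) throw err;                  // unrelated errors keep their own path
  if (targetHasNativeSession) {
    // SDK owns its own session queue: don't double-buffer, tell the caller to retry.
    return { status: 503, retryAfter: 30, body: { native_session: true } };
  }
  if (enqueue()) return "queued";                       // platform-fallback queue path
  return { status: 503, retryAfter: 30, body: {} };     // legacy busy 503, no marker
}
```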
Observability is preserved: logA2AFailure still runs above; the
broadcaster still fires; the activity_logs row records the busy event
just like the platform-fallback path.
This is the consumer that validates the template-side declarations
already shipped in:
- molecule-ai-workspace-template-claude-code PR #12
- molecule-ai-workspace-template-hermes PR #25
Once those merge + image tags bump, claude-code + hermes workspaces'
busy 503s skip the platform queue end-to-end. End-to-end validation
of capability primitive #5.
Tests (2 new):
- NativeSession_SkipsEnqueue: cache pre-populated, deliberate
sqlmock with NO INSERT INTO a2a_queue expected — implicit
regression cover (sqlmock fails on unexpected queries). Asserts
503 + Retry-After + native_session=true marker in body.
- NoNativeSession_StillEnqueues: negative pin — empty cache, same
busy error → falls through to EnqueueA2A (which fails in this
test, falls through to legacy 503 without native_session marker).
Verification:
- All Go handlers tests pass (2 new + existing)
- go build + go vet clean
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Three small wins from the hermes-agent design survey, bundled because
each is too small for its own PR but they all improve the priority
adapters (claude-code + hermes) immediately.
1. Hermes-style cap on telemetry fields, applied INSIDE report_activity
so every caller benefits without having to remember to truncate. error_detail capped at
4096 (hermes' value); summary capped at 256 (one-liner ceiling). The
existing call site in tool_delegate_task already truncated error_detail
at 4096, but moving the cap into the helper closes the door on a
future caller pasting a giant traceback. response_text is NOT capped
(it's the agent's user-visible reply; truncating would silently drop
content). Pinned by 4 new tests including a negative-pin that
response_text MUST stay untruncated. The cap pattern is sketched after
this list.
2. Sharper MCP tool descriptions for commit_memory + recall_memory —
hermes' delegate_task description literally says "WAIT for the response"
and delegate_task_async says "Returns immediately." LLMs pick the
right tool variant from descriptions; ambiguity costs accuracy.
- commit_memory now states it APPENDS (each call creates a row, no
overwrite) and that GLOBAL requires tier 0.
- recall_memory now states it's case-insensitive substring search
with no pagination, returns all matches, and that empty-query is
cheap and safer than a narrow keyword.
3. (no code change) Filed task #120 for the bigger user-flow win — a
per-workspace tool enable/disable menu in Canvas Config — and task
#121 for model-string passthrough (depends on #87 universal-runtime
refactor).
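For item 1, the cap-inside-the-helper pattern sketched in TypeScript (the real helper is Python's report_activity; only the limits are taken from the description above):

```typescript
// Illustrative: field caps applied inside the reporting helper so no caller
// has to remember to truncate. response_text is deliberately left untouched.
const ERROR_DETAIL_MAX = 4096;   // hermes' value
const SUMMARY_MAX = 256;         // one-liner ceiling

const cap = (value: string | undefined, max: number) =>
  value !== undefined && value.length > max ? value.slice(0, max) : value;

interface Activity { summary?: string; error_detail?: string; response_text?: string }

function reportActivity(activity: Activity): Activity {
  return {
    ...activity,
    summary: cap(activity.summary, SUMMARY_MAX),
    error_detail: cap(activity.error_detail, ERROR_DETAIL_MAX),
    // response_text is the agent's user-visible reply: never truncated here.
  };
}
```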
Verification:
- 1312/1312 Python pytest pass (was 1308, +4 new)
See task #119 for the architectural follow-ups (event-log layer,
declarative skill compat, observability config block) and project
memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When an adapter declares provides_native_status_mgmt=True (because its
SDK reports its own ready/degraded/failed state explicitly), the
platform's error-rate-based status inference fights the adapter's own
state machine. This PR gates the inference branches on the capability
flag — adapter-driven transitions become authoritative.
Components:
- registry.go evaluateStatus: gate the two inferred-status branches
(online → degraded when error_rate ≥ 0.5; degraded → online when
error_rate < 0.1 and runtime_state is empty) behind a check of
runtimeOverrides.HasCapability("status_mgmt").
- The wedged-branch (RuntimeState == "wedged" → degraded) is NOT
gated. That path is the adapter's OWN self-report, not platform
inference, and stays active under native_status_mgmt — adapters
can still drive transitions via runtime_state.
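A sketch of the gating, in TypeScript for illustration (the real code is registry.go's evaluateStatus; thresholds follow the bullets above, everything else is assumed):

```typescript
// Illustrative only. Platform-inferred transitions are gated on the capability;
// the wedged branch is the adapter's own self-report and stays active.
type Status = "online" | "degraded";

function evaluateStatus(current: Status, errorRate: number, runtimeState: string,
                        hasNativeStatusMgmt: boolean): Status {
  if (runtimeState === "wedged") return "degraded";   // adapter self-report, never gated
  if (hasNativeStatusMgmt) return current;            // adapter owns status inference
  if (current === "online" && errorRate >= 0.5) return "degraded";
  if (current === "degraded" && errorRate < 0.1 && runtimeState === "") return "online";
  return current;
}
```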
Python side: no change. The capability map is already serialized via
RuntimeCapabilities.to_dict() in PR #2137 and sent in the heartbeat's
runtime_metadata block via PR #2139. An adapter setting
RuntimeCapabilities(provides_native_status_mgmt=True) automatically
flows through.
Tests (3 new):
- SkipsDegradeInference: error_rate=0.8 + currentStatus=online + native
flag set → degrade UPDATE does NOT fire (sqlmock fails on unexpected
query, which is the regression cover)
- SkipsRecovery: error_rate=0.05 + currentStatus=degraded + native →
recovery UPDATE does NOT fire
- WedgedStillRespected: runtime_state="wedged" + native → wedged
branch DOES fire (adapter self-report stays active)
Verification:
- All Go handlers tests pass (3 new + existing)
- 1308/1308 Python pytest pass (unchanged — Python side unmodified)
- go build + go vet clean
Stacked on #2140 (already merged via cascade); branch is current with
staging since #2139 and #2140 merged.
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reviewer bot flagged: import was leftover from earlier scaffolding —
all test fixtures use sys.modules monkey-patching with SimpleNamespace
instead. Drop to unblock merge. Tests still 5/5 pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reviewer bot flagged: ChatTab.tsx imported extractResponseText but
no longer used it after the loop body moved to historyHydration.ts
(the helper imports it directly). Drop from the named import to
unblock merge. extractFilesFromTask remains used at line 515 for the
WS A2A_RESPONSE handler's reply-files extraction.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
When an adapter declares provides_native_scheduler=True (because its
SDK has built-in cron / Temporal-style workflows), the platform's
polling loop must skip firing schedules for that workspace — otherwise
the schedule fires twice (once natively, once via platform). The
native skip preserves observability (next_run_at still advances, the
schedule row stays in the DB, last_run_at would still update) while
moving the FIRE responsibility to the SDK.
Stacked on PR #2139 (idle_timeout_override end-to-end). The
RuntimeMetadata heartbeat block already carries the capability map;
this PR teaches the platform how to read and act on the scheduler bit.
Components:
- handlers/runtime_overrides.go: extended the cache to store
capability flags alongside idle timeout. Two heartbeat fields are
independent — SetIdleTimeout / SetCapabilities each update one
without stomping the other. Defensive copy on SetCapabilities so
a caller mutating its map after the call doesn't retroactively
change cached declarations. Empty entries dropped to avoid stale
husks.
- handlers/runtime_overrides.go: new HasCapability(workspaceID, name)
+ ProvidesNativeScheduler(workspaceID) — the latter is the
package-level adapter the scheduler imports (avoids a
handlers/scheduler import cycle).
- handlers/registry.go: heartbeat handler now calls SetCapabilities
in addition to SetIdleTimeout.
- scheduler/scheduler.go: NativeSchedulerCheck function-pointer DI
(mirrors the existing QueueDrainFunc pattern). New() leaves the
field nil so existing callers preserve today's "always fire"
behavior. SetNativeSchedulerCheck wires production. tick() drops
workspaces declaring native ownership before goroutine fan-out;
advances next_run_at so we don't tight-loop on the same row.
- cmd/server/main.go: wires handlers.ProvidesNativeScheduler into
the cron scheduler at server boot.
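A compact sketch of the cache + gate shape described above (TypeScript for illustration; the real implementation is Go across runtime_overrides.go, scheduler.go, and main.go):

```typescript
// Illustrative capability cache plus a function-pointer style scheduler gate.
const capabilities = new Map<string, Record<string, boolean>>();

function setCapabilities(workspaceID: string, caps: Record<string, boolean>): void {
  if (Object.keys(caps).length === 0) { capabilities.delete(workspaceID); return; } // no stale husks
  capabilities.set(workspaceID, { ...caps });                 // defensive copy
}

const providesNativeScheduler = (workspaceID: string): boolean =>
  capabilities.get(workspaceID)?.["scheduler"] === true;

// Scheduler side: injected check; leaving it unset preserves today's "always fire".
let nativeSchedulerCheck: ((workspaceID: string) => boolean) | undefined;
const setNativeSchedulerCheck = (fn: (workspaceID: string) => boolean): void => {
  nativeSchedulerCheck = fn;
};

// tick() drops natively-owned workspaces before fan-out (next_run_at handling elided).
function dueToFire(due: Array<{ workspaceID: string }>): Array<{ workspaceID: string }> {
  return due.filter(s => !(nativeSchedulerCheck?.(s.workspaceID) ?? false));
}

// Wired at boot, mirroring cmd/server/main.go:
setNativeSchedulerCheck(providesNativeScheduler);
```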
Tests:
Go (7 new):
- SetCapabilitiesAndHas (round-trip)
- per-workspace isolation (ws-a's declaration doesn't leak to ws-b)
- nil/empty map clears (adapter dropping the flag restores fallback)
- SetCapabilities is a defensive copy (caller mutation can't
retroactively flip cached value)
- SetIdleTimeout preserves capabilities and vice-versa (two-field
independence)
- empty entry deleted (no stale husks)
- ProvidesNativeScheduler reads the same singleton heartbeat writes
- SetNativeSchedulerCheck wires the function (scheduler-side)
- nil-check safety contract for tick
Python: no change needed — the heartbeat already serializes the
full capability map via _runtime_metadata_payload (PR #2139). An
adapter setting RuntimeCapabilities(provides_native_scheduler=True)
automatically flows through.
Verification:
- 1308 / 1308 Python pytest pass (unchanged)
- All Go handlers + scheduler tests pass
- go build + go vet clean
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Capability primitive #2 (task #117). The first cross-cutting capability
where the adapter actually displaces platform behavior — claude-code's
streaming session can legitimately go silent for 8+ minutes during
synthesis + slow tool calls; the platform's hardcoded 5min idle timer
in a2a_proxy.go cancels it mid-flight (the bug PR #2128 patched at
the env-var layer). This PR fixes it at the right layer: the adapter
declares "I need 600s" and the platform's dispatch path honors it.
Wire shape (Python → Go):
```
POST /registry/heartbeat
{
  "workspace_id": "...",
  ...
  "runtime_metadata": {
    "capabilities": {"heartbeat": false, "scheduler": false, ...},
    "idle_timeout_seconds": 600   // optional, omitted = use default
  }
}
```
Default behavior preserved: any adapter that doesn't override
BaseAdapter.idle_timeout_override() (returns None by default) sends
no idle_timeout_seconds field; the Go side falls through to
idleTimeoutDuration (env A2A_IDLE_TIMEOUT_SECONDS, default 5min).
Existing langgraph / crewai / deepagents workspaces are unaffected.
Components:
Python:
- adapter_base.py: idle_timeout_override() method on BaseAdapter
returning None (the platform-default sentinel).
- heartbeat.py: _runtime_metadata_payload() lazy-imports the active
adapter and assembles the capability + override block. Try/except
swallows ANY error so heartbeat never breaks because of capability
discovery — observability outranks capability accuracy.
Go:
- models.HeartbeatPayload.RuntimeMetadata (pointer so absent =
"old runtime, didn't say"; explicit zero-cap = "new runtime,
declared no native ownership").
- handlers.runtimeOverrides: in-memory sync.Map cache keyed by
workspaceID. Populated by the heartbeat handler, consulted on
every dispatchA2A. Reset on platform restart (worst-case 30s of
platform-default behavior — acceptable; nothing about overrides
is correctness-critical).
- a2a_proxy.dispatchA2A: looks up the override before applyIdleTimeout;
  falls through to the global default when absent.
Tests:
Python (17, all new):
- RuntimeCapabilities dataclass shape (frozen, defaults, wire keys)
- BaseAdapter.capabilities() default + override + sibling isolation
- idle_timeout_override default, positive override, dropped-override
- Heartbeat metadata producer: default adapter emits all-False,
native adapter emits flag + override, missing ADAPTER_MODULE
returns {} (graceful), zero/negative override is omitted from
wire, exception inside adapter swallowed
Go (6, all new):
- SetIdleTimeout + IdleTimeout round-trip
- Zero/negative duration clears the override
- Empty workspace_id ignored
- Replacement (heartbeat overwrites prior value)
- Reset clears entire cache
- Concurrent reads + writes (sync.Map invariant)
Verification:
- 1308 / 1308 workspace pytest pass (was 1300, +8)
- All Go handlers tests pass (6 new + existing)
- go vet clean
See project memory `project_runtime_native_pluggable.md` for the
architecture principle this implements.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[Molecule-Platform-Evolvement-Manager]
## What this fixes
Closes one of the three skipped tests in workspace_provision_test.go
that #1814's interface refactor enabled but never had a body written:
`TestProvisionWorkspace_NoInternalErrorsInBroadcast`.
The interface blocker (`captureBroadcaster` couldn't substitute for
`*events.Broadcaster`) was already fixed when `events.EventEmitter`
was extracted; this PR ships the test body that the prior refactor
made possible. The test was effectively unverified regression cover
for issue #1206 (internal error leak in WORKSPACE_PROVISION_FAILED
broadcasts) until now.
## What the test pins
Drives the **earliest** failure path in `provisionWorkspace` — the
global-secrets decrypt failure — so the setup needs only:
- one `global_secrets` mock row (with `encryption_version=99` to
force `crypto.DecryptVersioned` to error with a string that
includes the literal version number)
- one `UPDATE workspaces SET status = 'failed'` expectation
- a `captureBroadcaster` (already in the test file) injected via
`NewWorkspaceHandler`
Asserts the captured `WORKSPACE_PROVISION_FAILED` payload:
1. carries the safe canned `"failed to decrypt global secret"` only
2. does NOT contain `"version=99"`, `"platform upgrade required"`,
or the global_secret row's `key` value (`FAKE_KEY`) — the three
leak markers a regression that interpolates `err.Error()` into
the broadcast would surface
## Why not use containsUnsafeString
The test file already has a `containsUnsafeString` helper with
`"secret"` and `"token"` in its prohibition list. Those substrings
match the legitimate redacted message (`"failed to decrypt global
secret"`) — appropriate in user-facing copy, NOT a leak. Using the
broad helper would either fail the test against the source's own
correct message OR require loosening the helper for everyone else.
Per-test explicit leak markers keep the assertion precise without
weakening shared infrastructure.
## What's still skipped (out of scope for this PR)
- `TestProvisionWorkspaceCP_NoInternalErrorsInBroadcast` — same
shape but blocked on a different refactor: `provisionWorkspaceCP`
routes through `*provisioner.CPProvisioner` (concrete pointer,
no interface), so the test would need either an interface
extraction or a real CPProvisioner with a mocked HTTP server.
Larger scope; deferred.
- `TestResolveAndStage_NoInternalErrorsInHTTPErr` — different
blocker (`mockPluginsSources` vs `*plugins.Registry` type
mismatch). Needs a SourceResolver-side interface refactor.
Both still carry their `t.Skip` notes documenting the remaining
work.
## Test plan
- [x] New test passes
- [x] Full handlers package suite still green (`go test ./internal/handlers/`)
- [x] No changes to production code — pure test addition
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Foundation primitive for the native+pluggable runtime principle (task
#117, blocks #87). Lets each adapter declare which cross-cutting
capabilities it owns natively (heartbeat, scheduler, durable session,
status mgmt, retry, activity decoration, channel dispatch) versus
delegates to the platform's fallback implementation.
Pure additive: every existing adapter inherits BaseAdapter.capabilities()
which returns RuntimeCapabilities() — every flag False — so today's
"platform owns everything" behavior is preserved exactly. Subsequent
PRs land platform-side consumers (idle-timeout override, scheduler
skip, status-transition hook, etc.) one capability at a time.
Why a frozen dataclass instead of class attributes: capabilities are
declared at class-load time and read by the platform on every heartbeat.
A mutable value would let a runtime change capabilities mid-flight,
creating impossible-to-debug state where the platform's idea of
who-owns-heartbeat drifts from the adapter's actual code.
Why a `to_dict()` with explicit short keys: the Go side will read these
from the heartbeat payload by string key. The dict's wire names are
pinned independently of Python field names so a Python-side rename
doesn't silently break the Go consumer (test pins this).
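The same idea sketched in TypeScript (the real declaration is a frozen Python dataclass; the flag and wire names below are abbreviated examples, not the full set):

```typescript
// Illustrative only: immutable capability declaration with wire names pinned
// separately from the field names, so a rename can't silently break the Go reader.
interface RuntimeCapabilities {
  readonly providesNativeHeartbeat: boolean;
  readonly providesNativeScheduler: boolean;
  // ...remaining flags default to false the same way
}

const defaults: RuntimeCapabilities = Object.freeze({
  providesNativeHeartbeat: false,
  providesNativeScheduler: false,
});

// Explicit short wire keys, matching what the heartbeat payload carries.
function toDict(caps: RuntimeCapabilities = defaults): Record<string, boolean> {
  return {
    heartbeat: caps.providesNativeHeartbeat,
    scheduler: caps.providesNativeScheduler,
  };
}
```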
Tests (9 new):
- is a frozen dataclass (mutation rejected)
- all 7 default flags are False (load-bearing — flipping any default
silently moves ownership for langgraph/crewai/deepagents)
- to_dict() keys are stable wire names (Go contract)
- BaseAdapter.capabilities() default returns all-False
- subclass override mechanism works
- sibling adapters' defaults aren't affected by an override
Verification:
- 1300/1300 workspace pytest pass (was 1291, +9)
- Zero behavior change for any existing code path
See project memory `project_runtime_native_pluggable.md`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Reviewer follow-up to PR #2134 (Optional finding). The history loader
walked text on the user branch but never extracted file parts — so a
chat reload after a session where the user dragged in a file rendered
the text bubble but lost the download chip. Symmetric to the agent
branch which already handles this via extractFilesFromTask.
Wire shape from ChatTab's outbound POST:
request_body = {params: {message: {parts: [
{kind: "text", text: "..."},
{kind: "file", file: {uri, name, mimeType?, size?}}
]}}}
extractFilesFromTask walks `task.parts`, so we feed it `params.message`
(the inner object that has the parts array). Three new tests:
- hydrates file attachments from request_body
- emits an attachments-only bubble when text is empty (drag-drop
without caption — pre-fix the empty userText short-circuited and
the row was dropped entirely)
- internal-self predicate suppresses the row even with attachments
(defence-in-depth for future internal triggers)
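A sketch of the user-branch extraction described above (field names follow the wire shape; extractFilesFromTask's real signature is assumed):

```typescript
// Illustrative: hydrate the user bubble's attachments from the outbound request_body.
interface FilePart { kind: "file"; file: { uri: string; name: string; mimeType?: string; size?: number } }
interface TextPart { kind: "text"; text: string }
type Part = FilePart | TextPart;

// Assumed stand-in for the real extractFilesFromTask: walks parts and keeps file entries.
const extractFilesFromTask = (task?: { parts?: Part[] }) =>
  (task?.parts ?? []).filter((p): p is FilePart => p.kind === "file").map(p => p.file);

function hydrateUserRow(requestBody?: { params?: { message?: { parts?: Part[] } } }) {
  const message = requestBody?.params?.message;            // the inner object with parts[]
  const attachments = extractFilesFromTask(message);
  const text = message?.parts?.find((p): p is TextPart => p.kind === "text")?.text ?? "";
  if (!text && attachments.length === 0) return null;      // nothing to render
  return { text, attachments };                             // attachments-only bubbles allowed
}
```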
Stacked on #2134; this branch's parent commit is its tip.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User pushed back: the timestamp bug should have been caught by E2E.
Right — my earlier coverage tested the server contract (notify endpoint,
WS broadcast filter) but never the chat-history HYDRATION path. Without
a unit test that froze the wall clock and asserted timestamps came from
created_at, a future refactor could re-introduce the same bug.
This commit:
1. Extracts the per-row → ChatMessage[] mapping out of the closure
inside loadMessagesFromDB into chat/historyHydration.ts. Pure
function, no React dependency, easy to test.
2. Adds 12 vitest cases in __tests__/historyHydration.test.ts covering:
- Timestamp regression (3 tests, with system time frozen to 2030 so
a regression starts producing "2030-…" timestamps and the assertion
fails unmistakably). The third test mirrors the user's screenshot:
two rows with distinct created_at must produce distinct timestamps.
- User-message extraction (text, internal-self filter, null body)
- Agent-message extraction (text, error→system role, file attachments,
null body, body with neither text nor files)
- End-to-end: a single row with both request and response emits
two messages with the same timestamp (the canonical canvas-source
row pattern)
3. The new file-attachment test caught a SECOND latent bug — the helper
   was passing `response_body.result ?? response_body` to
   extractFilesFromTask, which for the notify-with-attachments shape
   `{result: "<text>", parts: [...]}` hands it the STRING "<text>", so it
   silently returns []. A chat reload after an agent attached a file would
   therefore lose the chips. Fixed by only unwrapping `result` when it's
   an object (the task shape) and falling through to response_body
   otherwise (the notify shape).
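The unwrap guard from point 3, roughly (a sketch, not the exact helper code):

```typescript
// Only unwrap `result` when it is the task-shaped object; the notify shape
// {result: "<text>", parts: [...]} keeps its parts on response_body itself.
function taskFromResponseBody(responseBody: { result?: unknown; parts?: unknown[] }) {
  return typeof responseBody.result === "object" && responseBody.result !== null
    ? (responseBody.result as { parts?: unknown[] })
    : responseBody;
}
```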
ChatTab now imports the helper and the loop body becomes one line:
`messages.push(...activityRowToMessages(a, isInternalSelfMessage))`.
Verification:
- 12/12 historyHydration tests pass
- 1072/1072 full canvas vitest pass (was 1060 before, +12)
- tsc --noEmit clean
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User flagged that all historical user bubbles render with the same
"now" clock after a chat reload — both messages in the screenshot
showed 9:01:58 PM despite being sent hours apart.
ChatTab.tsx:142 minted user messages with createMessage(...) which
calls new Date().toISOString() — fine for a freshly-typed message,
wrong for hydrated history. Every reload re-stamped all user bubbles
to the render moment, collapsing the visible chronology. The agent
path on line 157 already overrides with a.created_at; mirror that.
One-line fix (spread + override timestamp) plus a comment explaining
why the override is load-bearing so the next refactor doesn't drop it.
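Roughly, the shape of the fix (a sketch; the helper and row field names here are assumptions):

```typescript
// Illustrative: hydrated user bubbles must keep the row's created_at rather
// than the render-time clock that createMessage() would mint.
interface ChatMessage { id: string; role: "user" | "agent"; text: string; timestamp: string }

const createMessage = (text: string, role: ChatMessage["role"]): ChatMessage =>
  ({ id: crypto.randomUUID(), role, text, timestamp: new Date().toISOString() });

function hydrateUserMessage(row: { request_text: string; created_at: string }): ChatMessage {
  // Spread + override: the override is load-bearing, mirroring the agent branch.
  return { ...createMessage(row.request_text, "user"), timestamp: row.created_at };
}
```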
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User flagged a leftover "Notify E2E" workspace on the canvas — caused by
an earlier debug run getting SIGPIPE'd before the EXIT trap could fire.
Add an idempotent pre-sweep at the top of the script so the next run
cleans up any prior leftover with the same name. Belt-and-suspenders
with the existing trap; both have to fail for a leak to persist.
Verified:
- Normal run: 14/14 pass, 0 leftovers
- SIGTERM mid-setup: trap fires, 0 leftovers
- Re-run after interruption: pre-sweep + new run both clean
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Local dev mode bypassed workspace auth, so my first push passed locally
but failed CI with HTTP 401 on /notify. The wsAuth-grouped endpoints
(notify, activity, chat/uploads) require Authorization: Bearer in any
non-dev environment. Mint the token via the existing e2e_mint_test_token
helper and thread it through every authenticated curl. Same pattern as
test_api.sh.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User asked to "keep optimizing and comprehensive e2e testings to prove all
works as expected" for the communication path. Adds three layers of coverage
for PR #2130 (agent → user file attachments via send_message_to_user) since
that path has the most user-visible blast radius:
1. Shell E2E (tests/e2e/test_notify_attachments_e2e.sh) — pure platform test,
no workspace container needed. 14 assertions covering: notify text-only
round-trip, notify-with-attachments persists parts[].kind=file in the
shape extractFilesFromTask reads, per-element validation rejects empty
uri/name (regression for the missing gin `dive` bug), and a real
/chat/uploads → /notify URI round-trip when a container is up.
2. Canvas AGENT_MESSAGE handler tests (canvas-events.test.ts +5) — pin the
WebSocket-side filtering that drops malformed attachments, allows
attachments-only bubbles, ignores non-array payloads, and no-ops on
pure-empty events.
3. Persisted response_body shape test (message-parser.test.ts +1) — pins
the {result, parts} contract the chat history loader hydrates on
reload, so refreshing after an agent attachment restores both caption
and download chips.
Also wires the new shell E2E into e2e-api.yml so a contract regression
surfaces in CI rather than only in manual runs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User flagged two paper cuts in Agent Comms after the grouping PR:
"Delegating to f6f3a023-ab3c-4a69-b101-976028a4a7ec" reads as gibberish
because it's a UUID, and the chat is "one way" with only outbound bubbles
even though peers are clearly responding.
Both fixes are in toCommMessage's delegation branch:
1. Pull text from the actual payload, not the platform's audit-log summary.
- delegate row → request_body.task (the task text the agent sent).
Fallback when missing: "Delegating to <resolved-peer-name>" — never
the raw UUID.
- delegate_result row → response_body.response_preview / .text (the
peer's actual reply). Fallback paths render human-readable status
for queued / failed cases ("Queued — Peer Agent is busy on a prior
task...") instead of platform jargon.
2. delegate_result rows render flow="in" — even though source_id=us
(the platform writes the row on our side), the conversational
direction is peer → us. The chat now shows alternating bubbles
(out: "Build me 10 landing pages" → in: "Done — ZIP at /tmp/...")
instead of one-sided "→ To X" wall.
The WS push handler in this same file now populates request_body /
response_body from the DELEGATION_SENT / DELEGATION_COMPLETE event
payloads (task_preview, response_preview), so live-pushed bubbles use
the same text-extraction path as the GET-on-mount.
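A sketch of the delegation branch's text/direction logic (names assumed from the description above; the real code is toCommMessage):

```typescript
// Illustrative only. Prefer the actual payload text over the platform's audit
// summary; delegate_result rows read as inbound even though source_id is us.
interface ActivityRow {
  activity_type: "delegate" | "delegate_result";
  request_body?: { task?: string };
  response_body?: { response_preview?: string; text?: string; status?: string };
}

function delegationText(row: ActivityRow, peerName: string): { text: string; flow: "in" | "out" } {
  if (row.activity_type === "delegate") {
    // Never render the raw UUID: fall back to the resolved peer name.
    return { text: row.request_body?.task ?? `Delegating to ${peerName}`, flow: "out" };
  }
  const reply = row.response_body?.response_preview ?? row.response_body?.text;
  if (reply) return { text: reply, flow: "in" };
  if (row.response_body?.status === "queued") {
    return { text: `Queued, ${peerName} is busy on a prior task...`, flow: "in" };
  }
  return { text: `Delegation to ${peerName} failed`, flow: "in" };
}
```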
Tests:
- 4 new in toCommMessage's delegation branch:
- delegate row prefers request_body.task over summary
- delegate row falls back to name-resolved label when task missing
- delegate_result row is INBOUND (flow="in")
- delegate_result queued shows human-readable wait message including
the resolved peer name
- Replaces the previous "delegate row maps text from summary" tests
which encoded the (now-undesirable) platform-summary-as-text behavior.
- All 15 tests pass.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The chronological-only view turns into a noodle once Director + N peers
exchange more than a few rounds. New layout: a sub-tab bar at the
top of the panel, with "All" pinned leftmost and one tab per peer
(name + count). Selecting a peer filters the thread to that one
DD↔X conversation; "All" preserves the previous chronological view
as the default.
Tab ordering follows Slack/Linear DM-list convention: most-recent
activity descending, so active conversations rise to the top
without the user scrolling. Counts in parens match Slack's unread
hint pattern (no separate read/unread state — the count is total
in this conversation, computed from the same in-memory message
list the panel already maintains).
Pure-helper extraction: peer-summary derivation lives in
`buildPeerSummary(messages)` so the sort + count logic is unit-
testable without rendering the panel. 5 new tests cover: count
aggregation, most-recent-first ordering, lastTs as max-not-last,
empty input, name-stability when the same peerId carries different
names across messages.
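A sketch of the derivation (CommMessage field names assumed; the real helper is buildPeerSummary):

```typescript
// Illustrative: per-peer counts ordered by most-recent activity.
interface CommMessage { peerId: string; peerName: string; ts: number }
interface PeerSummary { peerId: string; peerName: string; count: number; lastTs: number }

function buildPeerSummary(messages: CommMessage[]): PeerSummary[] {
  const byPeer = new Map<string, PeerSummary>();
  for (const m of messages) {
    const entry = byPeer.get(m.peerId) ??
      { peerId: m.peerId, peerName: m.peerName, count: 0, lastTs: 0 };
    entry.count += 1;
    entry.lastTs = Math.max(entry.lastTs, m.ts);          // max, not "last element wins"
    if (!entry.peerName && m.peerName) entry.peerName = m.peerName; // one stable name per peerId
    byPeer.set(m.peerId, entry);
  }
  // Slack/Linear DM-list convention: most recent conversation first.
  return [...byPeer.values()].sort((a, b) => b.lastTs - a.lastTs);
}
```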
Keyboard: ArrowLeft/Right cycle peer tabs (matches the existing
My Chat / Agent Comms tab pattern in ChatTab). Auto-prune: if the
selected peer has zero messages after a setMessages update (rare,
e.g. dedupe drops the last bubble), fall back to "All" so the
viewer doesn't see an empty thread.
Frontend-only — no platform / runtime / DB changes. The existing
`peerId` / `peerName` fields on CommMessage already carry every
piece of data the new UI needs.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two Critical bugs caught in code review of the agent→user attachments PR:
1. **Empty-URI attachments slipped past validation.** Gin's
go-playground/validator does NOT iterate slice elements without
`dive` — verified zero `dive` usage anywhere in workspace-server —
so the inner `binding:"required"` tags on NotifyAttachment.URI/Name
were never enforced. `attachments: [{"uri":"","name":""}]` would
pass validation, broadcast empty-URI chips that render blank in
canvas, AND persist them in activity_logs for every page reload to
re-render. Added explicit per-element validation in Notify (returns
400 with `attachment[i]: uri and name are required`) plus
defence-in-depth in the canvas filter (rejects empty strings, not
just non-strings).
3-case regression test pins the rejection.
2. **Hardcoded application/octet-stream stripped real mime types.**
`_upload_chat_files` always passed octet-stream as the multipart
Content-Type. chat_files.go:Upload reads `fh.Header.Get("Content-Type")`
FIRST and only falls back to extension-sniffing when the header is
empty, so every agent-attached file lost its real type forever —
broke the canvas's MIME-based icon/preview logic. Now sniff via
`mimetypes.guess_type(path)` and only fall back to octet-stream
when sniffing returns None.
Plus three Required nits:
- `sqlmockArgMatcher` was misleading — the closure always returned
true after capture, identical to `sqlmock.AnyArg()` semantics, but
named like a custom matcher. Renamed to `sqlmockCaptureArg(*string)`
so the intent (capture for post-call inspection, not validate via
driver-callback) is unambiguous.
- Test asserted notify call by `await_args_list[1]` index — fragile
to any future _upload_chat_files refactor that adds a pre-flight
POST. Now filter call list by URL suffix `/notify` and assert
exactly one match.
- Added `TestNotify_RejectsAttachmentWithEmptyURIOrName` (3 cases)
covering empty-uri, empty-name, both-empty so the Critical fix
stays defended.
Deferred to follow-up:
- ORDER BY tiebreaker for same-millisecond notifies — pre-existing
risk, not regression.
- Streaming multipart upload — bounded by the platform's 50MB total
cap so RAM ceiling is fixed; switch to streaming if cap rises.
- Symlink rejection — agent UID can already read whatever its
filesystem perms allow via the shell tool; rejecting symlinks
doesn't materially shrink the attack surface.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes the gap where the Director would say "ZIP is ready at /tmp/foo.zip"
in plain text instead of attaching a download chip — the runtime literally
had no API for outbound file attachments. The canvas + platform's
chat-uploads infrastructure already supported the inbound (user → agent)
direction (commit 94d9331c); this PR wires the outbound side.
End-to-end shape:
```
agent: send_message_to_user("Done!", attachments=["/tmp/build.zip"])
  ↓ runtime
POST /workspaces/<self>/chat/uploads (multipart)
  ↓ platform
/workspace/.molecule/chat-uploads/<uuid>-build.zip
  → returns {uri: workspace:/...build.zip, name, mimeType, size}
  ↓ runtime
POST /workspaces/<self>/notify
{message: "Done!", attachments: [{uri, name, mimeType, size}]}
  ↓ platform
Broadcasts AGENT_MESSAGE with attachments + persists to activity_logs
with response_body = {result: "Done!", parts: [{kind:file, file:{...}}]}
  ↓ canvas
WS push: canvas-events.ts adds attachments to agentMessages queue
Reload: ChatTab.loadMessagesFromDB → extractFilesFromTask sees parts[]
Either path → ChatTab renders download chip via existing path
```
Files changed:
workspace-server/internal/handlers/activity.go
- NotifyAttachment struct {URI, Name, MimeType, Size}
- Notify body accepts attachments[], broadcasts in payload,
persists as response_body.parts[].kind="file"
canvas/src/store/canvas-events.ts
- AGENT_MESSAGE handler reads payload.attachments, type-validates
each entry, attaches to agentMessages queue
- Skips empty events (was: skipped only when content empty)
workspace/a2a_tools.py
- tool_send_message_to_user(message, attachments=[paths])
- New _upload_chat_files helper: opens each path, multipart POSTs
to /chat/uploads, returns the platform's metadata
- Fail-fast on missing file / upload error — never sends a notify
with a half-rendered attachment chip
workspace/a2a_mcp_server.py
- inputSchema declares attachments param so claude-code SDK
surfaces it to the model
- Defensive filter on the dispatch path (drops non-string entries
if the model sends a malformed payload)
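A sketch of the canvas-side attachment filter from the canvas-events bullet above (shapes assumed):

```typescript
// Illustrative: drop malformed entries before queuing the AGENT_MESSAGE bubble.
interface Attachment { uri: string; name: string; mimeType?: string; size?: number }

function validAttachments(payload: { attachments?: unknown }): Attachment[] {
  if (!Array.isArray(payload.attachments)) return [];      // non-array payloads ignored
  return payload.attachments.filter((a): a is Attachment =>
    typeof a === "object" && a !== null &&
    typeof (a as Attachment).uri === "string" && (a as Attachment).uri.length > 0 &&
    typeof (a as Attachment).name === "string" && (a as Attachment).name.length > 0
  );
}
```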
Tests:
- 4 new Python: success path, missing file, upload 5xx, no-attach
backwards compat
- 1 new Go: Notify-with-attachments persists parts[] in
response_body so chat reload reconstructs the chip
Why /tmp paths work even though they're outside the canvas's allowed
roots: the runtime tool reads the bytes locally and re-uploads through
/chat/uploads, which lands the file under /workspace (an allowed root).
The agent can specify any readable path.
Does NOT include: agent → agent file transfer. Different design problem
(cross-workspace download auth: peer would need a credential to call
sender's /chat/download). Tracked as a follow-up under task #114.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[Molecule-Platform-Evolvement-Manager]
## What was breaking
All three staging e2e workflows' "Teardown safety net" steps
filtered candidate slugs by `f'e2e-...-{today}-...'` where `today`
was computed at safety-net-step time via `datetime.date.today()`.
When a run crossed midnight UTC (start before 00:00, end after),
`today` became the NEXT day, but the slug it created carried the
PRIOR day's date. The filter never matched its own slug → leak.
## Today's incident
E2E Staging Canvas run [24970092066](https://github.com/Molecule-AI/molecule-core/actions/runs/24970092066):
- started 2026-04-26 23:45:59Z
- created slug `e2e-canvas-20260426-1u8nz3` at 23:59Z
- ended 2026-04-27 00:12:47Z (failure)
- safety-net step ran with `today=20260427`
- filter `e2e-canvas-20260427-` did not match `...20260426-1u8nz3`
- tenant + child workspace EC2 both stayed up
Confirmed via CP staging logs: no DELETE for `1u8nz3` ever issued.
The Playwright globalTeardown didn't fire (test crashed mid-run);
the workflow safety-net was the last line and it missed.
## Fix
All three workflows now sweep BOTH today AND yesterday's UTC dates,
so a run that crosses midnight still matches its own slug:
```python
import datetime

today = datetime.date.today()
yesterday = today - datetime.timedelta(days=1)
dates = (today.strftime('%Y%m%d'), yesterday.strftime('%Y%m%d'))
prefixes = tuple(f'e2e-canvas-{d}-' for d in dates)  # (canvas variant)
```
Per-run-id scoping (saas + canary) is preserved — the prior-day
prefix still includes the run_id, so cross-midnight runs only sweep
their own slugs, not other in-flight runs from yesterday.
## Why two-day window vs. arbitrary lookback
A run can't legitimately last more than 24h on GitHub-hosted
runners (workflow `timeout-minutes` caps; canary=25, e2e-saas=45,
canvas=30). Two-day window is enough to cover any cross-midnight
run without widening the cross-run-cleanup blast radius further.
The `sweep-stale-e2e-orgs.yml` cron (with its 120-min age threshold)
remains the catch-all for anything older that drifts through.
## Test plan
- [x] Manual logic simulation: post-midnight slug matches yesterday's
prefix; same-day still matches; 2-days-ago does NOT match;
production tenant never matches
- [x] All three workflow YAMLs syntactically valid
- [ ] Next cross-midnight run cleans up its own slug
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
[Molecule-Platform-Evolvement-Manager]
Addresses github-code-quality finding on PR #2064:
> Comparison between inconvertible types
> Variable 'info' cannot be of type null, but it is compared to
> an expression of type null.
By line 75, `info` has been narrowed to non-null via the
`if (!info) return null;` guard at line 56 — so `open={info !== null}`
always evaluates to `true`. Switch to JSX shorthand `open` for
clarity and to silence the static check.
Behaviorally identical; the modal still opens whenever the parent
renders this component (which only happens with non-null info).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>