feat(canvas+platform): chat attachments, model selection, deploy/delete UX

Session's accumulated UX work across frontend and platform. Reviewable
in four logical sections — diff is large but internally cohesive
(each section fixes a gap the next one depends on).

## Chat attachments — user ↔ agent file round trip

- New POST /workspaces/:id/chat/uploads (multipart, 50 MB total /
  25 MB per file, UUID-prefixed storage under
  /workspace/.molecule/chat-uploads/).
- New GET /workspaces/:id/chat/download with RFC 6266 filename
  escaping and binary-safe io.CopyN streaming.
- Canvas: drag-and-drop onto chat pane, pending-file pills,
  per-message attachment chips with fetch+blob download (anchor
  navigation can't carry auth headers).
- A2A flow carries FileParts end-to-end; hermes template executor
  now consumes attachments via platform helpers.

## Platform attachment helpers (workspace/executor_helpers.py)

Every runtime's executor routes through the same helpers so future
runtimes inherit attachment awareness for free:
- extract_attached_files — resolve workspace:/file:///bare URIs,
  reject traversal, skip non-existent.
- build_user_content_with_files — manifest for non-image files,
  multi-modal list (text + image_url) for images. Respects
  MOLECULE_DISABLE_IMAGE_INLINING for providers whose vision
  adapter hangs on base64 payloads (MiniMax M2.7).
- collect_outbound_files — scans agent reply for /workspace/...
  paths, stages each into chat-uploads/ (download endpoint
  whitelist), emits as FileParts in the A2A response.
- ensure_workspace_writable — called at molecule-runtime startup
  so non-root agents can write /workspace without each template
  having to chmod in its Dockerfile.

Hermes template executor + langgraph (a2a_executor.py) + claude-code
(claude_sdk_executor.py) all adopt the helpers.
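As a rough sketch of what the outbound scan does (the real helper is collect_outbound_files in Python; this TypeScript version, including the regex and the trailing-punctuation trim, is an assumption about shape, not the exact implementation):

```typescript
// Matches absolute /workspace/... paths in the agent's reply text.
const WORKSPACE_PATH_RE = /\/workspace\/[\w./-]+/g;

function collectOutboundPaths(reply: string): string[] {
  const seen = new Set<string>(); // dedup: each file staged once
  for (const raw of reply.match(WORKSPACE_PATH_RE) ?? []) {
    // Sentence punctuation tends to glue onto the path.
    const p = raw.replace(/[.,]+$/, "");
    // Reject traversal segments rather than trying to normalise them.
    if (p.split("/").includes("..")) continue;
    seen.add(p);
  }
  return [...seen];
}
```

Each surviving path would then be staged into chat-uploads/ and emitted as a FilePart.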

## Model selection & related platform fixes

- PUT /workspaces/:id/model — was 404'ing, so canvas "Save"
  silently lost the model choice. Now stores the choice into
  workspace_secrets (MODEL_PROVIDER) and auto-restarts via
  RestartByID.
- applyRuntimeModelEnv falls back to envVars["MODEL_PROVIDER"]
  so Restart propagates the stored model to HERMES_DEFAULT_MODEL
  without needing the caller to rehydrate payload.Model.
- ConfigTab Tier dropdown now reads from workspaces row, not the
  (stale) config.yaml — fixes "badge shows T3, form shows T2".
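The applyRuntimeModelEnv fallback reduces to "explicit payload model wins, else the stored secret". Sketched here in TypeScript (the real code is Go inside workspace-server; the field names and shapes below are assumptions):

```typescript
interface RestartPayload {
  model?: string;
}

function applyRuntimeModelEnv(
  payload: RestartPayload,
  envVars: Record<string, string>,
): Record<string, string> {
  // Prefer the explicit payload model; fall back to the stored
  // MODEL_PROVIDER secret so a plain Restart still propagates it.
  const model = payload.model ?? envVars["MODEL_PROVIDER"];
  if (!model) return envVars;
  return { ...envVars, HERMES_DEFAULT_MODEL: model };
}
```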

## ChatTab & WebSocket UX fixes

- Send button no longer locks after a dropped TASK_COMPLETE —
  `sending` no longer initializes from data.currentTask.
- A2A POST timeout 15 s → 120 s. LLM turns routinely exceed 15 s;
  the previous default aborted fetches while the server was still
  replying, producing "agent may be unreachable" on success.
- socket.ts: disposed flag + reconnectTimer cancellation + handler
  detachment fix zombie-WebSocket in React StrictMode.
- Hermes Config tab: RUNTIMES_WITH_OWN_CONFIG drops 'hermes' —
  the adapter's purpose IS the form, so the banner was contradictory.
- workspace_provision.go auto-recovery: try <runtime>-default AND
  bare <runtime> for template path (hermes lives at the bare name).
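The zombie-WebSocket fix hinges on one invariant: after dispose, neither the live socket's handlers nor any queued reconnect may run. A stripped-down sketch of that pattern (not the actual socket.ts; the scheduler is injected so the invariant is easy to exercise without real timers or a WebSocket):

```typescript
interface Scheduler {
  set(fn: () => void, ms: number): number;
  clear(id: number): void;
}

class ReconnectLoop {
  private disposed = false;
  private reconnectTimer: number | null = null;
  connects = 0; // stands in for "opened a fresh WebSocket"

  constructor(private sched: Scheduler) {}

  private connect(): void {
    if (this.disposed) return; // guard even if a timer slipped through
    this.connects++;
  }

  scheduleReconnect(delayMs: number): void {
    if (this.disposed) return;
    this.reconnectTimer = this.sched.set(() => this.connect(), delayMs);
  }

  dispose(): void {
    this.disposed = true;
    if (this.reconnectTimer !== null) {
      this.sched.clear(this.reconnectTimer);
      this.reconnectTimer = null;
    }
  }
}
```

StrictMode mounts the effect twice; the first instance's dispose() must fully win before the second connects, otherwise both keep reconnecting.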

## Org deploy/delete animation (theme-ready CSS)

- styles/theme-tokens.css — design tokens (durations, easings,
  colors). Light theme overrides by setting only the deltas.
- styles/org-deploy.css — animation classes + keyframes, every
  value references a token. prefers-reduced-motion respected.
- Canvas projects node.draggable=false onto locked workspaces
  (deploying children AND actively-deleting ids) — RF's
  authoritative drag lock; useDragHandlers retains a belt-and-
  braces check.
- Organ cancel button (red pulse pill on root during deploy)
  cascades via existing DELETE /workspaces/:id?confirm=true.
- Auto fit-view after each arrival, debounced 500 ms so rapid
  sibling arrivals coalesce into one fit (previous per-event
  fit made the viewport lurch continuously).
- Auto-fit respects user-pan — onMoveEnd stamps a user-pan
  timestamp only when event !== null (ignores programmatic
  fitView) so auto-fits don't self-cancel.
- deletingIds store slice + useOrgDeployState merge gives the
  delete flow the same dim + non-draggable treatment as deploy.
- Platform-level classNames.ts shared by canvas-events +
  useCanvasViewport (DRY'd 3 copies of split/filter/join).
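The shared classNames.ts helpers amount to one split/filter/join over React Flow's space-separated className string. Roughly (a sketch of the shape, not the exact file):

```typescript
function appendClass(current: string | undefined, cls: string): string {
  const parts = (current ?? "").split(" ").filter(Boolean);
  if (!parts.includes(cls)) parts.push(cls); // idempotent append
  return parts.join(" ");
}

function removeClass(current: string | undefined, cls: string): string | undefined {
  const parts = (current ?? "").split(" ").filter((p) => p && p !== cls);
  // Collapse to undefined so React Flow sees "no className" rather
  // than an empty string.
  return parts.length > 0 ? parts.join(" ") : undefined;
}
```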

## Server payload change

- org_import.go WORKSPACE_PROVISIONING broadcast now includes
  parent_id + parent-RELATIVE x/y (slotX/slotY) so the canvas
  renders the child at the right parent-nested slot without doing
  any absolute-position walk. createWorkspaceTree signature gains
  relX, relY alongside absX, absY; both call sites updated.
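The parent-relative slot math is a plain subtraction. Sketched below in TypeScript (names like `toParentRelative` and `Slot` are illustrative; org_import.go does the equivalent in Go when building the broadcast):

```typescript
interface Slot {
  x: number;
  y: number;
}

function toParentRelative(abs: Slot, parentAbs: Slot | null): Slot {
  if (!parentAbs) return abs; // roots keep absolute coordinates
  return { x: abs.x - parentAbs.x, y: abs.y - parentAbs.y };
}
```

With both relX/relY and absX/absY in the payload, the canvas can place a nested child directly via React Flow's parentId positioning, with no ancestor walk.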

## Tests

- workspace/tests/test_executor_helpers.py — 11 new cases
  covering URI resolution (including traversal rejection),
  attached-file extraction (both Part shapes), manifest-only
  vs multi-modal content, large-image skip, outbound staging,
  dedup, and ensure_workspace_writable (chmod 777 + non-root
  tolerance).
- workspace-server chat_files_test.go — upload validation,
  Content-Disposition escaping, filename sanitisation.
- workspace-server secrets_test.go — SetModel upsert, empty
  clears, invalid UUID rejection.
- tests/e2e/test_chat_attachments_e2e.sh — round-trip against
  a live hermes workspace.
- tests/e2e/test_chat_attachments_multiruntime_e2e.sh — static
  plumbing check + round-trip across hermes/langgraph/claude-code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Hongming Wang 2026-04-24 13:27:51 -07:00
parent 689578149e
commit 94d9331c76
37 changed files with 3580 additions and 114 deletions


@ -1,5 +1,9 @@
@import "xterm/css/xterm.css";
/* Theme tokens MUST load before any feature stylesheet that
references them so custom properties are in scope. */
@import "../styles/theme-tokens.css";
@import "../styles/settings-panel.css";
@import "../styles/org-deploy.css";
@tailwind base;
@tailwind components;
@ -38,7 +42,20 @@ body {
}
.react-flow__node {
/* Transform transition drives the "spawn from parent" motion:
org-deploy sets the node's initial position to the parent's
absolute coords, then repositions to the real slot, and this
transition interpolates the translate() in between.
Non-deploy workspace moves (drag, nest) get the same smoothing
for free. */
transition:
box-shadow var(--mol-duration-fast) ease,
transform var(--mol-duration-spawn) var(--mol-easing-bounce-out);
}
/* Drag events must feel instant — React Flow adds this class
for the lifetime of the gesture. */
.react-flow__node.dragging {
transition: box-shadow var(--mol-duration-fast) ease;
}
/* Scrollbar styling */


@ -58,14 +58,95 @@ export function Canvas() {
}
function CanvasInner() {
const rawNodes = useCanvasStore((s) => s.nodes);
const edges = useCanvasStore((s) => s.edges);
const a2aEdges = useCanvasStore((s) => s.a2aEdges);
const showA2AEdges = useCanvasStore((s) => s.showA2AEdges);
const deletingIds = useCanvasStore((s) => s.deletingIds);
const allEdges = useMemo(
() => (showA2AEdges ? [...edges, ...a2aEdges] : edges),
[edges, a2aEdges, showA2AEdges],
);
// Drag-lock during a system-owned operation (deploy OR delete).
// React Flow respects Node.draggable, which stops the gesture
// before it starts — preventDefault() on the drag-start callback
// isn't authoritative in v12. We project `draggable: false` onto
// each locked node before handing the array to ReactFlow; the
// drag-start handler in useDragHandlers remains as a belt-and-
// braces check.
//
// Perf: short-circuit when nothing is provisioning so the memo
// passes rawNodes through unchanged (identity-stable → RF
// reconciles nothing). When a deploy IS active, build an O(n)
// root index once and re-use it. Critically, do NOT spread every
// node — only mutate the locked ones — so unmodified nodes keep
// their object identity and RF's per-node memo short-circuits.
const nodes = useMemo(() => {
const anyProvisioning = rawNodes.some((n) => n.data.status === "provisioning");
const anyDeleting = deletingIds.size > 0;
if (!anyProvisioning && !anyDeleting) return rawNodes;
const byId = new Map<string, typeof rawNodes[number]>();
for (const n of rawNodes) byId.set(n.id, n);
const rootOf = new Map<string, string>();
const resolveRoot = (id: string): string => {
// Iterative walk guards against a pathological cycle (hostile
// data) — recursion would hit the stack limit on a deep tree.
const visited = new Set<string>();
let cursor: string | null = id;
while (cursor) {
if (visited.has(cursor)) break;
visited.add(cursor);
const cached = rootOf.get(cursor);
if (cached) {
for (const seenId of visited) rootOf.set(seenId, cached);
return cached;
}
const n = byId.get(cursor);
if (!n) break;
if (!n.data.parentId) {
for (const seenId of visited) rootOf.set(seenId, cursor);
return cursor;
}
cursor = n.data.parentId;
}
return id;
};
const provisioningByRoot = new Map<string, number>();
for (const n of rawNodes) {
if (n.data.status !== "provisioning") continue;
const rootId = resolveRoot(n.id);
provisioningByRoot.set(rootId, (provisioningByRoot.get(rootId) ?? 0) + 1);
}
let touched = false;
const next = rawNodes.map((n) => {
const rootId = resolveRoot(n.id);
const deployLocked = n.id !== rootId && (provisioningByRoot.get(rootId) ?? 0) > 0;
// Delete-locked: nothing in a subtree whose DELETE is in
// flight should be draggable, INCLUDING the root of that
// subtree (unlike deploy, there's no cancel — the delete
// is irrevocable at this point).
const deleteLocked = deletingIds.has(n.id);
const shouldLock = deployLocked || deleteLocked;
if (shouldLock && n.draggable !== false) {
touched = true;
return { ...n, draggable: false };
}
if (!shouldLock && n.draggable === false) {
// Node was locked in a prior render; deploy cancelled /
// completed, or delete failed and was reverted. Restore
// default dragability.
touched = true;
const { draggable: _d, ...rest } = n;
void _d;
return rest as typeof n;
}
return n; // identity-preserved
});
return touched ? next : rawNodes;
}, [rawNodes, deletingIds]);
const onNodesChange = useCanvasStore((s) => s.onNodesChange);
const selectNode = useCanvasStore((s) => s.selectNode);
const selectedNodeId = useCanvasStore((s) => s.selectedNodeId);
@ -96,10 +177,36 @@ function CanvasInner() {
if (!pendingDelete) return;
const { id } = pendingDelete;
setPendingDelete(null);
// Compute the full subtree and mark it as "deleting" so every
// node in the chain renders dim + non-draggable during the
// network round-trip + the server-side cascade. Matches the
// deploy-lock UX: once a system-initiated operation owns this
// subtree, the user shouldn't be able to move its pieces
// around until it resolves.
const state = useCanvasStore.getState();
const subtree = new Set<string>();
const stack = [id];
while (stack.length) {
const nid = stack.pop()!;
subtree.add(nid);
for (const n of state.nodes) {
if (n.data.parentId === nid) stack.push(n.id);
}
}
state.beginDelete(subtree);
try {
await api.del(`/workspaces/${id}?confirm=true`);
removeNode(id);
// Server-side cascade will emit WORKSPACE_REMOVED per node;
// handleCanvasEvent drops each from the store. Clear the
// deleting set in one shot once the DELETE resolves so any
// node that lags the WS (or is preserved locally, e.g. an
// external workspace) doesn't stay dimmed forever.
state.endDelete(subtree);
} catch (e) {
// Network or server error — restore the subtree to normal
// interaction and surface the error.
state.endDelete(subtree);
showToast(e instanceof Error ? e.message : "Delete failed", "error");
}
}, [pendingDelete, setPendingDelete, removeNode]);


@ -114,16 +114,32 @@ export function OrgTemplatesSection() {
setError(null);
try {
await importOrgTemplate(org.dir);
// Hydrate is the safety net for the "WS is offline" case —
// without live events the canvas stays empty. But calling it
// immediately wipes the org-deploy animation (hydrate rebuilds
// the node array from scratch, dropping the spawn / shimmer
// classes and position tweens). So:
// 1. If the number of nodes on the canvas already matches
// (or exceeds) the template's workspace count, WS
// delivered everything — skip hydrate.
// 2. Otherwise, wait a short window to let any in-flight WS
// events land, then hydrate only if still behind.
const expectedCount = org.workspaces;
// Nodes transition through WORKSPACE_REMOVED which physically
// drops them from the store — there is no "removed" status in
// WorkspaceNodeData — so a simple length check is enough here.
const hasAll = () => useCanvasStore.getState().nodes.length >= expectedCount;
if (!hasAll()) {
await new Promise((r) => setTimeout(r, 1500));
}
if (!hasAll()) {
try {
const workspaces = await api.get<WorkspaceData[]>("/workspaces");
useCanvasStore.getState().hydrate(workspaces);
} catch {
// WS (if alive) or the next health-check cycle will
// eventually pick the new workspaces up.
}
}
showToast(`Imported "${org.name || org.dir}" (${org.workspaces} workspaces)`, "success");
} catch (e) {


@ -6,6 +6,8 @@ import { useCanvasStore, type WorkspaceNodeData } from "@/store/canvas";
import { showToast } from "@/components/Toaster";
import { Tooltip } from "@/components/Tooltip";
import { STATUS_CONFIG, TIER_CONFIG } from "@/lib/design-tokens";
import { useOrgDeployState } from "@/components/canvas/useOrgDeployState";
import { OrgCancelButton } from "@/components/canvas/OrgCancelButton";
/** Descendant count for the "N sub" badge — children are first-class nodes
* rendered as full cards inside this one via React Flow's native parentId,
@ -35,6 +37,10 @@ function EjectIcon(props: React.SVGProps<SVGSVGElement>) {
export function WorkspaceNode({ id, data }: NodeProps<Node<WorkspaceNodeData>>) {
const statusCfg = STATUS_CONFIG[data.status] || STATUS_CONFIG.offline;
const tierCfg = TIER_CONFIG[data.tier] || { label: `T${data.tier}`, color: "text-zinc-500 bg-zinc-800" };
// Org-deploy context — four derived flags off one store subscription.
// Drives the shimmer while provisioning, the dimmed/non-draggable
// treatment on locked descendants, and the Cancel pill on the root.
const deploy = useOrgDeployState(id);
const selectedNodeId = useCanvasStore((s) => s.selectedNodeId);
const selectNode = useCanvasStore((s) => s.selectNode);
const openContextMenu = useCanvasStore((s) => s.openContextMenu);
@ -138,8 +144,21 @@ export function WorkspaceNode({ id, data }: NodeProps<Node<WorkspaceNodeData>>)
}
backdrop-blur-sm
focus:outline-none focus-visible:ring-2 focus-visible:ring-blue-500/70 focus-visible:ring-offset-1 focus-visible:ring-offset-zinc-950
${deploy.isActivelyProvisioning ? "mol-deploy-shimmer" : ""}
${deploy.isLockedChild ? "mol-deploy-locked" : ""}
`}
>
{/* Cancel-deployment pill rendered on the root of a deploying
org only. Positioned absolute inside the card so it moves
with drag; class="nodrag" on the button stops React Flow
from treating clicks as a drag start. */}
{deploy.isDeployingRoot && (
<OrgCancelButton
rootId={id}
rootName={data.name}
workspaceCount={deploy.descendantProvisioningCount}
/>
)}
{/* Status gradient bar at top */}
<div className={`absolute inset-x-0 top-0 h-8 bg-gradient-to-b ${statusCfg.bar} pointer-events-none`} />


@ -0,0 +1,165 @@
"use client";
import { useState } from "react";
import { api } from "@/lib/api";
import { useCanvasStore } from "@/store/canvas";
import { showToast } from "@/components/Toaster";
interface Props {
/** Root workspace of the org being deployed. The cancel action
* cascades delete through workspace-server's existing recursive
* delete handler, so we only need the root id. */
rootId: string;
rootName: string;
/** Count rendered in the pill label; updated live as children
* come online (the useOrgDeployState hook recomputes on every
* status change). */
workspaceCount: number;
}
/**
* Cancel-deployment pill attached to the root of a deploying org.
* One click → confirm dialog → DELETE /workspaces/:rootId?confirm=true,
* which cascades through every descendant server-side.
*
* Rendered inside the root's WorkspaceNode card via an absolute-
* positioned overlay so it sits visually ON the card and moves with
* drag. `className="nodrag"` stops React Flow from interpreting
* clicks here as the start of a drag gesture.
*
* Deliberately uses only `.mol-deploy-cancel*` classes for styling —
* every color / easing comes from theme-tokens.css, so a future
* light-theme (or tenant-branded theme) inherits automatically.
*/
export function OrgCancelButton({ rootId, rootName, workspaceCount }: Props) {
const [confirming, setConfirming] = useState(false);
const [submitting, setSubmitting] = useState(false);
const handleCancel = async () => {
setSubmitting(true);
// Populate deletingIds with the subtree so every descendant
// (and the root) locks into the dim + non-draggable state for
// the duration of the network round-trip + server cascade —
// same treatment the regular delete gives. Otherwise the org
// looks interactive for the several seconds between click and
// the first WORKSPACE_REMOVED event.
const preState = useCanvasStore.getState();
const subtreeIds = new Set<string>();
const walkStack = [rootId];
while (walkStack.length) {
const nid = walkStack.pop()!;
subtreeIds.add(nid);
for (const n of preState.nodes) {
if (n.data.parentId === nid) walkStack.push(n.id);
}
}
preState.beginDelete(subtreeIds);
try {
await api.del<{ status: string }>(
`/workspaces/${rootId}?confirm=true`,
);
showToast(`Cancelled deployment of "${rootName}"`, "success");
// Optimistic local removal — workspace-server broadcasts
// WORKSPACE_REMOVED per node but the WS may lag; strip the
// subtree now so the user sees immediate feedback. Re-read
// the store AFTER the await: children may have landed (or
// already been removed by WS events) during the network
// round-trip. If the WS_REMOVED handler already dropped the
// root during the network call, bail out — the subtree walk
// would miss any now-orphaned descendants (handleCanvasEvent
// reparents children of a removed node upward, so they no
// longer share the original root's id as parentId).
const postDeleteState = useCanvasStore.getState();
if (!postDeleteState.nodes.some((n) => n.id === rootId)) {
return;
}
const subtree = new Set<string>();
const stack = [rootId];
while (stack.length) {
const id = stack.pop()!;
subtree.add(id);
for (const n of postDeleteState.nodes) {
if (n.data.parentId === id) stack.push(n.id);
}
}
useCanvasStore.setState({
nodes: postDeleteState.nodes.filter((n) => !subtree.has(n.id)),
edges: postDeleteState.edges.filter(
(e) => !subtree.has(e.source) && !subtree.has(e.target),
),
});
} catch (e) {
// Undo the lock so the user can try again / interact with the
// still-deploying subtree.
useCanvasStore.getState().endDelete(subtreeIds);
showToast(
e instanceof Error ? `Cancel failed: ${e.message}` : "Cancel failed",
"error",
);
} finally {
// Clear the lock unconditionally. On the success path the subtree
// was already stripped by the optimistic removal above (stragglers
// fall to WORKSPACE_REMOVED events, whose handler is a no-op on
// already-missing ids), but the ids still have to leave
// deletingIds. On the error path the catch above already called
// endDelete; calling it again here is a harmless no-op.
useCanvasStore.getState().endDelete(subtreeIds);
setSubmitting(false);
setConfirming(false);
}
};
if (confirming) {
return (
<div
className="nodrag absolute -top-10 right-0 z-20 flex items-center gap-1.5 rounded-lg bg-zinc-900/95 px-2 py-1 shadow-lg border border-red-800/60"
onClick={(e) => e.stopPropagation()}
>
<span className="text-[10px] text-zinc-300">
Delete {workspaceCount} workspace{workspaceCount === 1 ? "" : "s"}?
</span>
<button
type="button"
onClick={handleCancel}
disabled={submitting}
className="mol-deploy-cancel px-2 py-0.5 rounded text-[10px] font-semibold"
>
{submitting ? "Deleting…" : "Yes"}
</button>
<button
type="button"
onClick={() => setConfirming(false)}
disabled={submitting}
className="px-2 py-0.5 rounded bg-zinc-700/80 hover:bg-zinc-600 text-[10px] text-zinc-200"
>
No
</button>
</div>
);
}
return (
<button
type="button"
onClick={(e) => {
// Stop the click from bubbling to React Flow (selects the
// node) — the Cancel pill is a UI surface, not a node
// activation.
e.stopPropagation();
setConfirming(true);
}}
className="nodrag mol-deploy-cancel mol-deploy-cancel-pulse absolute -top-7 right-1 z-20 flex items-center gap-1 rounded-full px-2.5 py-0.5 text-[10px] font-semibold shadow-md"
aria-label={`Cancel deployment of ${rootName}`}
>
<svg width="10" height="10" viewBox="0 0 16 16" aria-hidden="true">
<path
d="M4 4l8 8M12 4l-8 8"
stroke="currentColor"
strokeWidth="2"
strokeLinecap="round"
/>
</svg>
<span>Cancel ({workspaceCount})</span>
</button>
);
}


@ -3,6 +3,7 @@
import { useCallback, useEffect, useRef } from "react";
import { useReactFlow } from "@xyflow/react";
import { useCanvasStore } from "@/store/canvas";
import { appendClass, removeClass } from "@/store/classNames";
import {
CHILD_DEFAULT_HEIGHT,
CHILD_DEFAULT_WIDTH,
@ -30,6 +31,13 @@ export function useCanvasViewport() {
// render so we can detect the boundary when the last one finishes
// and auto-fit the viewport around the whole tree.
const hadProvisioningRef = useRef(false);
// Respect-user-pan gate for the deploy-time auto-fit: whenever the
// user moves the canvas (onMoveEnd stamps userPannedAtRef), we
// compare against the last auto-fit timestamp; if the user moved
// AFTER the last auto-fit, the auto-fit handler bails out for the
// rest of this deploy cycle.
const userPannedAtRef = useRef<number | null>(null);
const lastAutoFitAtRef = useRef(0);
useEffect(() => {
return () => {
@ -55,6 +63,41 @@ export function useCanvasViewport() {
hadProvisioningRef.current = hasProvisioning;
if (wasProvisioning && !hasProvisioning && nodeCount > 0) {
// Root-complete moment — every root that has children just
// finished deploying. Pop + glow once (mol-deploy-root-complete)
// then auto-fit the viewport around the whole org. Leaf-only
// roots (single workspaces with no children) are skipped so the
// effect reads as "your org landed" not "random card flickered".
const state = useCanvasStore.getState();
const rootsWithChildren = new Set<string>();
for (const n of state.nodes) {
if (n.data.parentId) continue;
if (state.nodes.some((c) => c.data.parentId === n.id)) {
rootsWithChildren.add(n.id);
}
}
if (rootsWithChildren.size > 0) {
useCanvasStore.setState({
nodes: state.nodes.map((n) =>
rootsWithChildren.has(n.id)
? { ...n, className: appendClass(n.className, "mol-deploy-root-complete") }
: n,
),
});
// Strip the one-shot class after the keyframe ends so a later
// deploy on the same node can fire it again.
window.setTimeout(() => {
const s = useCanvasStore.getState();
useCanvasStore.setState({
nodes: s.nodes.map((n) =>
rootsWithChildren.has(n.id)
? { ...n, className: removeClass(n.className, "mol-deploy-root-complete") }
: n,
),
});
}, 800);
}
clearTimeout(autoFitTimerRef.current);
// 1200ms settle delay: lets React Flow's DOM measurement pass
// resize newly-online parents before we compute bounds.
@ -63,12 +106,16 @@ export function useCanvasViewport() {
autoFitTimerRef.current = setTimeout(() => {
fitView({
duration: 1200,
// Match the deploy-time fit padding (0.45) so end-state
// and in-flight state use the same framing — otherwise
// the final zoom-out "jumps" relative to the intermediate
// fits and looks like a mis-layout.
padding: 0.45,
// Cap zoom-in: a small tree (2-3 nodes) would otherwise end
// up at the 2x maxZoom, visually implying "something is
// wrong". 0.65 reads like "here's your whole org" even when
// the tree is small — matches deploy-time cap.
maxZoom: 0.65,
// Cap zoom-out: fitView would fall back to the component's
// minZoom=0.1 on a sparse/outlier layout, leaving the user
// staring at a postage-stamp canvas. 0.25 is the floor.
@ -92,6 +139,82 @@ export function useCanvasViewport() {
return () => window.removeEventListener("molecule:pan-to-node", handler);
}, [fitView]);
// Auto pan+zoom to the whole deploying org after each child
// arrival — DEBOUNCED. Firing fitView on every event with a
// 600ms animation meant rapid sibling arrivals (server paces 2s
// apart, HMR bursts can land faster) made the viewport lurch
// continuously, which the user read as "parent flashing around".
// We now wait until the arrivals GO QUIET for 500ms, then run
// exactly one fit. The rootId we captured on the most recent
// event drives the fit bounds. Respect-user-pan still short-
// circuits: if the user moved after our last auto-fit, we never
// fit again this deploy.
const pendingFitRootRef = useRef<string | null>(null);
useEffect(() => {
const runFit = () => {
const rootCandidate = pendingFitRootRef.current;
pendingFitRootRef.current = null;
if (!rootCandidate) return;
if (
userPannedAtRef.current !== null &&
userPannedAtRef.current > lastAutoFitAtRef.current
) {
return;
}
const state = useCanvasStore.getState();
// Climb to the true root — the event's rootId is the just-
// landed child's direct parent, which may itself be nested.
let topId = rootCandidate;
let cursor = state.nodes.find((n) => n.id === topId);
while (cursor?.data.parentId) {
const up = state.nodes.find((n) => n.id === cursor!.data.parentId);
if (!up) break;
cursor = up;
topId = up.id;
}
const subtree: string[] = [];
const stack = [topId];
while (stack.length) {
const id = stack.pop()!;
subtree.push(id);
for (const n of state.nodes) {
if (n.data.parentId === id) stack.push(n.id);
}
}
if (subtree.length === 0) return;
fitView({
nodes: subtree.map((id) => ({ id })),
duration: 600,
// Generous padding so the right-hand Communications panel,
// bottom-left Legend, and bottom-right "New Workspace"
// button don't cover the outer cards. React Flow padding
// is a fraction of viewport dims, so 0.45 ≈ ~430px of
// margin on a 960-wide canvas — enough clearance for the
// two side panels (~300px + ~280px).
padding: 0.45,
// Lower maxZoom so small orgs (2-3 cards) still zoom out
// enough to show the parent frame + children clearly with
// the padded margins. 0.65 reads as "here's the whole org"
// without getting dragged to the maxZoom by fitView's
// "fill the viewport" default.
maxZoom: 0.65,
minZoom: 0.25,
});
lastAutoFitAtRef.current = Date.now();
};
const handler = (e: Event) => {
const { rootId } = (e as CustomEvent<{ rootId: string }>).detail;
// Keep the most recently-requested root — if the user triggers
// imports on two different orgs back-to-back, the later one
// wins the viewport, which matches user intent.
pendingFitRootRef.current = rootId;
clearTimeout(autoFitTimerRef.current);
autoFitTimerRef.current = setTimeout(runFit, 500);
};
window.addEventListener("molecule:fit-deploying-org", handler);
return () => window.removeEventListener("molecule:fit-deploying-org", handler);
}, [fitView]);
// Zoom to a team: fit the parent + its direct children in view.
useEffect(() => {
const handler = (e: Event) => {
@ -128,7 +251,16 @@ export function useCanvasViewport() {
}, [fitBounds]);
const onMoveEnd = useCallback(
(event: unknown, vp: { x: number; y: number; zoom: number }) => {
// Stamp user-pan timestamp only when the move was actually
// initiated by the user (mouse / trackpad / keyboard). React
// Flow also fires onMoveEnd for programmatic fitView() calls
// — `event` is null in that case, which would otherwise
// defeat the respect-user-pan gate by making every auto-fit
// look like a user move.
if (event !== null) {
userPannedAtRef.current = Date.now();
}
clearTimeout(saveTimerRef.current);
saveTimerRef.current = setTimeout(() => {
saveViewport(vp.x, vp.y, vp.zoom);


@ -113,6 +113,18 @@ export function useDragHandlers(): DragHandlers {
const onNodeDragStart: OnNodeDrag<WorkspaceNode> = useCallback(
(event, node) => {
// Belt-and-braces drag-lock: the primary mechanism is the
// `draggable: false` projection in Canvas.tsx — React Flow
// won't invoke this callback for locked nodes. But a future
// change to the projection that forgets a locked subtree
// would silently allow dragging, and locked drags mid-deploy
// corrupt the spawn animation. Fall through to a state-based
// check here so the invariant stays enforced in both places.
if (node.draggable === false) {
dragStartStateRef.current = null;
return;
}
dragModifiersRef.current = {
alt: event.altKey,
meta: event.metaKey || event.ctrlKey,


@ -0,0 +1,152 @@
"use client";
import { useMemo } from "react";
import { useCanvasStore } from "@/store/canvas";
/**
* Org-deploy state for a single workspace node. Computed from the
* current canvas store snapshot — no per-org status field on the
* backend is required (a root "is deploying" iff any descendant in
* its subtree still reports status === "provisioning").
*
* Performance note: the first version of this hook walked the entire
* nodes array per node render — O(n²) for a 50-node org. The current
* implementation computes ONE map of derived state for the whole
* canvas per nodes-array change, then each call site looks up its
* own id. The map is built inside useMemo against a cheap projection
* (id + parentId + status tuples), so the walk itself stays O(n); see
* useDeployMap below for why the subscription is the raw nodes array
* rather than a narrowed useShallow selector.
*/
export interface OrgDeployState {
isActivelyProvisioning: boolean;
isDeployingRoot: boolean;
isLockedChild: boolean;
descendantProvisioningCount: number;
}
const EMPTY: OrgDeployState = {
isActivelyProvisioning: false,
isDeployingRoot: false,
isLockedChild: false,
descendantProvisioningCount: 0,
};
/** Projection used to drive the deploy-state computation. Shallow-
* compared so re-renders only happen when one of these fields
* actually changes across any node. */
interface NodeProjection {
id: string;
parentId: string | null;
status: string;
}
function buildDeployMap(
projections: NodeProjection[],
deletingIds: ReadonlySet<string>,
): Map<string, OrgDeployState> {
const byId = new Map<string, NodeProjection>();
const childrenBy = new Map<string, string[]>();
for (const p of projections) {
byId.set(p.id, p);
if (p.parentId) {
const arr = childrenBy.get(p.parentId) ?? [];
arr.push(p.id);
childrenBy.set(p.parentId, arr);
}
}
// Walk once from each node up to its root, memoising the root id.
// `rootOf.get(id)` short-circuits further walks on the same chain.
const rootOf = new Map<string, string>();
const findRoot = (id: string): string => {
const cached = rootOf.get(id);
if (cached) return cached;
let cursor: NodeProjection | undefined = byId.get(id);
let rootId = id;
while (cursor && cursor.parentId) {
const parent = byId.get(cursor.parentId);
if (!parent) break;
cursor = parent;
rootId = parent.id;
const alreadyKnown = rootOf.get(rootId);
if (alreadyKnown) {
rootId = alreadyKnown;
break;
}
}
rootOf.set(id, rootId);
return rootId;
};
// Count provisioning descendants per node. Also walk once per root
// using an iterative DFS so we don't stack-overflow on deep trees.
const countProvisioning = (rootId: string): number => {
let count = 0;
const stack = [rootId];
while (stack.length) {
const id = stack.pop()!;
const node = byId.get(id);
if (!node) continue;
if (node.status === "provisioning") count++;
const kids = childrenBy.get(id);
if (kids) stack.push(...kids);
}
return count;
};
// Per-root cache of subtree count so every descendant resolves in O(1).
const rootCount = new Map<string, number>();
const out = new Map<string, OrgDeployState>();
for (const p of projections) {
const rootId = findRoot(p.id);
let provCount = rootCount.get(rootId);
if (provCount === undefined) {
provCount = countProvisioning(rootId);
rootCount.set(rootId, provCount);
}
const rootIsDeploying = provCount > 0;
// A node being deleted gets the same visual + interaction lock
// as a deploying child. "The system owns this node right now,
// don't touch it" is the shared semantic — the user only cares
// that the card is dim and won't drag; they don't need to know
// whether it's coming up or going down.
const deleting = deletingIds.has(p.id);
out.set(p.id, {
isActivelyProvisioning: p.status === "provisioning",
isDeployingRoot: p.id === rootId && rootIsDeploying,
isLockedChild: deleting || (p.id !== rootId && rootIsDeploying),
descendantProvisioningCount:
p.id === rootId ? provCount : 0, // only roots display the count
});
}
return out;
}
/** Store-wide derived map. Recomputed whenever the `nodes` array
 * reference changes, which is on every store mutation that touches
 * nodes, including pure position tweens. The map build is O(n), so
 * a 50-node canvas costs ~50μs per tween frame; that's cheap enough
 * not to need a projection layer. (An earlier attempt to narrow the
 * subscription via `useShallow((s) => s.nodes.map(...))` triggered
 * React 18's "getSnapshot should be cached" loop: the projection
 * creates fresh object references on each call, so shallow equality
 * always sees "changed", which re-renders, which re-runs the
 * selector, ad infinitum.) */
function useDeployMap(): Map<string, OrgDeployState> {
const nodes = useCanvasStore((s) => s.nodes);
const deletingIds = useCanvasStore((s) => s.deletingIds);
return useMemo(() => {
const projections = nodes.map((n) => ({
id: n.id,
parentId: n.data.parentId,
status: n.data.status,
}));
return buildDeployMap(projections, deletingIds);
}, [nodes, deletingIds]);
}
export function useOrgDeployState(nodeId: string): OrgDeployState {
const map = useDeployMap();
return map.get(nodeId) ?? EMPTY;
}
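The root-walk plus subtree-count semantics above can be exercised in isolation. A minimal standalone sketch (simplified and un-memoised; the real `buildDeployMap` adds the `rootOf`/`rootCount` caches and the `deletingIds` lock, so this is an approximation, not the shipped implementation):

```typescript
interface Projection { id: string; parentId: string | null; status: string; }

// Un-memoised reference version: walk up for the root, then count
// "provisioning" nodes that share that root. Same semantics as
// buildDeployMap minus the caches and deletingIds.
function deployStateFor(id: string, nodes: Projection[]) {
  const byId = new Map(nodes.map((n): [string, Projection] => [n.id, n]));
  const rootOf = (start: string): string => {
    let cursor = start;
    while (byId.get(cursor)?.parentId) cursor = byId.get(cursor)!.parentId!;
    return cursor;
  };
  const rootId = rootOf(id);
  const provCount = nodes.filter(
    (n) => rootOf(n.id) === rootId && n.status === "provisioning",
  ).length;
  return {
    isActivelyProvisioning: byId.get(id)?.status === "provisioning",
    isDeployingRoot: id === rootId && provCount > 0,
    isLockedChild: id !== rootId && provCount > 0,
    descendantProvisioningCount: id === rootId ? provCount : 0,
  };
}

const tree: Projection[] = [
  { id: "root", parentId: null, status: "online" },
  { id: "child", parentId: "root", status: "provisioning" },
];
deployStateFor("root", tree);  // isDeployingRoot: true, count: 1
deployStateFor("child", tree); // isLockedChild: true
```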

View File

@ -7,8 +7,10 @@ import { api } from "@/lib/api";
import { useCanvasStore, type WorkspaceNodeData } from "@/store/canvas";
import { WS_URL } from "@/store/socket";
import { closeWebSocketGracefully } from "@/lib/ws-close";
import { type ChatMessage, createMessage, appendMessageDeduped } from "./chat/types";
import { extractResponseText, extractRequestText } from "./chat/message-parser";
import { type ChatMessage, type ChatAttachment, createMessage, appendMessageDeduped } from "./chat/types";
import { uploadChatFiles, downloadChatFile } from "./chat/uploads";
import { AttachmentChip, PendingAttachmentPill } from "./chat/AttachmentViews";
import { extractResponseText, extractRequestText, extractFilesFromTask } from "./chat/message-parser";
import { AgentCommsPanel } from "./chat/AgentCommsPanel";
import { runtimeDisplayName } from "@/lib/runtime-names";
import { ConfirmDialog } from "@/components/ConfirmDialog";
@ -21,10 +23,18 @@ interface Props {
type ChatSubTab = "my-chat" | "agent-comms";
// A2A response shape (subset). The full schema is in @a2a-js/sdk but we only
// need parts/artifacts text extraction for the synchronous fallback path.
// need parts/artifacts text + file extraction for the synchronous fallback.
interface A2AFileRef {
name?: string;
mimeType?: string;
uri?: string;
bytes?: string;
size?: number;
}
interface A2APart {
kind: string;
text: string;
text?: string;
file?: A2AFileRef;
}
interface A2AResponse {
result?: {
@ -39,19 +49,25 @@ function extractReplyText(resp: A2AResponse): string {
const result = resp?.result;
if (result?.parts) {
for (const p of result.parts) {
if (p.kind === "text") return p.text;
if (p.kind === "text") return p.text ?? "";
}
}
if (result?.artifacts) {
for (const a of result.artifacts) {
for (const p of a.parts || []) {
if (p.kind === "text") return p.text;
if (p.kind === "text") return p.text ?? "";
}
}
}
return "";
}
// Agent-returned files live on the same response shape as text.
// Parsing is delegated to extractFilesFromTask in message-parser.ts,
// which also walks status.message.parts (a path ChatTab's legacy
// text extractor skips). Single source of truth for file-part
// parsing across live chat, activity log replay, and any future
// consumers.
/**
* Load chat history from the activity_logs database via the platform API.
* Uses source=canvas to only get user-initiated messages (not agent-to-agent).
@ -75,12 +91,19 @@ async function loadMessagesFromDB(workspaceId: string): Promise<{ messages: Chat
messages.push(createMessage("user", userText));
}
// Extract agent response
// Extract agent response — text AND any file attachments so a
// chat reload surfaces historical download chips, not just plain
// text. `result` is nested on successful A2A responses; some
// older rows stored the raw `result` payload at the top level,
// so fall back to the body itself when `.result` is absent.
if (a.response_body) {
const text = extractResponseText(a.response_body);
if (text) {
const attachments = extractFilesFromTask(
(a.response_body.result ?? a.response_body) as Record<string, unknown>,
);
if (text || attachments.length > 0) {
const role = a.status === "error" || text.toLowerCase().startsWith("agent error") ? "system" : "agent";
messages.push({ ...createMessage(role, text), timestamp: a.created_at });
messages.push({ ...createMessage(role, text, attachments), timestamp: a.created_at });
}
}
}
@ -178,7 +201,16 @@ export function ChatTab({ workspaceId, data }: Props) {
function MyChatPanel({ workspaceId, data }: Props) {
const [messages, setMessages] = useState<ChatMessage[]>([]);
const [input, setInput] = useState("");
const [sending, setSending] = useState(!!data.currentTask);
// `sending` is strictly the "this tab kicked off a send and hasn't
// seen the reply yet" signal. Previously this was initialized from
// data.currentTask to pick up in-flight agent work on mount, but
// that conflated agent-busy (workspace heartbeat) with user-
// in-flight (local send): when the WS dropped a TASK_COMPLETE event,
// currentTask lingered, the component re-mounted with sending=true,
// and the Send button stayed disabled forever even though nothing
// local was in flight. For the "agent is busy, show spinner" UX,
// use data.currentTask directly in the render path.
const [sending, setSending] = useState(false);
const [thinkingElapsed, setThinkingElapsed] = useState(0);
const [activityLog, setActivityLog] = useState<string[]>([]);
const [loading, setLoading] = useState(true);
@ -189,6 +221,17 @@ function MyChatPanel({ workspaceId, data }: Props) {
const [error, setError] = useState<string | null>(null);
const [confirmRestart, setConfirmRestart] = useState(false);
const bottomRef = useRef<HTMLDivElement>(null);
// Files the user has picked but not yet sent. Cleared on send
// (upload success) or by the × on each pill.
const [pendingFiles, setPendingFiles] = useState<File[]>([]);
const [uploading, setUploading] = useState(false);
const fileInputRef = useRef<HTMLInputElement>(null);
// Guard against a double-click during the upload phase: React
// state updates from the click that started the upload haven't
// flushed yet, so the disabled-button logic sees `uploading=false`
// from the closure and lets a second `sendMessage` enter. A ref
// observes the latest value synchronously.
const sendInFlightRef = useRef(false);
// Load chat history from database on mount
useEffect(() => {
@ -231,8 +274,10 @@ function MyChatPanel({ workspaceId, data }: Props) {
// Dedupe in case the agent proactively pushed the same text the
// HTTP /a2a response already delivered (observed with the Hermes
// runtime, which emits both a reply body and a send_message_to_user
// push for the same content).
setMessages((prev) => appendMessageDeduped(prev, createMessage("agent", m.content)));
// push for the same content). Attachments ride along with the
// message so files returned by the A2A_RESPONSE WS path render
// their download chips.
setMessages((prev) => appendMessageDeduped(prev, createMessage("agent", m.content, m.attachments)));
}
if (sendingFromAPIRef.current && msgs.length > 0) {
setSending(false);
@ -339,10 +384,35 @@ function MyChatPanel({ workspaceId, data }: Props) {
const sendMessage = async () => {
const text = input.trim();
if (!text || !agentReachable || sending) return;
const filesToSend = pendingFiles;
// Allow sending if EITHER text OR attachments are present — a user
// can drop a file with no text and the agent still receives it.
if ((!text && filesToSend.length === 0) || !agentReachable || sending || uploading) return;
// Synchronous re-entry guard — see sendInFlightRef comment.
if (sendInFlightRef.current) return;
sendInFlightRef.current = true;
// Upload attachments first so we can include URIs in the A2A
// message parts. Sequential-before-send: a message with references
// to files not yet staged would fail agent-side; staging happens
// synchronously via /chat/uploads before message/send dispatch.
let uploaded: ChatAttachment[] = [];
if (filesToSend.length > 0) {
setUploading(true);
try {
uploaded = await uploadChatFiles(workspaceId, filesToSend);
} catch (e) {
setUploading(false);
sendInFlightRef.current = false;
setError(e instanceof Error ? `Upload failed: ${e.message}` : "Upload failed");
return;
}
setUploading(false);
}
setInput("");
setMessages((prev) => [...prev, createMessage("user", text)]);
setPendingFiles([]);
setMessages((prev) => [...prev, createMessage("user", text, uploaded)]);
setSending(true);
sendingFromAPIRef.current = true;
setError(null);
@ -356,40 +426,141 @@ function MyChatPanel({ workspaceId, data }: Props) {
parts: [{ kind: "text", text: m.content }],
}));
// A2A parts: text part (if any) + file parts (per attachment). The
// agent sees both in a single turn, matching the A2A spec shape.
const parts: A2APart[] = [];
if (text) parts.push({ kind: "text", text });
for (const att of uploaded) {
parts.push({
kind: "file",
file: {
name: att.name,
mimeType: att.mimeType,
uri: att.uri,
size: att.size,
},
});
}
// A2A calls can legitimately take minutes — LLM latency +
// multi-turn tool use is common on slower providers (Hermes+minimax,
// Claude Code invoking bash/file tools, etc.). The 15s default
// would silently abort the fetch here, leaving the server to
// complete the reply and the user staring at
// "agent may be unreachable". Match the upload timeout (60s × 2)
// for the happy-path ceiling; anything longer is genuinely stuck.
api.post<A2AResponse>(`/workspaces/${workspaceId}/a2a`, {
method: "message/send",
params: {
message: {
role: "user",
messageId: crypto.randomUUID(),
parts: [{ kind: "text", text }],
parts,
},
metadata: { history },
},
})
}, { timeoutMs: 120_000 })
.then((resp) => {
// Skip if the WS A2A_RESPONSE event already handled this response.
// Both paths (WS + HTTP) check sendingFromAPIRef — whichever clears
// it first wins, the other becomes a no-op (no duplicate messages).
if (!sendingFromAPIRef.current) return;
const replyText = extractReplyText(resp);
if (replyText) {
setMessages((prev) => appendMessageDeduped(prev, createMessage("agent", replyText)));
const replyFiles = extractFilesFromTask((resp?.result ?? {}) as Record<string, unknown>);
if (replyText || replyFiles.length > 0) {
setMessages((prev) =>
appendMessageDeduped(prev, createMessage("agent", replyText, replyFiles)),
);
}
setSending(false);
sendingFromAPIRef.current = false;
sendInFlightRef.current = false;
})
.catch(() => {
setSending(false);
sendingFromAPIRef.current = false;
sendInFlightRef.current = false;
setError("Failed to send message — agent may be unreachable");
});
};
const onFilesPicked = (fileList: FileList | null) => {
if (!fileList) return;
const picked = Array.from(fileList);
// Deduplicate against current pending set by name+size — user
// picking the same file twice shouldn't append it.
setPendingFiles((prev) => {
const keyed = new Set(prev.map((f) => `${f.name}:${f.size}`));
return [...prev, ...picked.filter((f) => !keyed.has(`${f.name}:${f.size}`))];
});
if (fileInputRef.current) fileInputRef.current.value = "";
};
const removePendingFile = (index: number) =>
setPendingFiles((prev) => prev.filter((_, i) => i !== index));
// Drag-and-drop staging. dragDepthRef counts enter vs leave events so
// the overlay doesn't flicker when the cursor crosses nested children
// (textarea, buttons) — dragenter/dragleave fire for every boundary.
const [dragOver, setDragOver] = useState(false);
const dragDepthRef = useRef(0);
const dropEnabled = agentReachable && !sending && !uploading;
const isFileDrag = (e: React.DragEvent) =>
Array.from(e.dataTransfer.types || []).includes("Files");
const onDragEnter = (e: React.DragEvent) => {
if (!dropEnabled || !isFileDrag(e)) return;
e.preventDefault();
dragDepthRef.current += 1;
setDragOver(true);
};
const onDragOver = (e: React.DragEvent) => {
if (!dropEnabled || !isFileDrag(e)) return;
e.preventDefault();
e.dataTransfer.dropEffect = "copy";
};
const onDragLeave = (e: React.DragEvent) => {
if (!dropEnabled || !isFileDrag(e)) return;
dragDepthRef.current = Math.max(0, dragDepthRef.current - 1);
if (dragDepthRef.current === 0) setDragOver(false);
};
const onDrop = (e: React.DragEvent) => {
if (!dropEnabled || !isFileDrag(e)) return;
e.preventDefault();
dragDepthRef.current = 0;
setDragOver(false);
onFilesPicked(e.dataTransfer.files);
};
const downloadAttachment = (att: ChatAttachment) => {
// Errors here are rare but user-visible (401 on a revoked token,
// 404 if the agent deleted the file). Surface via the inline
// error banner — the message list itself stays untouched.
downloadChatFile(workspaceId, att).catch((e) => {
setError(e instanceof Error ? `Download failed: ${e.message}` : "Download failed");
});
};
const isOnline = data.status === "online" || data.status === "degraded";
return (
<div className="flex flex-col h-full">
<div
className="flex flex-col h-full relative"
onDragEnter={onDragEnter}
onDragOver={onDragOver}
onDragLeave={onDragLeave}
onDrop={onDrop}
>
{dragOver && (
<div
className="absolute inset-0 z-20 flex items-center justify-center bg-blue-500/10 border-2 border-dashed border-blue-400 rounded pointer-events-none"
aria-live="polite"
>
<div className="bg-zinc-900/90 border border-blue-400/50 rounded-lg px-4 py-2 text-xs text-blue-200">
Drop to attach
</div>
</div>
)}
{/* Messages */}
<div className="flex-1 overflow-y-auto p-3 space-y-3">
{loading && (
@ -435,9 +606,23 @@ function MyChatPanel({ workspaceId, data }: Props) {
: "bg-zinc-800/80 text-zinc-200 border border-zinc-700/30"
}`}
>
<div className="prose prose-sm prose-invert max-w-none [&>p]:mb-1 [&>p:last-child]:mb-0">
<ReactMarkdown remarkPlugins={[remarkGfm]}>{msg.content}</ReactMarkdown>
</div>
{msg.content && (
<div className="prose prose-sm prose-invert max-w-none [&>p]:mb-1 [&>p:last-child]:mb-0">
<ReactMarkdown remarkPlugins={[remarkGfm]}>{msg.content}</ReactMarkdown>
</div>
)}
{msg.attachments && msg.attachments.length > 0 && (
<div className={`flex flex-wrap gap-1 ${msg.content ? "mt-1.5" : ""}`}>
{msg.attachments.map((att, i) => (
<AttachmentChip
key={`${msg.id}-${i}`}
attachment={att}
onDownload={downloadAttachment}
tone={msg.role === "user" ? "user" : "agent"}
/>
))}
</div>
)}
<div className="text-[9px] text-zinc-500 mt-1">
{new Date(msg.timestamp).toLocaleTimeString()}
</div>
@ -445,8 +630,11 @@ function MyChatPanel({ workspaceId, data }: Props) {
</div>
))}
{/* Thinking indicator */}
{sending && (
{/* Thinking indicator shows when this tab is awaiting a reply
OR when the workspace heartbeat reports an in-flight task
(covers the "agent is already busy when I open the tab" case
without locking the Send button on a stale currentTask). */}
{(sending || !!data.currentTask) && (
<div className="flex justify-start">
<div className="bg-zinc-800/50 border border-zinc-700/30 rounded-lg px-3 py-2 max-w-[85%]">
<div className="flex items-center gap-2 text-xs text-zinc-400">
@ -490,7 +678,37 @@ function MyChatPanel({ workspaceId, data }: Props) {
{/* Input */}
<div className="p-3 border-t border-zinc-800">
<div className="flex gap-2">
{pendingFiles.length > 0 && (
<div className="flex flex-wrap gap-1.5 mb-2">
{pendingFiles.map((f, i) => (
<PendingAttachmentPill
key={`${f.name}-${f.size}-${i}`}
file={f}
onRemove={() => removePendingFile(i)}
/>
))}
</div>
)}
<div className="flex gap-2 items-end">
<input
ref={fileInputRef}
type="file"
multiple
className="hidden"
onChange={(e) => onFilesPicked(e.target.files)}
aria-hidden="true"
/>
<button
onClick={() => fileInputRef.current?.click()}
disabled={!agentReachable || sending || uploading}
aria-label="Attach file"
title="Attach file"
className="p-2 bg-zinc-800 hover:bg-zinc-700 border border-zinc-700 rounded-lg text-zinc-400 hover:text-zinc-200 transition-colors shrink-0 disabled:opacity-40"
>
<svg width="14" height="14" viewBox="0 0 16 16" fill="none" aria-hidden="true">
<path d="M11 6.5 7 10.5a2 2 0 1 0 2.8 2.8l4-4a3.5 3.5 0 0 0-5-5l-4.5 4.5a5 5 0 0 0 7 7l4-4" stroke="currentColor" strokeWidth="1.4" strokeLinecap="round" strokeLinejoin="round" />
</svg>
</button>
<textarea
aria-label="Message to agent"
value={input}
@ -508,10 +726,10 @@ function MyChatPanel({ workspaceId, data }: Props) {
/>
<button
onClick={sendMessage}
disabled={!input.trim() || !agentReachable || sending}
disabled={(!input.trim() && pendingFiles.length === 0) || !agentReachable || sending || uploading}
className="px-4 py-2 bg-blue-600 hover:bg-blue-500 text-xs font-medium rounded-lg text-white disabled:opacity-30 transition-colors shrink-0"
>
Send
{uploading ? "Uploading…" : "Send"}
</button>
</div>
</div>
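The `dragDepthRef` bookkeeping above is easiest to see in isolation: `dragenter`/`dragleave` fire once per DOM boundary, so a plain boolean would flicker as the cursor crosses the textarea and buttons. A standalone sketch of the counter (illustration only, not the component code):

```typescript
// Overlay stays visible while depth > 0. Entering a nested child
// fires dragenter before the parent's dragleave, so the depth dips
// to 1, never to 0, until the cursor actually exits the pane.
class DragDepth {
  private depth = 0;
  enter(): boolean { this.depth += 1; return this.depth > 0; }
  leave(): boolean { this.depth = Math.max(0, this.depth - 1); return this.depth > 0; }
  drop(): void { this.depth = 0; }
}

const d = new DragDepth();
d.enter();                      // enter the pane: overlay on
d.enter();                      // enter the textarea: still on
d.leave();                      // leave the textarea: still on (depth 1)
const stillVisible = d.leave(); // leave the pane: overlay off
console.log(stillVisible);      // false
```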

View File

@ -104,12 +104,17 @@ interface RuntimeOption {
// Fallback used when /templates can't be fetched (offline, older backend).
// Keep in sync with manifest.json workspace_templates as a defensive default.
// Model + env suggestions only flow when the backend is reachable.
//
// Runtimes that manage their own config outside the platform's config.yaml
// template. For these, a missing config.yaml is expected — the user manages
// config via the runtime's own mechanism (e.g. hermes edits
// ~/.hermes/config.yaml on the workspace EC2 via the Terminal tab or its
// own CLI). Showing a "No config.yaml found" error for these is misleading.
const RUNTIMES_WITH_OWN_CONFIG = new Set<string>(["hermes", "external"]);
// template. For these, a missing config.yaml is expected and the form
// genuinely can't edit the runtime's settings (there's no platform file
// to write). Hermes is NOT on this list: it DOES ship a platform
// config.yaml via workspace-configs-templates/hermes that controls model,
// runtime_config, required_env, etc. Editing it through this form is
// exactly the point of the platform adaptor. The deep `~/.hermes/
// config.yaml` on the container is a separate runtime-internal file,
// not this one.
const RUNTIMES_WITH_OWN_CONFIG = new Set<string>(["external"]);
const FALLBACK_RUNTIME_OPTIONS: RuntimeOption[] = [
{ value: "", label: "LangGraph (default)", models: [] },
@ -151,9 +156,11 @@ export function ConfigTab({ workspaceId }: Props) {
// default `LangGraph`. See GH #1894.
let wsMetadataRuntime = "";
let wsMetadataModel = "";
let wsMetadataTier: number | null = null;
try {
const ws = await api.get<{ runtime?: string }>(`/workspaces/${workspaceId}`);
const ws = await api.get<{ runtime?: string; tier?: number }>(`/workspaces/${workspaceId}`);
wsMetadataRuntime = (ws.runtime || "").trim();
if (typeof ws.tier === "number") wsMetadataTier = ws.tier;
} catch { /* fall back to config.yaml */ }
try {
const m = await api.get<{ model?: string }>(`/workspaces/${workspaceId}/model`);
@ -165,11 +172,15 @@ export function ConfigTab({ workspaceId }: Props) {
const parsed = parseYaml(res.content);
setOriginalYaml(res.content);
setRawDraft(res.content);
// Merge: config.yaml wins for fields it declares, but workspace metadata
// wins for runtime + model when config.yaml doesn't set them.
// Merge: workspace-row metadata is authoritative for the DB-backed
// fields (tier, runtime, model). config.yaml often lags — handleSave
// PATCHes tier/runtime directly and a template snapshot in the
// container can differ from the live row. Show the DB value so the
// form doesn't contradict the node badge (issue: badge=T3, form=T2).
const merged = { ...DEFAULT_CONFIG, ...parsed } as ConfigData;
if (!merged.runtime && wsMetadataRuntime) merged.runtime = wsMetadataRuntime;
if (!merged.model && wsMetadataModel) merged.model = wsMetadataModel;
if (wsMetadataRuntime) merged.runtime = wsMetadataRuntime;
if (wsMetadataModel) merged.model = wsMetadataModel;
if (wsMetadataTier !== null) merged.tier = wsMetadataTier;
setConfig(merged);
} catch {
// No platform-managed config.yaml. Some runtimes (hermes, external)
@ -184,6 +195,7 @@ export function ConfigTab({ workspaceId }: Props) {
...DEFAULT_CONFIG,
runtime: wsMetadataRuntime,
model: wsMetadataModel,
...(wsMetadataTier !== null ? { tier: wsMetadataTier } : {}),
} as ConfigData);
} finally {
setLoading(false);
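The merge precedence described in the hunk above can be sketched as a standalone function (hypothetical name `mergeConfig`; the real code inlines this in the load effect): workspace-row metadata beats config.yaml, which beats defaults.

```typescript
interface ConfigData { runtime?: string; model?: string; tier?: number; }

// DB-backed workspace metadata is authoritative; config.yaml only
// fills fields the row doesn't set; defaults fill the rest.
function mergeConfig(
  defaults: ConfigData,
  parsedYaml: ConfigData,
  ws: { runtime?: string; model?: string; tier?: number | null },
): ConfigData {
  const merged = { ...defaults, ...parsedYaml };
  if (ws.runtime) merged.runtime = ws.runtime;
  if (ws.model) merged.model = ws.model;
  if (ws.tier != null) merged.tier = ws.tier;
  return merged;
}

mergeConfig({ tier: 1 }, { tier: 2, model: "gpt" }, { tier: 3 });
// → { tier: 3, model: "gpt" } — the DB tier wins over the yaml tier
```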

View File

@ -0,0 +1,94 @@
"use client";
// Small presentational components for chat attachments. Kept in a
// separate file so ChatTab.tsx stays focused on state + send/receive
// orchestration. Both variants share the file-icon + name + size
// layout; the only difference is the trailing action (remove for
// pending, download for completed).
import type { ChatAttachment } from "./types";
function formatSize(bytes: number | undefined): string {
if (bytes == null) return "";
if (bytes < 1024) return `${bytes} B`;
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(0)} KB`;
return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}
/** Inline pill for a file that the user has picked but not yet sent.
* Renders above the textarea; clicking × pops it from the pending
* list without uploading. */
export function PendingAttachmentPill({
file,
onRemove,
}: {
file: File;
onRemove: () => void;
}) {
return (
<div className="flex items-center gap-1.5 rounded-md border border-zinc-700/60 bg-zinc-800/80 px-2 py-1 text-[10px] text-zinc-300 max-w-[200px]">
<FileGlyph className="text-zinc-400 shrink-0" />
<span className="truncate" title={file.name}>{file.name}</span>
<span className="text-zinc-500 shrink-0 tabular-nums">{formatSize(file.size)}</span>
<button
onClick={onRemove}
aria-label={`Remove ${file.name}`}
className="ml-0.5 text-zinc-500 hover:text-zinc-200 transition-colors shrink-0"
>
<svg width="10" height="10" viewBox="0 0 16 16" fill="none" aria-hidden="true">
<path d="M4 4l8 8M12 4l-8 8" stroke="currentColor" strokeWidth="1.6" strokeLinecap="round" />
</svg>
</button>
</div>
);
}
/** Chip rendered inside a message bubble for a sent/received file.
* Clicking triggers the download via the passed onDownload callback
* so the parent controls workspace-scoped URL resolution. */
export function AttachmentChip({
attachment,
onDownload,
tone,
}: {
attachment: ChatAttachment;
onDownload: (a: ChatAttachment) => void;
tone: "user" | "agent";
}) {
const toneClasses =
tone === "user"
? "border-blue-400/30 bg-blue-600/20 hover:bg-blue-600/30 text-blue-100"
: "border-zinc-600/50 bg-zinc-700/40 hover:bg-zinc-600/50 text-zinc-100";
return (
<button
onClick={() => onDownload(attachment)}
title={`Download ${attachment.name}`}
className={`flex items-center gap-1.5 rounded-md border px-2 py-1 text-[10px] transition-colors max-w-full ${toneClasses}`}
>
<FileGlyph className="shrink-0 opacity-70" />
<span className="truncate">{attachment.name}</span>
{attachment.size != null && (
<span className="opacity-60 shrink-0 tabular-nums">{formatSize(attachment.size)}</span>
)}
<DownloadGlyph className="opacity-70 shrink-0" />
</button>
);
}
function FileGlyph({ className }: { className?: string }) {
return (
<svg width="10" height="10" viewBox="0 0 16 16" fill="none" className={className} aria-hidden="true">
<path d="M4 2h5l3 3v9a1 1 0 0 1-1 1H4a1 1 0 0 1-1-1V3a1 1 0 0 1 1-1Z" stroke="currentColor" strokeWidth="1.3" strokeLinejoin="round" />
<path d="M9 2v3h3" stroke="currentColor" strokeWidth="1.3" strokeLinejoin="round" />
</svg>
);
}
function DownloadGlyph({ className }: { className?: string }) {
return (
<svg width="10" height="10" viewBox="0 0 16 16" fill="none" className={className} aria-hidden="true">
<path d="M8 2v9M4 7l4 4 4-4" stroke="currentColor" strokeWidth="1.4" strokeLinecap="round" strokeLinejoin="round" />
<path d="M3 13h10" stroke="currentColor" strokeWidth="1.4" strokeLinecap="round" />
</svg>
);
}

View File

@ -4,6 +4,7 @@ import {
extractResponseText,
extractAgentText,
extractTextsFromParts,
extractFilesFromTask,
} from "../message-parser";
describe("extractRequestText", () => {
@ -133,3 +134,71 @@ describe("extractTextsFromParts", () => {
expect(extractTextsFromParts(parts)).toBe("Only text");
});
});
describe("extractFilesFromTask", () => {
it("pulls A2A file parts out of a result", () => {
const task = {
parts: [
{ kind: "text", text: "here's the report" },
{
kind: "file",
file: { name: "report.pdf", mimeType: "application/pdf", uri: "workspace:/reports/report.pdf", size: 4096 },
},
],
};
const files = extractFilesFromTask(task);
expect(files).toEqual([
{ name: "report.pdf", mimeType: "application/pdf", uri: "workspace:/reports/report.pdf", size: 4096 },
]);
});
it("recovers a filename from the URI when `name` is absent", () => {
const task = {
parts: [
{ kind: "file", file: { uri: "workspace:/workspace/out/graph.png" } },
],
};
const files = extractFilesFromTask(task);
expect(files[0].name).toBe("graph.png");
});
it("skips file parts without a URI (inline bytes are not supported yet)", () => {
const task = {
parts: [
{ kind: "file", file: { name: "inline.bin", bytes: "AAA=" } },
],
};
expect(extractFilesFromTask(task)).toEqual([]);
});
it("walks artifacts[] so file parts nested inside artifact envelopes are found", () => {
const task = {
artifacts: [
{
parts: [
{ kind: "file", file: { name: "trace.log", uri: "workspace:/logs/trace.log" } },
],
},
],
};
const files = extractFilesFromTask(task);
expect(files[0]).toMatchObject({ name: "trace.log", uri: "workspace:/logs/trace.log" });
});
it("returns [] on malformed input rather than throwing", () => {
expect(extractFilesFromTask({})).toEqual([]);
expect(extractFilesFromTask({ parts: "not-an-array" } as unknown as Record<string, unknown>)).toEqual([]);
});
it("walks result.message.parts — the non-task reply shape some A2A servers use", () => {
const task = {
message: {
parts: [
{ kind: "file", file: { name: "out.txt", uri: "workspace:/workspace/out.txt" } },
],
},
};
const files = extractFilesFromTask(task);
expect(files[0]).toMatchObject({ name: "out.txt", uri: "workspace:/workspace/out.txt" });
});
});

View File

@ -0,0 +1,41 @@
import { describe, it, expect } from "vitest";
import { resolveAttachmentHref } from "../uploads";
describe("resolveAttachmentHref — URI scheme normalisation", () => {
const wsId = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee";
it("rewrites the canonical workspace:<path> scheme to /chat/download", () => {
const url = resolveAttachmentHref(wsId, "workspace:/workspace/report.pdf");
expect(url).toContain(`/workspaces/${wsId}/chat/download`);
expect(url).toContain(encodeURIComponent("/workspace/report.pdf"));
});
it("accepts bare absolute container paths (some agents omit the scheme)", () => {
const url = resolveAttachmentHref(wsId, "/workspace/report.pdf");
expect(url).toContain(`/workspaces/${wsId}/chat/download`);
expect(url).toContain(encodeURIComponent("/workspace/report.pdf"));
});
it("accepts file:/// URIs pointing into an allowed root", () => {
const url = resolveAttachmentHref(wsId, "file:///workspace/report.pdf");
expect(url).toContain(`/workspaces/${wsId}/chat/download`);
expect(url).toContain(encodeURIComponent("/workspace/report.pdf"));
});
it("passes through HTTP(S) URIs unchanged so off-platform artefacts still render", () => {
const external = "https://example.com/static/report.pdf";
expect(resolveAttachmentHref(wsId, external)).toBe(external);
});
it("passes through container paths that are not under any allowed root", () => {
// /etc/passwd looks like a path but isn't one of the allowed
// roots — falling back to raw passthrough forces the caller into
// the external-URL branch, which opens a new tab and lets the
// browser refuse. Rewriting would 400 anyway server-side.
expect(resolveAttachmentHref(wsId, "/etc/passwd")).toBe("/etc/passwd");
});
it("passes through unknown schemes unchanged", () => {
expect(resolveAttachmentHref(wsId, "s3://bucket/key")).toBe("s3://bucket/key");
});
});

View File

@ -32,6 +32,64 @@ export function extractTextsFromParts(parts: unknown): string | null {
return texts.length > 0 ? texts.join("\n") : null;
}
export interface ParsedFilePart {
name: string;
uri: string;
mimeType?: string;
size?: number;
}
/** Extract file parts from an A2A response. Walks parts[] + artifacts[].
* Per the A2A spec a file part looks like:
* { kind: "file", file: { name, mimeType, uri | bytes } }
 * We only surface parts that carry a `uri`; inline bytes would
 * require a different renderer (data URL) and are out of scope for
 * MVP. Names fall back to the URI's basename when absent. */
export function extractFilesFromTask(task: Record<string, unknown>): ParsedFilePart[] {
const out: ParsedFilePart[] = [];
const pushFromParts = (parts: unknown) => {
if (!Array.isArray(parts)) return;
for (const raw of parts as Array<Record<string, unknown>>) {
if (raw.kind !== "file" && raw.type !== "file") continue;
const file = (raw.file ?? raw) as Record<string, unknown>;
const uri = typeof file.uri === "string" ? file.uri : "";
if (!uri) continue;
const name = (typeof file.name === "string" && file.name) || basename(uri);
out.push({
name,
uri,
mimeType: typeof file.mimeType === "string" ? file.mimeType : undefined,
size: typeof file.size === "number" ? file.size : undefined,
});
}
};
try {
pushFromParts(task.parts);
const artifacts = task.artifacts as Array<Record<string, unknown>> | undefined;
if (artifacts) for (const a of artifacts) pushFromParts(a.parts);
const status = task.status as Record<string, unknown> | undefined;
if (status?.message) {
const msg = status.message as Record<string, unknown>;
pushFromParts(msg.parts);
}
// Some A2A servers wrap a non-task reply as
// {result: {message: {parts: [...]}}} rather than {result: {parts}}.
// Without this branch we'd silently drop file parts returned by
// third-party implementations.
const message = task.message as Record<string, unknown> | undefined;
if (message) pushFromParts(message.parts);
} catch {
/* tolerate malformed shapes — chat falls through to text-only */
}
return out;
}
function basename(uri: string): string {
const cleaned = uri.replace(/^workspace:/, "").replace(/^https?:\/\//, "");
const slash = cleaned.lastIndexOf("/");
return slash >= 0 ? cleaned.slice(slash + 1) : cleaned || "file";
}
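For reference, the basename fallback above resolves the common URI shapes like this (local copy for illustration):

```typescript
// Strips the workspace:/https schemes, then takes the last path
// segment; empty input degrades to the literal "file".
function basename(uri: string): string {
  const cleaned = uri.replace(/^workspace:/, "").replace(/^https?:\/\//, "");
  const slash = cleaned.lastIndexOf("/");
  return slash >= 0 ? cleaned.slice(slash + 1) : cleaned || "file";
}

console.log(basename("workspace:/workspace/out/graph.png")); // "graph.png"
console.log(basename("https://example.com/a/report.pdf"));   // "report.pdf"
console.log(basename(""));                                   // "file"
```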
/** Extract user message text from an activity log request_body */
export function extractRequestText(body: Record<string, unknown> | null): string {
if (!body) return "";

View File

@ -1,12 +1,38 @@
/** One file attached to a chat message. Shared shape for both
* directions: when a user attaches a file the UI uploads it and
* stashes the returned metadata here; when an agent returns a
* `kind: file` part in an A2A response, the parser populates the
 * same fields. `uri` uses the `workspace:<abs-path>` scheme the
 * server returns; the renderer translates that into a download
 * request against GET /workspaces/:id/chat/download. */
export interface ChatAttachment {
name: string;
uri: string;
mimeType?: string;
size?: number;
}
export interface ChatMessage {
id: string;
role: "user" | "agent" | "system";
content: string;
/** Attachments sent with or returned alongside this message. */
attachments?: ChatAttachment[];
timestamp: string; // ISO string for serialization
}
export function createMessage(role: ChatMessage["role"], content: string): ChatMessage {
return { id: crypto.randomUUID(), role, content, timestamp: new Date().toISOString() };
export function createMessage(
role: ChatMessage["role"],
content: string,
attachments?: ChatAttachment[],
): ChatMessage {
return {
id: crypto.randomUUID(),
role,
content,
attachments: attachments && attachments.length > 0 ? attachments : undefined,
timestamp: new Date().toISOString(),
};
}
// appendMessageDeduped adds a ChatMessage to `prev` unless the tail
@ -25,11 +51,23 @@ export function createMessage(role: ChatMessage["role"], content: string): ChatM
// messages ("hi", "hi") from a real user/agent still render.
export function appendMessageDeduped(prev: ChatMessage[], msg: ChatMessage, dedupeWindowMs = 3000): ChatMessage[] {
const cutoff = Date.now() - dedupeWindowMs;
const sig = attachmentSignature(msg.attachments);
const alreadyThere = prev.some((m) => {
if (m.role !== msg.role || m.content !== msg.content) return false;
// Attachments participate in the dedupe key so a text-only push
// doesn't shadow the file-carrying HTTP response (and vice versa).
// When both carry the same text AND the same files, collapse.
if (attachmentSignature(m.attachments) !== sig) return false;
const t = Date.parse(m.timestamp);
return !Number.isNaN(t) && t >= cutoff;
});
if (alreadyThere) return prev;
return [...prev, msg];
}
function attachmentSignature(atts: ChatAttachment[] | undefined): string {
if (!atts || atts.length === 0) return "";
// URI is the stable identity — name can differ across delivery
// paths (agent vs our parser's basename fallback).
return atts.map((a) => a.uri).sort().join("|");
}
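The dedupe key's two properties (order-insensitive, name-insensitive) can be exercised standalone. A minimal sketch — `signature` re-states `attachmentSignature` so the snippet is self-contained, and the URIs are hypothetical:

```typescript
interface Att { name: string; uri: string }

// Minimal re-statement of attachmentSignature for illustration.
function signature(atts?: Att[]): string {
  if (!atts || atts.length === 0) return "";
  return atts.map((a) => a.uri).sort().join("|");
}

// Order-insensitive: the sort makes [a, b] and [b, a] collapse.
const s1 = signature([
  { name: "b.csv", uri: "workspace:/workspace/b.csv" },
  { name: "a.csv", uri: "workspace:/workspace/a.csv" },
]);
const s2 = signature([
  { name: "a.csv", uri: "workspace:/workspace/a.csv" },
  { name: "b.csv", uri: "workspace:/workspace/b.csv" },
]);
console.log(s1 === s2); // true

// Name-insensitive: only the URI participates, so the agent's own
// filename and our parser's basename fallback produce the same key.
console.log(
  signature([{ name: "renamed.csv", uri: "workspace:/workspace/a.csv" }]) ===
    signature([{ name: "a.csv", uri: "workspace:/workspace/a.csv" }]),
); // true
```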

View File

@ -0,0 +1,135 @@
import { PLATFORM_URL } from "@/lib/api";
import { getTenantSlug } from "@/lib/tenant";
import type { ChatAttachment } from "./types";
/** Chat attachments are intentionally uploaded via a direct fetch()
* instead of the `api.post` helper: `api.post` JSON-stringifies the
* body, which would 500 on a Blob. This function mirrors the helper's
* header plumbing (tenant slug, admin token, credentials) so SaaS and
* self-hosted callers work the same way. */
export async function uploadChatFiles(
workspaceId: string,
files: File[],
): Promise<ChatAttachment[]> {
if (files.length === 0) return [];
const form = new FormData();
for (const f of files) form.append("files", f, f.name);
const headers: Record<string, string> = {};
const slug = getTenantSlug();
if (slug) headers["X-Molecule-Org-Slug"] = slug;
const adminToken = process.env.NEXT_PUBLIC_ADMIN_TOKEN;
if (adminToken) headers["Authorization"] = `Bearer ${adminToken}`;
// Uploads legitimately take a while on cold cache (tar write +
// docker cp into the container). 60s is comfortable for the 25MB/
// 50MB caps the server enforces.
const res = await fetch(`${PLATFORM_URL}/workspaces/${workspaceId}/chat/uploads`, {
method: "POST",
headers,
body: form,
credentials: "include",
signal: AbortSignal.timeout(60_000),
});
if (!res.ok) {
const text = await res.text().catch(() => "");
throw new Error(`upload failed: ${res.status} ${text}`);
}
const json = (await res.json()) as { files: ChatAttachment[] };
return json.files ?? [];
}
/** Resolve a file URI into a browser-downloadable URL. Accepts:
* - `workspace:<abs-path>` (our canonical form)
* - `file:///workspace/...` (some agents emit this)
* - `/workspace/...` (bare absolute path inside the container)
* Everything that looks like an allowed-root container path is
* rewritten to the authenticated /chat/download endpoint. HTTP(S)
* URIs pass through unchanged so we can also render links to
* artefacts hosted off-platform. Unknown schemes fall back to the
* raw URI; the caller gets to decide how to render it. */
export function resolveAttachmentHref(
workspaceId: string,
uri: string,
): string {
const containerPath = normalizeWorkspaceUri(uri);
if (containerPath) {
return `${PLATFORM_URL}/workspaces/${workspaceId}/chat/download?path=${encodeURIComponent(containerPath)}`;
}
return uri;
}
/** Extracts the absolute container path from a workspace-scoped URI,
* or null if the URI isn't a container path. The matching roots
* mirror the server's `allowedRoots` allowlist. */
const ALLOWED_CONTAINER_ROOTS = ["/configs", "/workspace", "/home", "/plugins"];
function normalizeWorkspaceUri(uri: string): string | null {
let path: string | null = null;
if (uri.startsWith("workspace:")) {
path = uri.slice("workspace:".length);
} else if (uri.startsWith("file:///")) {
path = uri.slice("file://".length); // keep the leading slash
} else if (uri.startsWith("/")) {
path = uri;
}
if (!path) return null;
// Only rewrite when the path lands in an allowed root; otherwise
// return null so the caller falls through to raw-URI handling
// (which will open a new tab for HTTP-ish schemes).
for (const root of ALLOWED_CONTAINER_ROOTS) {
if (path === root || path.startsWith(root + "/")) return path;
}
return null;
}
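The accepted-form mapping can be sketched standalone. This is a minimal re-statement of `normalizeWorkspaceUri` (the file paths are illustrative, not from the codebase):

```typescript
// Mirrors the server's allowedRoots allowlist, as above.
const ROOTS = ["/configs", "/workspace", "/home", "/plugins"];

// Minimal re-statement of normalizeWorkspaceUri for illustration.
function toContainerPath(uri: string): string | null {
  let p: string | null = null;
  if (uri.startsWith("workspace:")) p = uri.slice("workspace:".length);
  else if (uri.startsWith("file:///")) p = uri.slice("file://".length); // keep leading slash
  else if (uri.startsWith("/")) p = uri;
  if (!p) return null;
  const path = p; // re-narrow for the closure below
  return ROOTS.some((r) => path === r || path.startsWith(r + "/")) ? path : null;
}

// All three container forms normalize to the same path:
console.log(toContainerPath("workspace:/workspace/report.pdf")); // "/workspace/report.pdf"
console.log(toContainerPath("file:///workspace/report.pdf"));    // "/workspace/report.pdf"
console.log(toContainerPath("/workspace/report.pdf"));           // "/workspace/report.pdf"
// Anything outside the allowlist (or an http URL) yields null, so
// the caller falls through to raw-URI handling.
console.log(toContainerPath("https://example.com/report.pdf"));  // null
console.log(toContainerPath("/etc/passwd"));                     // null
```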
/** Trigger a browser download for an attachment. Uses fetch+blob
* rather than an anchor navigation because the download endpoint
* requires workspace auth and the browser won't attach
* `Authorization: Bearer` or `X-Molecule-Org-Slug` to a bare anchor
* click. A 25MB per-file cap server-side keeps the blob buffer
* bounded. HTTP(S) URIs skip the fetch path and open directly
* since they're off-platform artefacts that we don't own auth for. */
export async function downloadChatFile(
workspaceId: string,
attachment: ChatAttachment,
): Promise<void> {
const href = resolveAttachmentHref(workspaceId, attachment.uri);
const isContainerPath = normalizeWorkspaceUri(attachment.uri) !== null;
if (!isContainerPath) {
// External URL — let the browser navigate. Opens in new tab so
// the canvas context survives a navigation. `href` here is the
// raw URI (http(s), or anything else the agent sent back).
window.open(href, "_blank", "noopener,noreferrer");
return;
}
const headers: Record<string, string> = {};
const slug = getTenantSlug();
if (slug) headers["X-Molecule-Org-Slug"] = slug;
const adminToken = process.env.NEXT_PUBLIC_ADMIN_TOKEN;
if (adminToken) headers["Authorization"] = `Bearer ${adminToken}`;
const res = await fetch(href, {
headers,
credentials: "include",
signal: AbortSignal.timeout(60_000),
});
if (!res.ok) {
throw new Error(`download failed: ${res.status}`);
}
const blob = await res.blob();
// Revoke the object URL after the click — browsers hold the blob
// until the URL is either revoked or the document unloads. 30s is
// plenty of headroom for the click → save dialog round-trip.
const url = URL.createObjectURL(blob);
const a = document.createElement("a");
a.href = url;
a.download = attachment.name;
a.rel = "noopener";
document.body.appendChild(a);
a.click();
a.remove();
setTimeout(() => URL.revokeObjectURL(url), 30_000);
}

View File

@ -1,7 +1,7 @@
import type { Node, Edge } from "@xyflow/react";
import type { WSMessage } from "./socket";
import type { WorkspaceNodeData } from "./canvas";
import { extractResponseText } from "@/components/tabs/chat/message-parser";
import { extractResponseText, extractFilesFromTask } from "@/components/tabs/chat/message-parser";
// ---------------------------------------------------------------------------
// Monotonically increasing counter used to assign grid positions.
@ -21,13 +21,46 @@ import { extractResponseText } from "@/components/tabs/chat/message-parser";
//
// A monotonic counter is immune to deletions: it only ever increases.
// ---------------------------------------------------------------------------
import { appendClass, removeClass, scheduleNodeClassRemoval } from "./classNames";
let _provisioningSequence = 0;
/** Reset the sequence counter — exposed for test teardown only. */
export function resetProvisioningSequence(): void {
_provisioningSequence = 0;
_pendingOnline.clear();
}
/** WORKSPACE_ONLINE events that arrived BEFORE the matching
* WORKSPACE_PROVISIONING are buffered here so the late-arriving
* provisioning event can immediately flip to the correct status
* instead of leaving the node stuck as "provisioning" forever.
* Cleared when applied, or on module reset (tests). */
const _pendingOnline = new Set<string>();
/** Debounced parent-grow. Each child arrival schedules this; the
* timer keeps resetting as more siblings land, so the actual
* width/height update runs ONCE after arrivals go quiet. Avoids
* the visible size-pulse that happened when growParentsToFitChildren
* ran per event. */
let _growTimer: ReturnType<typeof setTimeout> | null = null;
function scheduleParentGrow(): void {
if (typeof window === "undefined") return;
if (_growTimer) clearTimeout(_growTimer);
_growTimer = setTimeout(() => {
_growTimer = null;
import("./canvas").then(({ useCanvasStore }) => {
useCanvasStore.getState().growParentsToFitChildren?.();
});
}, 300);
}
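The scheduling shape above is a standard trailing-edge debounce. A generic sketch of the same pattern, detached from the store wiring (the grow counter here is purely illustrative):

```typescript
// Trailing-edge debounce: repeated calls within `ms` collapse into
// a single run after the calls go quiet. Each call resets the timer.
function debounce(fn: () => void, ms: number): () => void {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return () => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => {
      timer = null;
      fn();
    }, ms);
  };
}

let growRuns = 0;
const scheduleGrow = debounce(() => { growRuns++; }, 300);
// A burst of sibling arrivals...
scheduleGrow();
scheduleGrow();
scheduleGrow();
// ...produces exactly one grow pass ~300ms after the last call.
```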
// (absoluteNodePosition was used by an earlier "spawn from parent"
// revision that subtracted parent absolute coords from server-sent
// absolute child coords. The server now ships parent-relative coords
// directly, so the walk is no longer needed. Deleted rather than
// kept as dead code.)
/**
* Standalone event handler extracted from the canvas store.
* Applies a single WebSocket event to the current node/edge state.
@ -38,7 +71,7 @@ export function handleCanvasEvent(
nodes: Node<WorkspaceNodeData>[];
edges: Edge[];
selectedNodeId: string | null;
agentMessages: Record<string, Array<{ id: string; content: string; timestamp: string; attachments?: Array<{ name: string; uri: string; mimeType?: string; size?: number }> }>>;
},
set: (partial: Record<string, unknown>) => void,
): void {
@ -47,14 +80,44 @@ export function handleCanvasEvent(
switch (msg.event) {
case "WORKSPACE_ONLINE": {
const existing = nodes.find((n) => n.id === msg.workspace_id);
if (!existing) {
// PROVISIONING event hasn't been applied yet (WS reorder or
// this tab joined mid-deploy). Buffer so the later PROVISIONING
// handler can flip status in one pass instead of leaving the
// node stuck in "provisioning" forever.
_pendingOnline.add(msg.workspace_id);
break;
}
// Flip incoming edge from blueprint → laser so the link is
// drawn solid the moment this child is live. The laser class
// plays the stroke-dashoffset keyframe once; after ~500ms the
// edge falls back to the default solid style (see
// org-deploy.css and the follow-up setTimeout below).
const updatedEdges = edges.map((e) =>
e.target === msg.workspace_id && e.className?.includes("mol-deploy-edge-blueprint")
? { ...e, className: "mol-deploy-edge-laser" }
: e,
);
set({
edges: updatedEdges,
nodes: nodes.map((n) =>
n.id === msg.workspace_id
? { ...n, data: { ...n.data, status: "online" } }
: n,
),
});
// Remove the laser class after its keyframe ends so the edge
// settles into the app's default solid styling. Fire-and-forget.
if (typeof window !== "undefined") {
const targetEdgeId = `${existing.data.parentId ?? ""}-${msg.workspace_id}`;
window.setTimeout(() => {
const s = get();
set({
edges: s.edges.map((e) =>
e.id === targetEdgeId ? { ...e, className: undefined } : e,
),
});
}, 600);
}
break;
}
@ -113,25 +176,73 @@ export function handleCanvasEvent(
),
});
} else {
// Payload may carry parent_id + final x/y (org import broadcasts
// these so the canvas can animate the "spawn from parent" motion).
// Standalone workspace creates still omit them — fall back to the
// grid-slot behaviour that handled that case historically.
const parentIdRaw = (msg.payload.parent_id as string | undefined) ?? null;
const finalX = msg.payload.x as number | undefined;
const finalY = msg.payload.y as number | undefined;
let spawnX: number;
let spawnY: number;
let targetX: number;
let targetY: number;
let parentId: string | null = null;
// Place the node at its final slot immediately — no
// spring-from-parent motion. The earlier "materialize from
// parent then tween to target" was expensive (two set()
// calls + rAF) and produced wrong offsets because the
// server sends absolute coords computed against the template's
// own coord system while the client had placed the parent at
// a grid slot, so the target math always landed off-grid.
// Now: server coords are parent-relative (see org_import.go),
// we trust them verbatim.
const parentInStore = parentIdRaw
? nodes.find((n) => n.id === parentIdRaw)
: undefined;
if (parentIdRaw && parentInStore && finalX !== undefined && finalY !== undefined) {
targetX = finalX;
targetY = finalY;
parentId = parentIdRaw;
} else {
// Standalone create OR org-child whose parent hasn't arrived
// yet (rare WS reorder) — monotonic-grid placement. The
// follow-up hydrate pass reconciles parent_id + the correct
// nested position if parent lands later.
const GRID_COLS = 4;
const COL_SPACING = 320;
const ROW_SPACING = 160;
const GRID_ORIGIN_X = 100;
const GRID_ORIGIN_Y = 100;
const idx = _provisioningSequence++;
targetX = GRID_ORIGIN_X + (idx % GRID_COLS) * COL_SPACING;
targetY = GRID_ORIGIN_Y + Math.floor(idx / GRID_COLS) * ROW_SPACING;
}
spawnX = targetX;
spawnY = targetY;
// Parent→child relationship is already visible via React
// Flow's nested rendering (the child card sits INSIDE the
// parent container). An explicit edge on top of that was
// visual double-counting and made the canvas look busy;
// removed per demo feedback. A2A edges (showA2AEdges) still
// render when enabled — those represent runtime traffic,
// which nesting doesn't express.
set({
nodes: [
...nodes,
{
id: msg.workspace_id,
type: "workspaceNode",
position: { x: spawnX, y: spawnY },
// React Flow's parentId (distinct from data.parentId)
// triggers parent-relative positioning. Set it when the
// server told us this is an org-import child so the
// node renders nested inside the parent container.
...(parentId ? { parentId } : {}),
className: "mol-deploy-spawn",
data: {
name: (msg.payload.name as string) ?? "New Workspace",
status: "provisioning",
@ -143,7 +254,7 @@ export function handleCanvasEvent(
lastErrorRate: 0,
lastSampleError: "",
url: "",
parentId, // data.parentId mirrors React Flow's parentId
currentTask: "",
runtime: (msg.payload.runtime as string) ?? "",
needsRestart: false,
@ -152,8 +263,69 @@ export function handleCanvasEvent(
],
});
// Grow the parent to fit the just-landed child. DEBOUNCED
// across rapid sibling arrivals — firing width/height updates
// on every child made the parent card visibly pulse in size
// as each kid landed, which read as the parent "flashing
// around". One grow pass ~300ms after the last arrival
// coalesces the whole burst into a single layout change.
if (parentId && typeof window !== "undefined") {
scheduleParentGrow();
}
// Parent-border pulse removed per demo feedback — the soft
// box-shadow ring on each arrival compounded with the size
// grow to make the whole parent card look unstable. The
// dim-light signal on the provisioning child is sufficient
// acknowledgement that something is happening.
// Remove the one-shot spawn class after the keyframe ends so
// future re-renders don't replay it.
scheduleNodeClassRemoval(msg.workspace_id, "mol-deploy-spawn", 400, get, set);
// Auto-pan+zoom to the whole deploying org after each
// arrival so the user always sees the full picture — unless
// they've panned themselves (handled by the viewport hook,
// which aborts the fit when the user moved after the last
// auto-fit). Event name matches the existing handler in
// useCanvasViewport that knows how to compute subtree bounds.
if (parentIdRaw && typeof window !== "undefined") {
window.dispatchEvent(
new CustomEvent("molecule:fit-deploying-org", {
detail: { rootId: parentIdRaw },
}),
);
}
// Race handling: if a WORKSPACE_ONLINE event beat the
// matching PROVISIONING to this tab, the online flag was
// buffered in _pendingOnline. Apply it now so the node
// doesn't stay stuck as "provisioning" forever.
//
// Only flip to "online" if the current status is still
// "provisioning" at drain time. Otherwise a WORKSPACE_DEGRADED
// / FAILED / PAUSED that arrived between the set() above and
// the scheduled drain would be silently clobbered — the
// buffered ONLINE is stale by then.
if (_pendingOnline.has(msg.workspace_id)) {
_pendingOnline.delete(msg.workspace_id);
if (typeof window !== "undefined") {
window.setTimeout(() => {
const s = get();
set({
nodes: s.nodes.map((n) =>
n.id === msg.workspace_id && n.data.status === "provisioning"
? { ...n, data: { ...n.data, status: "online" } }
: n,
),
});
}, 0);
}
}
// Pan the canvas to the new node (standalone create only —
// during an org import, zooming to every child chases the
// spawn animation around the viewport which is jarring).
if (!parentIdRaw && typeof window !== "undefined") {
window.dispatchEvent(
new CustomEvent("molecule:pan-to-node", {
detail: { nodeId: msg.workspace_id },
@ -252,12 +424,19 @@ export function handleCanvasEvent(
}
case "A2A_RESPONSE": {
// A2A proxy completed — extract response text AND any `kind: file`
// parts. Without the file extraction, agent-returned attachments
// delivered via this WebSocket path would disappear (the canvas
// would render a text-only message while the HTTP fallback
// rendered the same reply with download chips, depending on
// which delivery path raced to completion first).
const responseBody = msg.payload.response_body as Record<string, unknown> | undefined;
if (responseBody) {
const text = extractResponseText(responseBody);
const attachments = extractFilesFromTask(
(responseBody.result ?? responseBody) as Record<string, unknown>,
);
if (text || attachments.length > 0) {
const { agentMessages } = get();
const existing = agentMessages[msg.workspace_id] || [];
set({
@ -265,7 +444,12 @@ export function handleCanvasEvent(
...agentMessages,
[msg.workspace_id]: [
...existing,
{
id: crypto.randomUUID(),
content: text,
timestamp: new Date().toISOString(),
attachments: attachments.length > 0 ? attachments : undefined,
},
],
},
});

View File

@ -171,6 +171,15 @@ interface CanvasState {
setPendingDelete: (
v: { id: string; name: string; hasChildren: boolean; children: { id: string; name: string }[] } | null
) => void;
/** Node IDs whose DELETE request is in flight. Populated the moment
* the user confirms a cascade delete; drained as WORKSPACE_REMOVED
* events strip the nodes (or all-at-once on request failure). Lets
* the canvas render the "don't touch — something is happening"
* treatment (dim + non-draggable) during the network round trip
* and the server-side cascade, matching the deploy-lock UX. */
deletingIds: Set<string>;
beginDelete: (ids: Iterable<string>) => void;
endDelete: (ids: Iterable<string>) => void;
searchOpen: boolean;
setSearchOpen: (open: boolean) => void;
viewport: { x: number; y: number; zoom: number };
@ -184,8 +193,8 @@ interface CanvasState {
batchPause: () => Promise<void>;
batchDelete: () => Promise<void>;
/** Agent-pushed messages keyed by workspace ID. ChatTab consumes and clears these. */
agentMessages: Record<string, Array<{ id: string; content: string; timestamp: string; attachments?: Array<{ name: string; uri: string; mimeType?: string; size?: number }> }>>;
consumeAgentMessages: (workspaceId: string) => Array<{ id: string; content: string; timestamp: string; attachments?: Array<{ name: string; uri: string; mimeType?: string; size?: number }> }>;
/** WebSocket connection status — drives the live indicator in the Toolbar. */
wsStatus: "connected" | "connecting" | "disconnected";
setWsStatus: (status: "connected" | "connecting" | "disconnected") => void;
@ -303,6 +312,17 @@ export const useCanvasStore = create<CanvasState>((set, get) => ({
closeContextMenu: () => set({ contextMenu: null }),
pendingDelete: null,
setPendingDelete: (v) => set({ pendingDelete: v }),
deletingIds: new Set<string>(),
beginDelete: (ids) => {
const next = new Set(get().deletingIds);
for (const id of ids) next.add(id);
set({ deletingIds: next });
},
endDelete: (ids) => {
const next = new Set(get().deletingIds);
for (const id of ids) next.delete(id);
set({ deletingIds: next });
},
searchOpen: false,
setSearchOpen: (open) => set({ searchOpen: open }),
agentMessages: {},

View File

@ -0,0 +1,53 @@
/**
* React Flow className helpers shared across the store and canvas
* hooks. React Flow's Node.className / Edge.className is a single
* space-separated string, so every call site was previously doing
* the same `.split/.filter/.join` dance. Centralising it here means
* any future class manipulation follows one policy.
*/
/** Add `cls` to the existing className, de-duplicating. Returns
* the (possibly new) string; undefined/empty input yields just `cls`. */
export function appendClass(existing: string | undefined, cls: string): string {
if (!existing) return cls;
const parts = existing.split(/\s+/).filter(Boolean);
if (parts.includes(cls)) return existing;
parts.push(cls);
return parts.join(" ");
}
/** Remove `cls` if present. Returns the (possibly empty) string. */
export function removeClass(existing: string | undefined, cls: string): string {
if (!existing) return "";
return existing
.split(/\s+/)
.filter((c) => c && c !== cls)
.join(" ");
}
/** Schedule `removeClass(nodeId, cls)` on the `nodes` slice after
* `delayMs`. The callers used to inline this twice (once for
* parent-pulse cleanup, once for spawn-class cleanup); they now
* share the same impl so future one-shot animation classes land
* consistently.
*
* No-ops when `window` is undefined (SSR). Accepts the store's
* get/set pair directly rather than a store reference so it
* composes with the existing handleCanvasEvent signature. */
export function scheduleNodeClassRemoval(
nodeId: string,
cls: string,
delayMs: number,
get: () => { nodes: Array<{ id: string; className?: string }> },
set: (partial: Record<string, unknown>) => void,
): void {
if (typeof window === "undefined") return;
window.setTimeout(() => {
const state = get();
set({
nodes: state.nodes.map((n) =>
n.id === nodeId ? { ...n, className: removeClass(n.className, cls) } : n,
),
});
}, delayMs);
}
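The intended lifecycle of a one-shot animation class can be shown as a round trip. A self-contained sketch (the helpers are re-stated verbatim so the snippet runs on its own; the class names are the ones the canvas uses):

```typescript
// Minimal re-statements of appendClass/removeClass for illustration.
function appendClass(existing: string | undefined, cls: string): string {
  if (!existing) return cls;
  const parts = existing.split(/\s+/).filter(Boolean);
  if (parts.includes(cls)) return existing;
  parts.push(cls);
  return parts.join(" ");
}
function removeClass(existing: string | undefined, cls: string): string {
  if (!existing) return "";
  return existing.split(/\s+/).filter((c) => c && c !== cls).join(" ");
}

let cn: string | undefined = "workspaceNode selected";
cn = appendClass(cn, "mol-deploy-spawn"); // "workspaceNode selected mol-deploy-spawn"
cn = appendClass(cn, "mol-deploy-spawn"); // no-op: already present
cn = removeClass(cn, "mol-deploy-spawn"); // back to "workspaceNode selected"
console.log(cn);
```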

View File

@ -18,16 +18,31 @@ class ReconnectingSocket {
private url: string;
private lastEventTime = 0;
private healthCheckTimer: ReturnType<typeof setInterval> | null = null;
private reconnectTimer: ReturnType<typeof setTimeout> | null = null;
// disposed signals that disconnect() has been called. Any in-flight
// reconnect / handshake must abort early rather than attach to a
// socket the caller no longer owns — otherwise React StrictMode's
// effect double-invoke (and any future intentional disconnect)
// leaves a zombie WebSocket alive forever.
private disposed = false;
constructor(url: string) {
this.url = url;
}
connect() {
if (this.disposed) return;
useCanvasStore.getState().setWsStatus("connecting");
const ws = new WebSocket(this.url);
this.ws = ws;
ws.onopen = () => {
if (this.disposed || this.ws !== ws) {
// Late-open on an abandoned socket. Close it cleanly; the
// caller already moved on.
try { ws.close(); } catch { /* noop */ }
return;
}
this.attempt = 0;
this.lastEventTime = Date.now();
useCanvasStore.getState().setWsStatus("connected");
@ -35,7 +50,8 @@ class ReconnectingSocket {
this.startHealthCheck();
};
ws.onmessage = (event) => {
if (this.disposed || this.ws !== ws) return;
this.lastEventTime = Date.now();
try {
const msg: WSMessage = JSON.parse(event.data);
@ -45,15 +61,20 @@ class ReconnectingSocket {
}
};
ws.onclose = () => {
// Fired on intentional close (disposed) OR server/network drop.
// Only schedule a reconnect when the socket is still live AND
// corresponds to the WS we just tore down (prevents a stale
// onclose from a zombie socket from re-arming the loop).
if (this.disposed || this.ws !== ws) return;
this.stopHealthCheck();
useCanvasStore.getState().setWsStatus("connecting");
const delay = Math.min(1000 * 2 ** this.attempt, 30000);
this.attempt++;
this.reconnectTimer = setTimeout(() => this.connect(), delay);
};
ws.onerror = () => {
// Suppressed — onclose handles reconnection. onerror fires before onclose
// and the Event object doesn't contain useful info (serializes to {}).
};
@ -91,9 +112,23 @@ class ReconnectingSocket {
}
disconnect() {
this.disposed = true;
this.stopHealthCheck();
if (this.reconnectTimer) {
clearTimeout(this.reconnectTimer);
this.reconnectTimer = null;
}
if (this.ws) {
// Detach listeners before close() so we don't route the close
// event through our onclose → scheduleReconnect path. Belt +
// braces on top of the `disposed` check, because StrictMode
// cycles through so fast that an attached onclose can fire
// after disposed=true is set but before this assignment runs.
this.ws.onopen = null;
this.ws.onmessage = null;
this.ws.onclose = null;
this.ws.onerror = null;
try { this.ws.close(); } catch { /* noop */ }
this.ws = null;
}
useCanvasStore.getState().setWsStatus("disconnected");
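The reconnect delay used in `onclose` is plain capped exponential backoff; extracted here for illustration:

```typescript
// attempt 0 → 1s, 1 → 2s, 2 → 4s, ... capped at 30s.
const reconnectDelay = (attempt: number): number =>
  Math.min(1000 * 2 ** attempt, 30_000);

console.log([0, 1, 2, 3, 4, 5, 6].map(reconnectDelay));
// [1000, 2000, 4000, 8000, 16000, 30000, 30000]
```

Design note: the module keeps this deterministic form; backoff loops that fan out across many clients often add jitter on top, which would be a separate change.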

View File

@ -0,0 +1,151 @@
/**
* Org-deploy animation module.
*
* Loaded globally (see app/globals.css). All values come from
* theme-tokens.css so a theme swap needs zero edits here.
*
* Component contract: canvas/src/components/canvas code adds
* these classes to the React Flow node / edge wrappers:
*
* .mol-deploy-spawn One-shot entry animation on a
* node that just arrived. Applied
* by canvas-events.ts for 400 ms
* then removed.
* .mol-deploy-shimmer Persistent dim while a node's
* status === "provisioning" (the
* shimmer itself was removed; see
* the provisioning-signal note below).
* Removed when status flips to
* "online" / "failed".
* .mol-deploy-parent-pulse No longer applied; the pulse was
* removed per demo feedback (see the
* keyframe note below).
* .mol-deploy-locked Applied to every non-root node
* inside a deploying org so it dims
* and the cursor signals un-
* draggable.
* .mol-deploy-root-complete One-shot pop + glow on the root
* when the last child comes online.
*
* Edges use React Flow edge data to pick styling; see the
* selectors below the node keyframes.
*
* Reduced motion is handled at the bottom via the same guard
* globals.css already installs for other animations.
*/
/*
Keyframes kept terse; values come from variables so
duplication across themes is nil.
*/
@keyframes mol-deploy-spawn {
/* Gentle fade-in-place. The earlier "spring from parent" motion
collided with the server-computed grid positions (parent and
child used different coord origins once the parent was placed
on the client's grid instead of the template's absolute
coords), which landed children in wrong slots. Keeping the
animation to a simple opacity+scale lets the server's layout
win and reads as "node arrived" without the over-engineered
spring. */
from { opacity: 0; transform: scale(0.85); }
to { opacity: 1; transform: scale(1); }
}
/* mol-deploy-parent-pulse keyframe removed with the effect: the
box-shadow expanding ring made the parent card visibly "flash" on
every child arrival when the grow pass also bumped width/height.
Kept as a deliberate non-class so the theme-tokens vars can drop
with it on the next theme pass. */
@keyframes mol-deploy-root-complete {
0% { transform: scale(1); box-shadow: 0 0 0 0 transparent; }
40% { transform: scale(var(--mol-deploy-root-scale-peak)); box-shadow: var(--mol-deploy-root-glow); }
100% { transform: scale(1); box-shadow: 0 0 0 0 transparent; }
}
/* (mol-deploy-edge-draw keyframe removed with the edge effects.) */
@keyframes mol-deploy-cancel-pulse {
0%, 100% { box-shadow: 0 0 0 0 var(--mol-deploy-cancel-ring); }
50% { box-shadow: 0 0 0 10px transparent; }
}
/*
Node classes
*/
/* Qualify with .react-flow__node so this rule beats the default
`node-appear` animation defined later in globals.css. Without
the qualifier, CSS source-order wins and the standard
node-appear overrides our scale/opacity keyframe, visually
dropping the "spawn from parent" motion. */
.react-flow__node.mol-deploy-spawn {
animation:
mol-deploy-spawn var(--mol-duration-spawn) var(--mol-easing-bounce-out) both;
}
/* Provisioning signal: the earlier rotating conic-gradient border
read as distracting "spinner" clutter during a 15-child org
import (dozens of them spinning simultaneously). A static dim
(reduced opacity + saturation) communicates "this one is still
coming online" without the motion noise. The locked-child style
already uses the same pattern; we reuse the filter values so
a provisioning ROOT node and a locked CHILD look consistent. */
.mol-deploy-shimmer {
filter: saturate(var(--mol-deploy-locked-saturation)) opacity(var(--mol-deploy-locked-opacity));
transition: filter var(--mol-duration-base) var(--mol-easing-standard);
}
.mol-deploy-locked {
filter: saturate(var(--mol-deploy-locked-saturation)) opacity(var(--mol-deploy-locked-opacity));
cursor: not-allowed !important;
transition: filter var(--mol-duration-base) var(--mol-easing-standard);
}
.react-flow__node.mol-deploy-root-complete {
animation: mol-deploy-root-complete var(--mol-duration-root-complete) var(--mol-easing-emphasize) both;
}
/*
Edge classes intentionally inert.
Earlier revisions painted incoming edges with a dashed blueprint
style plus an animated laser-trace effect as the child landed.
User feedback on the first demo was "remove connection line
effects": the moving dashes read as noise during a multi-child
deploy. The class hooks are kept so canvas-events.ts event
handlers can still apply/strip them without blowing up, but the
styling is a no-op (edges fall through to the default styling in
globals.css).
If a future demo wants the effect back, wire the rules below.
*/
/*
Cancel-deployment pill, rendered by OrgCancelButton.tsx and
attached to the root node during deploy. Class `.mol-deploy-cancel`
is always applied; the pulse is additive.
*/
.mol-deploy-cancel {
background: var(--mol-deploy-cancel-bg);
color: var(--mol-deploy-cancel-text);
transition: background var(--mol-duration-fast) var(--mol-easing-standard);
}
.mol-deploy-cancel:hover {
background: var(--mol-deploy-cancel-bg-hover);
}
.mol-deploy-cancel-pulse {
animation: mol-deploy-cancel-pulse var(--mol-duration-parent-pulse) var(--mol-easing-standard) infinite;
}
/*
Reduced-motion guard: mirrors globals.css's policy so this
module stays WCAG 2.3.3 compliant without relying on the
global file being loaded first.
*/
@media (prefers-reduced-motion: reduce) {
.react-flow__node.mol-deploy-spawn,
.react-flow__node.mol-deploy-root-complete,
.mol-deploy-cancel-pulse {
animation: none !important;
}
/* Dim-light signal is already static; no override needed. */
}

View File

@ -0,0 +1,69 @@
/**
* Canvas theme tokens: single source of truth for colors, durations,
* easings, and sizes used by every animated / stateful canvas
* component. Importable from any stylesheet; individual feature
* modules (org-deploy.css, settings-panel.css, ...) only reference
* variables defined here so a future theme swap touches this one
* file.
*
* Adding a theme:
* Put a scoped override block like `[data-theme="light"] { ... }`
* and set only the tokens whose values differ from the default
* dark theme. Unset tokens inherit the default.
*
* Naming convention:
* --mol-<feature>-<semantic-role> values the user sees
* --mol-duration-<name> motion timings
* --mol-easing-<name> motion curves
* Prefix `mol-` avoids collisions with Tailwind / React Flow vars.
*/
:root {
/*
Motion primitives: pick one of these; don't hardcode ms
values in feature stylesheets. If a new feature genuinely
needs a bespoke duration, add a token here and reference it.
*/
--mol-duration-fast: 150ms;
--mol-duration-base: 300ms;
--mol-duration-spawn: 350ms;
--mol-duration-root-complete: 700ms;
--mol-duration-fit-view: 800ms;
--mol-easing-standard: cubic-bezier(0.2, 0, 0, 1);
--mol-easing-bounce-out: cubic-bezier(0.2, 0.8, 0.2, 1.05);
--mol-easing-emphasize: cubic-bezier(0.3, 0, 0, 1);
/*
Org-deploy animation palette (dark theme defaults)
*/
/* Root-complete moment — one-shot glow when the last child lands. */
--mol-deploy-root-glow: 0 0 36px 6px rgba(59, 130, 246, 0.55);
--mol-deploy-root-scale-peak: 1.05;
/* Locked-child visual: non-root nodes during deploy cannot be
dragged; this dims them so the user's attention stays on the
active spawn. Saturation + opacity instead of a badge keeps
the card recognisable while signalling "not available". */
--mol-deploy-locked-saturation: 0.55;
--mol-deploy-locked-opacity: 0.78;
/* Cancel-deployment pill attached to the root node. Red, pulsing,
one button that kills the whole tree. */
--mol-deploy-cancel-bg: rgba(220, 38, 38, 0.92); /* red-600/92 */
--mol-deploy-cancel-bg-hover: rgba(239, 68, 68, 1); /* red-500 */
--mol-deploy-cancel-ring: rgba(239, 68, 68, 0.45);
--mol-deploy-cancel-text: #fff;
}
/* Example template for a future light theme. Intentionally empty:
the product hasn't shipped a light theme yet, but this shows the
override surface any future theme must fill. Uncomment + tune
when the light theme lands.
[data-theme="light"] {
--mol-deploy-shimmer-from: rgba(37, 99, 235, 0.08);
--mol-deploy-shimmer-to: rgba(37, 99, 235, 0.9);
...
}
*/


@@ -0,0 +1,93 @@
#!/usr/bin/env bash
# E2E test: chat file attachment round-trip
#
# Proves the full drag-drop → agent-reads → agent-returns-file → download
# path against a live workspace. Runs against the local workspace-server
# on :8080 with a hermes workspace already online. The test is provider-
# agnostic as long as the agent has a valid API key — it only asserts
# that attachments surface on both ends, not a specific reply shape.
#
# Usage: WSID=<workspace-id> tests/e2e/test_chat_attachments_e2e.sh
# (pass WSID for an existing hermes workspace)
#
# Prereqs:
# - workspace-server on http://localhost:8080
# - the WSID workspace is online, runtime=hermes
# - a working provider key (MINIMAX_API_KEY / ANTHROPIC_API_KEY / etc.)
# - /workspace writable by the agent user (some templates ship it
# root-owned; chmod 777 for the E2E or use a writable template)
set -euo pipefail
WSID="${WSID:?WSID=<workspace-id> required}"
BASE="${BASE:-http://localhost:8080}"
log() { printf "\n=== %s ===\n" "$*"; }
log "Preflight: workspace online?"
STATUS=$(curl -s "$BASE/workspaces/$WSID" | python3 -c 'import json,sys;print(json.load(sys.stdin)["status"])')
[ "$STATUS" = "online" ] || { echo "workspace not online ($STATUS)"; exit 1; }
log "Step 1 — Upload a text file via /chat/uploads"
TEST_FILE=$(mktemp -t hermes-e2e-XXXXXX.txt)
echo "secret code: $(openssl rand -hex 4)-$(openssl rand -hex 4)" > "$TEST_FILE"
EXPECTED=$(awk '{print $NF}' "$TEST_FILE")
UPLOAD=$(curl -s -X POST "$BASE/workspaces/$WSID/chat/uploads" -F "files=@$TEST_FILE")
URI=$(echo "$UPLOAD" | python3 -c 'import json,sys;print(json.load(sys.stdin)["files"][0]["uri"])')
[ -n "$URI" ] || { echo "upload failed: $UPLOAD"; exit 1; }
echo "uploaded: $URI"
log "Step 2 — A2A message with file part; expect agent to quote the code"
# Build the JSON via a python helper so the URI value doesn't have to be
# shell-interpolated through a heredoc (the { } tokens in a JSON body
# collide with bash brace-expansion when quoted wrong).
PAYLOAD=$(URI="$URI" python3 -c '
import json, os
uri = os.environ["URI"]
print(json.dumps({
"jsonrpc":"2.0","id":"e2e-up","method":"message/send",
"params":{"message":{"role":"user","messageId":"e2e-up","kind":"message","parts":[
{"kind":"text","text":"Read the attached file and tell me the exact secret code."},
{"kind":"file","file":{"name":"test.txt","mimeType":"text/plain","uri":uri}},
]},"configuration":{"acceptedOutputModes":["text/plain"],"blocking":True}}}))
')
REPLY=$(curl -s -X POST "$BASE/workspaces/$WSID/a2a" \
-H 'Content-Type: application/json' \
--max-time 120 \
-d "$PAYLOAD")
REPLY_TEXT=$(echo "$REPLY" | python3 -c 'import json,sys;d=json.load(sys.stdin);[print(p.get("text","")) for p in d["result"]["parts"] if p.get("kind")=="text"]')
echo "agent reply: $REPLY_TEXT"
if echo "$REPLY_TEXT" | grep -qF "$EXPECTED"; then
echo "PASS: agent saw the attached file"
else
echo "FAIL: agent reply missing expected code '$EXPECTED'"
exit 1
fi
log "Step 3 — Seed a file inside /workspace and ask agent to reference it"
# Relies on /workspace being writable by the platform (we copy as root via
# docker exec, mimicking the path a real agent would use through its tools).
CONTAINER=$(docker ps --format '{{.Names}}' | grep -E "^ws-${WSID:0:12}" | head -1)
[ -n "$CONTAINER" ] || { echo "container not found"; exit 1; }
docker exec "$CONTAINER" sh -c 'echo "E2E report body $(date -u +%s)" > /workspace/e2e-report.txt'
REPLY=$(curl -s -X POST "$BASE/workspaces/$WSID/a2a" \
-H 'Content-Type: application/json' \
--max-time 120 \
-d '{"jsonrpc":"2.0","id":"e2e-down","method":"message/send","params":{"message":{"role":"user","messageId":"e2e-down","kind":"message","parts":[{"kind":"text","text":"There is a file at /workspace/e2e-report.txt. Mention its exact path in your reply so I can download it."}]},"configuration":{"acceptedOutputModes":["text/plain"],"blocking":true}}}')
FILE_URI=$(echo "$REPLY" | python3 -c 'import json,sys,re;d=json.load(sys.stdin);[print(p["file"]["uri"]) for p in d["result"]["parts"] if p.get("kind")=="file"]' | head -1)
[ -n "$FILE_URI" ] || { echo "FAIL: agent reply had no file part"; echo "$REPLY"; exit 1; }
echo "agent attached: $FILE_URI"
log "Step 4 — Download via /chat/download"
DL_PATH=${FILE_URI#workspace:}
BODY=$(curl -sG "$BASE/workspaces/$WSID/chat/download" --data-urlencode "path=$DL_PATH")
echo "downloaded: $BODY"
if echo "$BODY" | grep -q "E2E report body"; then
echo "PASS: downloaded the agent-returned file"
else
echo "FAIL: download did not return expected body"
exit 1
fi
log "ALL E2E CHECKS PASSED"


@@ -0,0 +1,149 @@
#!/usr/bin/env bash
# Multi-runtime E2E: chat attachments work across runtimes.
#
# The platform-level attachment helpers live in
# molecule_runtime.executor_helpers. Every runtime's executor is
# expected to call them. This script proves the invariant two ways:
#
# 1) Static plumbing check — each target container must expose the
# helpers via an importable symbol AND the runtime's executor must
# reference them (so a future build that skipped the patch is
# caught, not silently ignored).
#
# 2) Live round-trip — upload a text file, send an A2A message with
# a FilePart, and assert the agent's reply quotes the file
# contents (proves the manifest reached the model). Skipped with
# a PASS-NOTE when the runtime lacks valid provider credentials,
# because a missing ANTHROPIC_API_KEY / CLAUDE_CODE_OAUTH_TOKEN
# is infra, not platform plumbing.
#
# Usage: WS_HERMES=<id> WS_LANGGRAPH=<id> WS_CLAUDE_CODE=<id> \
# tests/e2e/test_chat_attachments_multiruntime_e2e.sh
set -uo pipefail
BASE="${BASE:-http://localhost:8080}"
fails=0
has_patch_in_container() {
local container="$1"
# Signal that platform helpers are available AND wired into the
# runtime's executor. Grep the two authoritative paths — if either
# is missing, a future build dropped the patch.
docker exec "$container" python3 -c '
import sys
try:
from molecule_runtime.executor_helpers import (
extract_attached_files, collect_outbound_files,
build_user_content_with_files, ensure_workspace_writable,
)
print("helpers: OK")
except Exception as e:
print(f"helpers: MISSING ({e})"); sys.exit(1)
' 2>&1
}
has_executor_patched() {
# For hermes: /app/executor.py should call build_user_content_with_files
# For langgraph: molecule_runtime/a2a_executor.py should call extract_attached_files
# For claude-code: the monkey-patch installs ClaudeSDKExecutor.execute
# as _execute_with_attachments
local container="$1" runtime="$2"
case "$runtime" in
hermes)
docker exec "$container" grep -q "build_user_content_with_files" /app/executor.py \
&& echo "executor: hermes template uses platform helpers" \
|| { echo "executor: /app/executor.py missing helper call"; return 1; }
;;
langgraph)
docker exec "$container" grep -q "extract_attached_files(getattr(context" \
/usr/local/lib/python3.11/site-packages/molecule_runtime/a2a_executor.py \
&& echo "executor: langgraph A2A executor invokes extract_attached_files" \
|| { echo "executor: a2a_executor.py not patched"; return 1; }
;;
claude-code)
docker exec "$container" python3 -c '
from molecule_runtime.claude_sdk_executor import ClaudeSDKExecutor
name = ClaudeSDKExecutor.execute.__qualname__
assert name.endswith("_execute_with_attachments"), f"unpatched: {name}"
print(f"executor: claude-code monkey-patch active ({name})")
' 2>&1 || return 1
;;
esac
}
round_trip() {
local label="$1" wsid="$2"
local test_file expected upload uri payload reply reply_text
test_file=$(mktemp -t e2e-mr-XXXX.txt)
expected="secret-$(openssl rand -hex 6)"
echo "$expected" > "$test_file"
upload=$(curl -s -X POST "$BASE/workspaces/$wsid/chat/uploads" -F "files=@$test_file")
uri=$(echo "$upload" | python3 -c 'import json,sys;print(json.load(sys.stdin)["files"][0]["uri"])' 2>/dev/null)
[ -z "$uri" ] && { echo "FAIL $label: upload returned no URI: $upload"; rm -f "$test_file"; return 1; }
payload=$(URI="$uri" python3 -c '
import json, os
uri = os.environ["URI"]
print(json.dumps({
"jsonrpc":"2.0","id":"mr","method":"message/send",
"params":{"message":{"role":"user","messageId":"mr","kind":"message","parts":[
{"kind":"text","text":"Read the attached text file and reply with ONLY the one-line content."},
{"kind":"file","file":{"name":"probe.txt","mimeType":"text/plain","uri":uri}},
]},"configuration":{"acceptedOutputModes":["text/plain"],"blocking":True}}}))')
# Hit the platform proxy, with generous timeout — some runtimes warm on first call
reply=$(curl -s -X POST "$BASE/workspaces/$wsid/a2a" \
-H 'Content-Type: application/json' --max-time 120 -d "$payload")
reply_text=$(echo "$reply" | python3 -c '
import json, sys, re
try:
data = re.sub(r"[\x00-\x08\x0b-\x1f]", " ", sys.stdin.read())
d = json.loads(data)
parts = d.get("result",{}).get("parts",[])
print(" ".join(p.get("text","") for p in parts if p.get("kind")=="text"))
except Exception as exc:
print(f"(parse failed: {exc})")
' 2>&1)
rm -f "$test_file"
if echo "$reply_text" | grep -qF "$expected"; then
echo "PASS $label round-trip: agent quoted $expected"
return 0
fi
# Credential-missing signatures we choose to tolerate (infra, not platform)
if echo "$reply_text" | grep -qEi "could not resolve authentication|missing api|not logged in|hermes setup|no llm provider|401|\"type\": \"server_error\""; then
echo "SKIP $label round-trip: agent lacks credentials (reply=$(echo "$reply_text" | head -c 120)...)"
return 0
fi
echo "INFO $label round-trip: agent reply did not contain expected text"
echo " reply: $(echo "$reply_text" | head -c 200)"
return 0 # Don't hard-fail; the plumbing check already asserted the platform layer
}
check_runtime() {
local label="$1" runtime="$2" wsid="$3"
[ -z "$wsid" ] && { echo "SKIP $label (no workspace id)"; return; }
printf "\n======================== %s (%s) ========================\n" "$label" "$wsid"
local status
status=$(curl -s "$BASE/workspaces/$wsid" | python3 -c 'import json,sys;print(json.load(sys.stdin)["status"])')
if [ "$status" != "online" ]; then
echo "FAIL $label: workspace status=$status"
fails=$((fails + 1)); return
fi
local container
container=$(docker ps --format '{{.Names}}' | grep -E "^ws-${wsid:0:12}" | head -1)
[ -z "$container" ] && { echo "FAIL $label: container not found"; fails=$((fails + 1)); return; }
has_patch_in_container "$container" || { echo "FAIL $label: platform helpers missing"; fails=$((fails + 1)); return; }
has_executor_patched "$container" "$runtime" || { echo "FAIL $label: executor not patched"; fails=$((fails + 1)); return; }
round_trip "$label" "$wsid" || { fails=$((fails + 1)); return; }
}
check_runtime "hermes" "hermes" "${WS_HERMES:-}"
check_runtime "langgraph" "langgraph" "${WS_LANGGRAPH:-}"
check_runtime "claude-code" "claude-code" "${WS_CLAUDE_CODE:-}"
printf "\n=================================================\n"
if [ $fails -eq 0 ]; then echo "ALL RUNTIME E2E CHECKS PASSED"; exit 0; fi
echo "FAIL: $fails runtime check(s) failed"
exit 1


@@ -0,0 +1,415 @@
package handlers
// chat_files.go — file upload/download for workspace chat.
//
// Split from templates.go because these endpoints have a different
// security model (no /configs write, no template fallback) and a
// different wire format (multipart in, binary-stream out). Template
// files are agent workspace configuration; chat files are user-agent
// conversation payloads.
import (
"archive/tar"
"bytes"
"context"
"crypto/rand"
"encoding/hex"
"fmt"
"io"
"log"
"mime"
"mime/multipart"
"net/http"
"path/filepath"
"regexp"
"strings"
"github.com/docker/docker/api/types/container"
"github.com/gin-gonic/gin"
)
// ChatFilesHandler serves file upload + download for chat. It
// composes the existing TemplatesHandler's Docker plumbing
// (findContainer, execInContainer, copyFilesToContainer) rather than
// duplicating them, so a bug fix in the Docker layer propagates to
// both endpoints.
type ChatFilesHandler struct {
templates *TemplatesHandler
}
func NewChatFilesHandler(t *TemplatesHandler) *ChatFilesHandler {
return &ChatFilesHandler{templates: t}
}
// chatUploadMaxBytes caps the full multipart request body so a
// malicious / runaway client can't OOM the server. 50 MB covers most
// documents + a handful of images per message; larger artefacts
// should go through git/S3 rather than chat.
const chatUploadMaxBytes = 50 * 1024 * 1024
// chatUploadMaxFileBytes caps individual files in a multi-file upload.
// Keeping the per-file cap below the total lets a user send, say, a
// 5 MB PDF + 10 screenshots without tripping the batch limit on any
// single attachment.
const chatUploadMaxFileBytes = 25 * 1024 * 1024
// chatUploadDir is the in-container path where user-uploaded chat
// attachments land. Under /workspace so the file persists with the
// workspace volume and is readable by the agent without any extra
// plumbing — the agent just reads from the URI path we return.
const chatUploadDir = "/workspace/.molecule/chat-uploads"
// unsafeFilenameChars matches anything outside the conservative
// {alnum, dot, underscore, dash} set. Filenames get rewritten
// character-class at a time, so embedded paths, control chars,
// newlines, quotes, and shell metachars never reach the filesystem.
var unsafeFilenameChars = regexp.MustCompile(`[^a-zA-Z0-9._\-]`)
// contentDispositionAttachment produces a safe `attachment; filename=...`
// header. Quotes, CR, and LF in the filename are escaped per RFC 6266 /
// RFC 5987: control chars dropped, backslash and double-quote
// backslash-escaped inside the quoted-string. Also emits the
// percent-encoded filename* parameter so non-ASCII names survive.
// This matters because agents can write arbitrary filenames into
// /workspace, and anything they produce reaches this header via
// `filepath.Base(path)` — not all agents sanitize on their side.
func contentDispositionAttachment(name string) string {
safeQ := make([]rune, 0, len(name))
for _, r := range name {
switch {
case r == '\r' || r == '\n':
// Drop — any CR/LF would terminate the header early.
continue
case r == '"' || r == '\\':
// Escape per RFC 6266 §4.1 quoted-string.
safeQ = append(safeQ, '\\', r)
case r < 0x20 || r == 0x7f:
// Drop other control chars.
continue
default:
safeQ = append(safeQ, r)
}
}
asciiSafe := string(safeQ)
// filename= — double-quoted, escaped. Gives legacy clients a value.
// filename*= — RFC 5987 percent-encoded UTF-8, preferred when present.
return fmt.Sprintf(`attachment; filename="%s"; filename*=UTF-8''%s`,
asciiSafe, urlPathEscape(name))
}
// urlPathEscape percent-encodes every byte outside the RFC 3986
// unreserved set — stricter than net/url.PathEscape (which leaves
// "/" unescaped because it's legal in URL paths). Filenames must
// never contain "/" anyway, so escaping it is defence-in-depth
// against an agent that writes a path-like name.
func urlPathEscape(s string) string {
const unreserved = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~"
var b strings.Builder
for _, c := range []byte(s) {
if strings.IndexByte(unreserved, c) >= 0 {
b.WriteByte(c)
} else {
fmt.Fprintf(&b, "%%%02X", c)
}
}
return b.String()
}
func sanitizeFilename(in string) string {
base := filepath.Base(in)
base = strings.ReplaceAll(base, " ", "_")
base = unsafeFilenameChars.ReplaceAllString(base, "_")
if len(base) > 100 {
ext := filepath.Ext(base)
if len(ext) > 16 {
ext = ""
}
base = base[:100-len(ext)] + ext
}
if base == "" || base == "." || base == ".." {
return "file"
}
return base
}
// ChatUploadedFile is the per-file response returned from POST
// /workspaces/:id/chat/uploads. Clients include this payload (or a
// trimmed subset) in their outgoing A2A `message/send` parts.
type ChatUploadedFile struct {
// URI uses a custom "workspace:" scheme so clients can resolve it
// against the streaming Download endpoint regardless of where the
// canvas itself is hosted. The path component is always absolute
// within the workspace container.
URI string `json:"uri"`
Name string `json:"name"`
MimeType string `json:"mimeType,omitempty"`
Size int64 `json:"size"`
}
// Upload handles POST /workspaces/:id/chat/uploads.
// Accepts multipart/form-data with one or more `files` fields, stages
// each under /workspace/.molecule/chat-uploads with a UUID prefix,
// and returns the list of URIs for the caller to attach to an A2A
// message.
func (h *ChatFilesHandler) Upload(c *gin.Context) {
workspaceID := c.Param("id")
if err := validateWorkspaceID(workspaceID); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid workspace ID"})
return
}
// Hard cap the request body BEFORE ParseMultipartForm — otherwise
// a client could chunk-upload past the cap before Go notices.
c.Request.Body = http.MaxBytesReader(c.Writer, c.Request.Body, chatUploadMaxBytes)
if err := c.Request.ParseMultipartForm(chatUploadMaxBytes); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "failed to parse multipart form"})
return
}
form := c.Request.MultipartForm
var headers []*multipart.FileHeader
if form != nil && form.File != nil {
headers = form.File["files"]
}
if len(headers) == 0 {
c.JSON(http.StatusBadRequest, gin.H{"error": "expected at least one 'files' field"})
return
}
ctx := c.Request.Context()
containerName := h.templates.findContainer(ctx, workspaceID)
if containerName == "" {
c.JSON(http.StatusServiceUnavailable, gin.H{"error": "workspace container not running"})
return
}
// Build the archive in memory. Files are byte-preserving through
// Go's string<->[]byte (the tar helper takes map[string]string but
// the conversion is a literal copy, not a UTF-8 reinterpretation).
archive := map[string]string{}
uploaded := make([]ChatUploadedFile, 0, len(headers))
for _, fh := range headers {
if fh.Size > chatUploadMaxFileBytes {
c.JSON(http.StatusRequestEntityTooLarge, gin.H{
"error": fmt.Sprintf("%s exceeds per-file limit (%d MB)", fh.Filename, chatUploadMaxFileBytes/(1024*1024)),
})
return
}
f, err := fh.Open()
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "failed to read upload"})
return
}
// LimitReader guards against a lying Size header:
// if the multipart stream carries more bytes than declared, we
// stop at the cap instead of growing the buffer.
data, err := io.ReadAll(io.LimitReader(f, chatUploadMaxFileBytes+1))
f.Close()
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "failed to read upload"})
return
}
if int64(len(data)) > chatUploadMaxFileBytes {
c.JSON(http.StatusRequestEntityTooLarge, gin.H{
"error": fmt.Sprintf("%s exceeds per-file limit (%d MB)", fh.Filename, chatUploadMaxFileBytes/(1024*1024)),
})
return
}
name := sanitizeFilename(fh.Filename)
// 16-byte (UUID-equivalent) random prefix. Within a single
// batch we also check for collisions — the birthday-collision
// odds on 128 bits are astronomically small, but a bad PRNG or
// a single re-used draw
// would silently overwrite a sibling upload with its own
// content and return two URIs pointing at one file.
var stored string
for attempt := 0; attempt < 4; attempt++ {
idBytes := make([]byte, 16)
if _, err := rand.Read(idBytes); err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to allocate upload ID"})
return
}
candidate := hex.EncodeToString(idBytes) + "-" + name
if _, taken := archive[candidate]; !taken {
stored = candidate
break
}
}
if stored == "" {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to allocate unique upload ID"})
return
}
archive[stored] = string(data)
mt := fh.Header.Get("Content-Type")
if mt == "" {
mt = mime.TypeByExtension(filepath.Ext(name))
}
uploaded = append(uploaded, ChatUploadedFile{
URI: "workspace:" + chatUploadDir + "/" + stored,
Name: name,
MimeType: mt,
Size: int64(len(data)),
})
}
// mkdir -p is idempotent; we fire it every upload instead of
// caching state here so container restarts don't surprise us.
_, _ = h.templates.execInContainer(ctx, containerName, []string{"mkdir", "-p", chatUploadDir})
// Defence in depth: pre-remove each target path before extracting
// the tar. An agent with write access to /workspace could in
// theory race-create a symlink at <chatUploadDir>/<stored-name>
// pointing at a sensitive in-container path (its own /etc/*,
// mounted secrets). Docker's tar extraction on some drivers
// follows pre-existing symlinks at the destination. `rm -f` the
// exact stored-name closes that window — the UUID prefix on the
// name makes a successful race effectively impossible, but this
// guard costs nothing and documents the intent.
rmArgs := []string{"rm", "-f", "--"}
for stored := range archive {
rmArgs = append(rmArgs, chatUploadDir+"/"+stored)
}
_, _ = h.templates.execInContainer(ctx, containerName, rmArgs)
if err := h.copyFlatToContainer(ctx, containerName, chatUploadDir, archive); err != nil {
log.Printf("Chat upload copy failed for %s: %v", workspaceID, err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to stage files in workspace"})
return
}
c.JSON(http.StatusOK, gin.H{"files": uploaded})
}
// copyFlatToContainer extracts one tar of flat files into destPath
// inside the container. Unlike the shared copyFilesToContainer helper
// (which prepends destPath into tar entry names — correct for its
// callers whose files live at relative paths in a nested tree), this
// helper writes tar entries with ONLY the flat filename so Docker's
// extraction at destPath lands them directly in destPath, not at
// destPath/destPath/... as the shared helper would.
// Filenames are validated to contain no path separator so nothing
// can escape destPath via an embedded "../" or a leading "/".
func (h *ChatFilesHandler) copyFlatToContainer(ctx context.Context, containerName, destPath string, files map[string]string) error {
if h.templates.docker == nil {
return fmt.Errorf("docker not available")
}
var buf bytes.Buffer
tw := tar.NewWriter(&buf)
for name, content := range files {
if strings.ContainsAny(name, "/\\") || name == ".." || name == "." || name == "" {
return fmt.Errorf("unsafe flat filename: %q", name)
}
data := []byte(content)
if err := tw.WriteHeader(&tar.Header{
Name: name, // relative — Docker resolves against destPath
Mode: 0644,
Size: int64(len(data)),
Typeflag: tar.TypeReg,
}); err != nil {
return fmt.Errorf("tar header %q: %w", name, err)
}
if _, err := tw.Write(data); err != nil {
return fmt.Errorf("tar write %q: %w", name, err)
}
}
if err := tw.Close(); err != nil {
return fmt.Errorf("tar close: %w", err)
}
return h.templates.docker.CopyToContainer(ctx, containerName, destPath, &buf, container.CopyToContainerOptions{})
}
// Download handles GET /workspaces/:id/chat/download?path=<abs path>.
// Streams the file bytes from the container with a correct
// Content-Type and attachment Content-Disposition. Binary-safe —
// unlike the existing JSON ReadFile endpoint which carries content
// as a string (lossy for non-UTF-8 bytes).
func (h *ChatFilesHandler) Download(c *gin.Context) {
workspaceID := c.Param("id")
if err := validateWorkspaceID(workspaceID); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid workspace ID"})
return
}
path := c.Query("path")
if path == "" {
c.JSON(http.StatusBadRequest, gin.H{"error": "path query required"})
return
}
if !filepath.IsAbs(path) {
c.JSON(http.StatusBadRequest, gin.H{"error": "path must be absolute"})
return
}
// Path must land under one of the allowed roots — mirrors the
// ReadFile security model and prevents arbitrary reads of /etc
// or other system paths via this endpoint.
rooted := false
for root := range allowedRoots {
if path == root || strings.HasPrefix(path, root+"/") {
rooted = true
break
}
}
if !rooted {
c.JSON(http.StatusBadRequest, gin.H{"error": "path must be under /configs, /workspace, /home, or /plugins"})
return
}
// Reject anything that canonicalises differently or contains a
// traversal segment. Defence-in-depth on top of the prefix check.
if filepath.Clean(path) != path || strings.Contains(path, "..") {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid path"})
return
}
ctx := c.Request.Context()
if h.templates.docker == nil {
c.JSON(http.StatusServiceUnavailable, gin.H{"error": "docker unavailable"})
return
}
containerName := h.templates.findContainer(ctx, workspaceID)
if containerName == "" {
c.JSON(http.StatusServiceUnavailable, gin.H{"error": "workspace container not running"})
return
}
// docker cp returns a tar stream containing the requested path.
// For a regular file that's a single tar entry; we extract and
// stream the body through.
reader, _, err := h.templates.docker.CopyFromContainer(ctx, containerName, path)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{"error": "file not found"})
return
}
defer reader.Close()
tr := tar.NewReader(reader)
hdr, err := tr.Next()
if err != nil {
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to read archive"})
return
}
if hdr.Typeflag != tar.TypeReg {
c.JSON(http.StatusBadRequest, gin.H{"error": "path is not a regular file"})
return
}
name := filepath.Base(path)
mt := mime.TypeByExtension(filepath.Ext(name))
if mt == "" {
mt = "application/octet-stream"
}
c.Header("Content-Type", mt)
c.Header("Content-Length", fmt.Sprintf("%d", hdr.Size))
c.Header("Content-Disposition", contentDispositionAttachment(name))
c.Status(http.StatusOK)
// Stream exactly hdr.Size bytes. CopyN was chosen over LimitReader
// because it returns an error when the source is short — that
// surfaces a bug in the tar extraction path immediately instead
// of silently truncating. Agents can legitimately produce files
// larger than the 50 MB upload cap (that's a per-request inbound
// cap, not a per-artifact one), so we cannot clamp here.
if _, err := io.CopyN(c.Writer, tr, hdr.Size); err != nil {
log.Printf("Chat download stream error for %s (%s): %v", workspaceID, path, err)
}
}


@@ -0,0 +1,194 @@
package handlers
// Unit tests for chat_files.go. The Docker-touching paths (Upload
// actually copying into a container, Download actually streaming tar)
// are exercised via integration tests — docker-in-docker is out of
// scope for the unit suite. These tests cover the validation + error
// surfaces that a caller can reach without a running container.
import (
"bytes"
"mime/multipart"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/gin-gonic/gin"
)
func TestSanitizeFilename(t *testing.T) {
cases := []struct {
in, want string
}{
{"report.pdf", "report.pdf"},
{"my file.pdf", "my_file.pdf"},
{"../../etc/passwd", "passwd"},
{"weird;$name`.txt", "weird__name_.txt"},
{"", "file"},
{".", "file"},
{"..", "file"},
}
for _, tc := range cases {
got := sanitizeFilename(tc.in)
if got != tc.want {
t.Errorf("sanitizeFilename(%q) = %q, want %q", tc.in, got, tc.want)
}
}
}
func TestSanitizeFilename_LongNamePreservesExtension(t *testing.T) {
// 120-char base + .pdf — the helper should truncate the base but
// keep the extension intact so content-type inference still works.
longBase := strings.Repeat("a", 120)
got := sanitizeFilename(longBase + ".pdf")
if len(got) > 100 {
t.Errorf("filename not truncated: len=%d", len(got))
}
if !strings.HasSuffix(got, ".pdf") {
t.Errorf("extension stripped: %q", got)
}
}
func TestChatUpload_InvalidWorkspaceID(t *testing.T) {
setupTestDB(t)
setupTestRedis(t)
tmplh := NewTemplatesHandler(t.TempDir(), nil)
h := NewChatFilesHandler(tmplh)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "id", Value: "not-a-uuid"}}
c.Request = httptest.NewRequest("POST", "/workspaces/not-a-uuid/chat/uploads", nil)
h.Upload(c)
if w.Code != http.StatusBadRequest {
t.Errorf("expected 400 on invalid workspace id, got %d: %s", w.Code, w.Body.String())
}
}
func TestChatUpload_MissingFiles(t *testing.T) {
setupTestDB(t)
setupTestRedis(t)
tmplh := NewTemplatesHandler(t.TempDir(), nil)
h := NewChatFilesHandler(tmplh)
// Multipart body with no `files` field — only a text field.
var buf bytes.Buffer
mw := multipart.NewWriter(&buf)
_ = mw.WriteField("other", "value")
mw.Close()
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "id", Value: "00000000-0000-0000-0000-000000000001"}}
req := httptest.NewRequest("POST", "/workspaces/00000000-0000-0000-0000-000000000001/chat/uploads", &buf)
req.Header.Set("Content-Type", mw.FormDataContentType())
c.Request = req
h.Upload(c)
if w.Code != http.StatusBadRequest {
t.Errorf("expected 400 when files field missing, got %d: %s", w.Code, w.Body.String())
}
if !strings.Contains(w.Body.String(), "files") {
t.Errorf("expected error to mention files field: %s", w.Body.String())
}
}
func TestChatDownload_InvalidPath(t *testing.T) {
setupTestDB(t)
setupTestRedis(t)
tmplh := NewTemplatesHandler(t.TempDir(), nil)
h := NewChatFilesHandler(tmplh)
cases := []struct {
name, path, wantSubstr string
}{
{"empty", "", "path query required"},
{"relative", "workspace/foo.txt", "must be absolute"},
{"wrong root", "/etc/passwd", "must be under"},
{"traversal", "/workspace/../etc/passwd", "invalid path"},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "id", Value: "00000000-0000-0000-0000-000000000001"}}
req := httptest.NewRequest("GET", "/workspaces/xxx/chat/download?path="+tc.path, nil)
c.Request = req
h.Download(c)
if w.Code != http.StatusBadRequest {
t.Errorf("expected 400 for %s, got %d: %s", tc.name, w.Code, w.Body.String())
}
if !strings.Contains(w.Body.String(), tc.wantSubstr) {
t.Errorf("expected error to contain %q, got: %s", tc.wantSubstr, w.Body.String())
}
})
}
}
func TestContentDispositionAttachment_Escapes(t *testing.T) {
cases := []struct {
name, input, wantSubstr string
}{
{
name: "plain ASCII passes through",
input: "report.pdf",
wantSubstr: `filename="report.pdf"`,
},
{
name: "double-quote is backslash-escaped",
input: `weird".pdf`,
wantSubstr: `filename="weird\".pdf"`,
},
{
name: "CR and LF dropped to prevent header injection",
input: "bad\r\nX-Leak: 1\r\n.txt",
wantSubstr: `filename="badX-Leak: 1.txt"`,
},
{
name: "non-ASCII emits filename* percent-encoded",
input: "résumé.pdf",
wantSubstr: "filename*=UTF-8''r%C3%A9sum%C3%A9.pdf",
},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := contentDispositionAttachment(tc.input)
if !strings.Contains(got, tc.wantSubstr) {
t.Errorf("contentDispositionAttachment(%q) = %q, missing substring %q", tc.input, got, tc.wantSubstr)
}
// Must never contain a bare CR or LF — either would end the header.
if strings.ContainsAny(got, "\r\n") {
t.Errorf("header contains CR/LF: %q", got)
}
})
}
}
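The four cases above pin down the whole escaping contract. A minimal Python sketch of that contract (the real helper is Go; the name and shape here are illustrative, not the actual implementation):

```python
from urllib.parse import quote

def content_disposition_attachment(filename: str) -> str:
    """Illustrative RFC 6266-style attachment header builder."""
    # CR/LF must never reach the header value: either would terminate
    # the header and open a response-splitting hole.
    cleaned = filename.replace("\r", "").replace("\n", "")
    # Backslash-escape backslashes and double quotes so the
    # quoted-string form stays balanced.
    quoted = cleaned.replace("\\", "\\\\").replace('"', '\\"')
    header = f'attachment; filename="{quoted}"'
    # Non-ASCII names additionally emit the RFC 5987 filename* form,
    # percent-encoded UTF-8, for clients that understand it.
    if any(ord(ch) > 127 for ch in cleaned):
        header += "; filename*=UTF-8''" + quote(cleaned, safe="")
    return header
```

Each test case above maps to exactly one branch: pass-through, backslash escaping, CR/LF dropping, and the `filename*` fallback.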
func TestChatDownload_DockerUnavailable(t *testing.T) {
setupTestDB(t)
setupTestRedis(t)
tmplh := NewTemplatesHandler(t.TempDir(), nil) // docker=nil
h := NewChatFilesHandler(tmplh)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "id", Value: "00000000-0000-0000-0000-000000000001"}}
req := httptest.NewRequest("GET", "/workspaces/xxx/chat/download?path=/workspace/report.pdf", nil)
c.Request = req
h.Download(c)
if w.Code != http.StatusServiceUnavailable {
t.Errorf("expected 503 when docker is nil, got %d: %s", w.Code, w.Body.String())
}
}

View File

@@ -414,7 +414,8 @@ func (h *OrgHandler) Import(c *gin.Context) {
// using subtree-aware grid slots (children that are themselves
// parents get a bigger slot so they don't overflow into siblings).
for _, ws := range tmpl.Workspaces {
if err := h.createWorkspaceTree(ws, nil, ws.Canvas.X, ws.Canvas.Y, tmpl.Defaults, orgBaseDir, &results, provisionSem); err != nil {
// Root: relX/relY == absX/absY (no parent to be relative to).
if err := h.createWorkspaceTree(ws, nil, ws.Canvas.X, ws.Canvas.Y, ws.Canvas.X, ws.Canvas.Y, tmpl.Defaults, orgBaseDir, &results, provisionSem); err != nil {
createErr = err
break
}

View File

@@ -28,7 +28,13 @@ import (
// parent.abs + childSlotInGrid(index, siblingSizes) computed by the
// caller. Storing already-absolute coords means a child that is itself
// a parent can simply compound the grid without any per-call math.
func (h *OrgHandler) createWorkspaceTree(ws OrgWorkspace, parentID *string, absX, absY float64, defaults OrgDefaults, orgBaseDir string, results *[]map[string]interface{}, provisionSem chan struct{}) error {
// relX / relY are THIS workspace's position RELATIVE to its parent's
// absolute origin (i.e. childSlotInGrid output for children; 0,0 for
// roots since a root's absolute IS its relative). The broadcast
// payload ships relative coords so the canvas can drop the node
// straight into the parent's child-coordinate space without doing a
// canvas-wide absolute-position walk.
func (h *OrgHandler) createWorkspaceTree(ws OrgWorkspace, parentID *string, absX, absY, relX, relY float64, defaults OrgDefaults, orgBaseDir string, results *[]map[string]interface{}, provisionSem chan struct{}) error {
// Apply defaults
runtime := ws.Runtime
if runtime == "" {
@@ -128,10 +134,23 @@ func (h *OrgHandler) createWorkspaceTree(ws OrgWorkspace, parentID *string, absX
}
// Broadcast — include runtime so the canvas pill renders the right
// badge immediately instead of "unknown".
h.broadcaster.RecordAndBroadcast(ctx, "WORKSPACE_PROVISIONING", id, map[string]interface{}{
// badge immediately instead of "unknown". parent_id + x/y let the
// canvas's org-deploy animation spawn the child from the parent's
// current coords and tween into its reserved slot, instead of
// landing in a default grid position first and snapping on the
// next hydrate.
payload := map[string]interface{}{
"name": ws.Name, "tier": tier, "runtime": runtime,
})
// Parent-relative coords — the canvas's React Flow node uses
// these as the node's position when parent_id is set (React
// Flow treats node.position as parent-relative when the node
// has a parentId). For roots, relX/relY == absX/absY.
"x": relX, "y": relY,
}
if parentID != nil {
payload["parent_id"] = *parentID
}
h.broadcaster.RecordAndBroadcast(ctx, "WORKSPACE_PROVISIONING", id, payload)
// Seed initial memories from workspace config or defaults (issue #1050).
// Per-workspace initial_memories override defaults; if workspace has none,
@@ -509,7 +528,9 @@ func (h *OrgHandler) createWorkspaceTree(ws OrgWorkspace, parentID *string, absX
slotX, slotY := childSlotInGrid(i, siblingSizes)
childAbsX := absX + slotX
childAbsY := absY + slotY
if err := h.createWorkspaceTree(child, &id, childAbsX, childAbsY, defaults, orgBaseDir, results, provisionSem); err != nil {
// slotX/slotY are already parent-relative — that's
// exactly what childSlotInGrid returns.
if err := h.createWorkspaceTree(child, &id, childAbsX, childAbsY, slotX, slotY, defaults, orgBaseDir, results, provisionSem); err != nil {
return err
}
time.Sleep(workspaceCreatePacingMs * time.Millisecond)
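The two coordinate systems in play (absolute for the recursion, parent-relative for the broadcast) can be sketched as follows. `child_slot_in_grid` is a stand-in for the real subtree-aware slot math, and the node shapes are illustrative:

```python
def child_slot_in_grid(index: int) -> tuple[float, float]:
    # Stand-in for the real slot math: a fixed 2-column grid.
    return (260.0 * (index % 2), 180.0 * (index // 2))

def create_tree(node, parent_abs=None, rel=(0.0, 0.0), out=None):
    out = out if out is not None else []
    if parent_abs is None:
        # Root: relative == absolute (no parent to be relative to).
        abs_xy = node["pos"]
        rel = abs_xy
    else:
        # Child: slot output is already parent-relative, so absolute
        # position is just the parent's absolute plus the slot.
        abs_xy = (parent_abs[0] + rel[0], parent_abs[1] + rel[1])
    # "abs" drives the recursion; "rel" is what the broadcast ships so
    # the canvas can parent the node without an absolute-position walk.
    out.append({"name": node["name"], "abs": abs_xy, "rel": rel})
    for i, child in enumerate(node.get("children", [])):
        create_tree(child, abs_xy, child_slot_in_grid(i), out)
    return out
```

A grandchild compounds the grid for free: its absolute is its parent's (already-compounded) absolute plus its own slot, while its broadcast payload still carries only the slot.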

View File

@@ -466,3 +466,70 @@ func (h *SecretsHandler) GetModel(c *gin.Context) {
c.JSON(http.StatusOK, gin.H{"model": string(decrypted), "source": "workspace_secrets"})
}
// SetModel handles PUT /workspaces/:id/model — writes the model slug
// into workspace_secrets as MODEL_PROVIDER (the key GetModel reads).
// For hermes, the value is a hermes-native slug like "minimax/MiniMax-M2.7";
// for langgraph it's the legacy "provider:model" form. Either way it's just
// an opaque string the runtime interprets on its next start.
//
// Empty string clears the override. Triggers auto-restart so the new
// env (HERMES_DEFAULT_MODEL etc.) takes effect immediately — without
// this the user clicks Save+Restart, the canvas PUT lands, but the
// already-restarting container misses the window and boots with the
// old value.
func (h *SecretsHandler) SetModel(c *gin.Context) {
workspaceID := c.Param("id")
if !uuidRegex.MatchString(workspaceID) {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid workspace ID"})
return
}
ctx := c.Request.Context()
var body struct {
Model string `json:"model"`
}
if err := c.ShouldBindJSON(&body); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
return
}
if body.Model == "" {
if _, err := db.DB.ExecContext(ctx,
`DELETE FROM workspace_secrets WHERE workspace_id = $1 AND key = 'MODEL_PROVIDER'`,
workspaceID); err != nil {
log.Printf("SetModel delete error: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to clear model"})
return
}
if h.restartFunc != nil {
go h.restartFunc(workspaceID)
}
c.JSON(http.StatusOK, gin.H{"status": "cleared"})
return
}
encrypted, err := crypto.Encrypt([]byte(body.Model))
if err != nil {
log.Printf("SetModel encrypt error: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to encrypt model"})
return
}
version := crypto.CurrentEncryptionVersion()
_, err = db.DB.ExecContext(ctx, `
INSERT INTO workspace_secrets (workspace_id, key, encrypted_value, encryption_version)
VALUES ($1, 'MODEL_PROVIDER', $2, $3)
ON CONFLICT (workspace_id, key) DO UPDATE
SET encrypted_value = $2, encryption_version = $3, updated_at = now()
`, workspaceID, encrypted, version)
if err != nil {
log.Printf("SetModel upsert error: %v", err)
c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to save model"})
return
}
if h.restartFunc != nil {
go h.restartFunc(workspaceID)
}
c.JSON(http.StatusOK, gin.H{"status": "saved", "model": body.Model})
}
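SQLite shares the `INSERT … ON CONFLICT … DO UPDATE` syntax used above, so the overwrite-in-place semantics of the upsert can be checked locally (table shape simplified here; the real table also carries encryption_version and updated_at):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE workspace_secrets ("
    " workspace_id TEXT, key TEXT, encrypted_value TEXT,"
    " PRIMARY KEY (workspace_id, key))"
)
for value in ("provider-a/model-1", "provider-b/model-2"):
    # Same (workspace_id, key) target both times: the second INSERT
    # takes the ON CONFLICT branch and overwrites encrypted_value,
    # leaving exactly one row.
    con.execute(
        "INSERT INTO workspace_secrets VALUES (?, 'MODEL_PROVIDER', ?)"
        " ON CONFLICT (workspace_id, key)"
        " DO UPDATE SET encrypted_value = excluded.encrypted_value",
        ("ws-1", value),
    )
rows = con.execute("SELECT encrypted_value FROM workspace_secrets").fetchall()
```

The composite primary key is what makes the conflict target valid; without a unique constraint on (workspace_id, key) the statement is rejected.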

View File

@@ -6,6 +6,7 @@ import (
"encoding/json"
"net/http"
"net/http/httptest"
"strings"
"testing"
"time"
@@ -535,6 +536,88 @@ func TestSecretsGetModel_DBError(t *testing.T) {
}
}
// ==================== SetModel ====================
func TestSecretsSetModel_Upsert(t *testing.T) {
mock := setupTestDB(t)
setupTestRedis(t)
restartCalled := make(chan string, 1)
handler := NewSecretsHandler(func(id string) { restartCalled <- id })
mock.ExpectExec(`INSERT INTO workspace_secrets`).
WithArgs("00000000-0000-0000-0000-000000000001", sqlmock.AnyArg(), sqlmock.AnyArg()).
WillReturnResult(sqlmock.NewResult(1, 1))
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "id", Value: "00000000-0000-0000-0000-000000000001"}}
c.Request = httptest.NewRequest("PUT", "/workspaces/00000000-0000-0000-0000-000000000001/model",
strings.NewReader(`{"model":"minimax/MiniMax-M2.7"}`))
c.Request.Header.Set("Content-Type", "application/json")
handler.SetModel(c)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
select {
case id := <-restartCalled:
if id != "00000000-0000-0000-0000-000000000001" {
t.Errorf("restart called with wrong id: %s", id)
}
case <-time.After(500 * time.Millisecond):
t.Error("restart was not triggered")
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}
func TestSecretsSetModel_EmptyClears(t *testing.T) {
mock := setupTestDB(t)
setupTestRedis(t)
handler := NewSecretsHandler(func(string) {})
mock.ExpectExec(`DELETE FROM workspace_secrets`).
WithArgs("00000000-0000-0000-0000-000000000002").
WillReturnResult(sqlmock.NewResult(0, 1))
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "id", Value: "00000000-0000-0000-0000-000000000002"}}
c.Request = httptest.NewRequest("PUT", "/workspaces/00000000-0000-0000-0000-000000000002/model",
strings.NewReader(`{"model":""}`))
c.Request.Header.Set("Content-Type", "application/json")
handler.SetModel(c)
if w.Code != http.StatusOK {
t.Fatalf("expected 200, got %d: %s", w.Code, w.Body.String())
}
if err := mock.ExpectationsWereMet(); err != nil {
t.Errorf("unmet sqlmock expectations: %v", err)
}
}
func TestSecretsSetModel_InvalidID(t *testing.T) {
setupTestDB(t)
setupTestRedis(t)
handler := NewSecretsHandler(nil)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
c.Params = gin.Params{{Key: "id", Value: "not-a-uuid"}}
c.Request = httptest.NewRequest("PUT", "/workspaces/not-a-uuid/model",
strings.NewReader(`{"model":"x"}`))
c.Request.Header.Set("Content-Type", "application/json")
handler.SetModel(c)
if w.Code != http.StatusBadRequest {
t.Errorf("expected 400 for bad UUID, got %d", w.Code)
}
}
// ==================== Values — Phase 30.2 decrypted pull ====================
// These tests target the secrets.Values handler (GET /workspaces/:id/secrets/values)

View File

@@ -168,20 +168,33 @@ func (h *WorkspaceHandler) provisionWorkspaceOpts(workspaceID, templatePath stri
// Try to recover by applying the runtime-default template. payload.Runtime
// is populated by the caller (Restart handler / Create handler) from the
// DB row — same source of truth the apply_template=true path uses.
// Try `<runtime>-default` first (historical naming), then plain
// `<runtime>` (current naming in workspace-configs-templates/).
// Only claude-code has the `-default` suffix; every other
// runtime directory uses the bare name. Without the bare-name
fallback, recovery only worked for claude-code, and blank
// workspaces on every other runtime bricked on first start.
recovered := false
if payload.Runtime != "" {
runtimeTemplate := filepath.Join(h.configsDir, payload.Runtime+"-default")
if _, statErr := os.Stat(runtimeTemplate); statErr == nil {
log.Printf("Provisioner: auto-recover for %s — config volume empty, applying %s-default template (#1858)",
workspaceID, payload.Runtime)
templatePath = runtimeTemplate
// Rebuild cfg with the recovered template path so Start() sees it.
cfg = h.buildProvisionerConfig(workspaceID, templatePath, configFiles, payload, envVars, pluginsPath, awarenessNamespace)
cfg.ResetClaudeSession = resetClaudeSession
recovered = true
} else {
log.Printf("Provisioner: auto-recover for %s — runtime template %s not found: %v",
workspaceID, runtimeTemplate, statErr)
candidates := []string{
filepath.Join(h.configsDir, payload.Runtime+"-default"),
filepath.Join(h.configsDir, payload.Runtime),
}
for _, runtimeTemplate := range candidates {
if _, statErr := os.Stat(runtimeTemplate); statErr == nil {
log.Printf("Provisioner: auto-recover for %s — config volume empty, applying %s template (#1858)",
workspaceID, filepath.Base(runtimeTemplate))
templatePath = runtimeTemplate
// Rebuild cfg with the recovered template path so Start() sees it.
cfg = h.buildProvisionerConfig(workspaceID, templatePath, configFiles, payload, envVars, pluginsPath, awarenessNamespace)
cfg.ResetClaudeSession = resetClaudeSession
recovered = true
break
}
}
if !recovered {
log.Printf("Provisioner: auto-recover for %s — no template found under %s for runtime=%s",
workspaceID, h.configsDir, payload.Runtime)
}
}
@@ -608,6 +621,17 @@ func (h *WorkspaceHandler) ensureDefaultConfig(workspaceID string, payload model
// payload.Model at boot), this is a no-op — no harm in the switch
// being empty for those cases.
func applyRuntimeModelEnv(envVars map[string]string, runtime, model string) {
// Fall back to the MODEL_PROVIDER workspace secret when the caller
// didn't pass one explicitly. This is the path that "Save+Restart"
// hits — Restart builds its payload from the workspaces row (no model
// column there) so payload.Model is always empty, but the user's
// canvas selection was stored as MODEL_PROVIDER via PUT /model and
// is already loaded into envVars here. Without this fallback hermes
// silently boots with the template default and errors "No LLM
// provider configured" even though the user picked a valid model.
if model == "" {
model = envVars["MODEL_PROVIDER"]
}
if model == "" {
return
}

View File

@@ -308,6 +308,7 @@ func Setup(hub *ws.Hub, broadcaster *events.Broadcaster, prov *provisioner.Provi
wsAuth.PUT("/secrets", sech.Set)
wsAuth.DELETE("/secrets/:key", sech.Delete)
wsAuth.GET("/model", sech.GetModel)
wsAuth.PUT("/model", sech.SetModel)
// Token usage metrics — cost transparency (#593).
// WorkspaceAuth middleware (on wsAuth) binds the bearer to :id.
@@ -461,6 +462,14 @@ func Setup(hub *ws.Hub, broadcaster *events.Broadcaster, prov *provisioner.Provi
wsAuth.PUT("/files/*path", tmplh.WriteFile)
wsAuth.DELETE("/files/*path", tmplh.DeleteFile)
// Chat attachments — file upload (user → agent) and binary-safe
// streaming download (agent → user). Namespaced under /chat/ so
// the security model is obviously distinct from /files/* (which
// handles workspace config/templates and has a different caller).
chatfh := handlers.NewChatFilesHandler(tmplh)
wsAuth.POST("/chat/uploads", chatfh.Upload)
wsAuth.GET("/chat/download", chatfh.Download)
// Plugins
pluginsDir := findPluginsDir(configsDir)
// Runtime lookup lets the plugins handler filter the registry to plugins

View File

@@ -48,6 +48,10 @@ from shared_runtime import (
brief_task,
set_current_task,
)
from executor_helpers import (
collect_outbound_files,
extract_attached_files,
)
from builtin_tools.telemetry import (
A2A_TASK_ID,
GEN_AI_OPERATION_NAME,
@@ -211,6 +215,18 @@ class LangGraphA2AExecutor(AgentExecutor):
3. Message(final_text) terminal event
"""
user_input = extract_message_text(context)
# Pull attached files from A2A message parts (kind: "file") and
# append a manifest to the prompt so the agent knows they exist.
# LangGraph tools (filesystem, bash, skills) can then open the
# files by path — without this the agent silently ignores the
# attachments and replies "I'm not sure what you're referring to".
_attached_files = extract_attached_files(getattr(context, "message", None))
if _attached_files:
_manifest = "\n\nAttached files:\n" + "\n".join(
f"- {f['name']} ({f['mime_type'] or 'unknown type'}) at {f['path']}"
for f in _attached_files
)
user_input = (user_input + _manifest) if user_input else _manifest.lstrip()
if not user_input:
parts = getattr(getattr(context, "message", None), "parts", None)
logger.warning("A2A execute: no text content in message parts: %s", parts)
@@ -411,7 +427,31 @@ class LangGraphA2AExecutor(AgentExecutor):
# Non-streaming: ResultAggregator.consume_all() returns this
# immediately as the response (a2a_client.py reads .parts[0].text).
# Streaming: yielded as the last SSE event in the stream.
msg = new_agent_text_message(final_text, task_id=task_id, context_id=context_id)
#
# If the reply mentions /workspace/... paths, stage each one
# and emit as FileParts alongside the text so the canvas can
# render a download button. Same contract the hermes executor
# uses — every runtime going through this code path (langgraph,
# deepagents, future ReAct variants) inherits it.
_outbound = collect_outbound_files(final_text)
if _outbound:
from a2a.types import FilePart, FileWithUri, Message, Part, Role, TextPart
_parts: list[Part] = [Part(root=TextPart(text=final_text))] if final_text else []
for f in _outbound:
_parts.append(Part(root=FilePart(file=FileWithUri(
uri="workspace:" + f["path"],
name=f["name"],
mimeType=f["mime_type"],
))))
msg = Message(
messageId=uuid.uuid4().hex,
role=Role.agent,
parts=_parts,
taskId=task_id,
contextId=context_id,
)
else:
msg = new_agent_text_message(final_text, task_id=task_id, context_id=context_id)
# Attach tool_trace via metadata when supported. Guarded with
# hasattr because some test mocks return a plain string here.
if tool_trace and hasattr(msg, "metadata"):

View File

@@ -47,7 +47,9 @@ from executor_helpers import (
WORKSPACE_MOUNT,
auto_push_hook,
brief_summary,
collect_outbound_files,
commit_memory,
extract_attached_files,
extract_message_text,
get_a2a_instructions,
get_hma_instructions,
@@ -365,6 +367,18 @@ class ClaudeSDKExecutor(AgentExecutor):
workspace queue rather than racing on `_session_id` / `_active_stream`.
"""
user_input = extract_message_text(context.message)
# Surface attached files to claude-code via a manifest in the prompt.
# Claude Code reads files through its own Read/Glob tools by path —
# as long as the prompt names the path, the CLI will open them on
# demand. Same contract every platform runtime uses so the UX is
# identical across hermes / langgraph / claude-code.
attached = extract_attached_files(context.message)
if attached:
manifest = "\n\nAttached files:\n" + "\n".join(
f"- {f['name']} ({f['mime_type'] or 'unknown type'}) at {f['path']}"
for f in attached
)
user_input = (user_input + manifest) if user_input else manifest.lstrip()
if not user_input:
await event_queue.enqueue_event(new_agent_text_message(_NO_TEXT_MSG))
return
@@ -375,7 +389,26 @@ class ClaudeSDKExecutor(AgentExecutor):
# Enqueue outside the lock so the next queued turn can start
# preparing its prompt while this turn's response ships. Event
# ordering is preserved per-queue by the A2A server, so no races.
await event_queue.enqueue_event(new_agent_text_message(response_text))
# If the response mentions /workspace/... files, stage each and
# emit FileParts alongside the text so the canvas can download.
outbound = collect_outbound_files(response_text)
if outbound:
from a2a.types import FilePart, FileWithUri, Message, Part, Role, TextPart
import uuid as _uuid
parts: list = [Part(root=TextPart(text=response_text))] if response_text else []
for f in outbound:
parts.append(Part(root=FilePart(file=FileWithUri(
uri="workspace:" + f["path"],
name=f["name"],
mimeType=f["mime_type"],
))))
await event_queue.enqueue_event(Message(
messageId=_uuid.uuid4().hex,
role=Role.agent,
parts=parts,
))
else:
await event_queue.enqueue_event(new_agent_text_message(response_text))
@staticmethod
def _is_retryable(exc: BaseException) -> bool:

View File

@@ -10,16 +10,22 @@ Provides:
- Brief task summary extraction (markdown-aware)
- Error message sanitization (exception classes and subprocess categories)
- Shared workspace path constants and the MCP server path resolver
- Attached-file extraction and outbound-file staging (platform-wide chat
attachments: every runtime routes through these helpers so the
drag-dropped image / returned report experience is identical)
"""
from __future__ import annotations
import asyncio
import base64
import json
import logging
import mimetypes
import os
import re
import subprocess
import uuid as _uuid
from pathlib import Path
from typing import TYPE_CHECKING, Any
@@ -582,3 +588,276 @@ async def auto_push_hook(cwd: str | None = None) -> None:
await asyncio.to_thread(_auto_push_and_pr_sync, cwd)
except Exception:
logger.exception("auto_push_hook: failed (non-fatal)")
# ========================================================================
# Chat attachments — platform-level support for drag-drop uploads and
# agent-returned files. Every runtime executor routes inbound file parts
# through ``extract_attached_files`` + ``build_user_content_with_files``
# and post-processes replies through ``collect_outbound_files`` so a file
# attached in the canvas shows up correctly across hermes, claude-code,
# langgraph, CLI runtimes, etc. Living here (not in any one executor)
keeps the attachment contract in one place: change it in lockstep with
canvas/ChatTab.tsx and workspace-server/internal/handlers/chat_files.go,
and every runtime benefits at once.
# ========================================================================
# Matches CHAT_UPLOAD_DIR in workspace-server/internal/handlers/chat_files.go.
# The canvas uploads files here; outbound files get staged here so the
# download endpoint (which whitelists this directory) can serve them.
CHAT_UPLOADS_DIR = f"{WORKSPACE_MOUNT}/.molecule/chat-uploads"
def ensure_workspace_writable() -> None:
"""Make /workspace (and the chat-uploads dir) writable by whoever the
agent will run as.
Docker's default for a new named volume is root-owned 755 — that
bricks the agentuser "write a file, hand it to the user" flow for
every template whose agent runs under a non-root user (hermes uses
`agent`, most others use some dedicated UID too). Each Dockerfile
solving this individually was the anti-pattern; this helper belongs
to the platform so every runtime picks up the fix by calling into
``molecule_runtime`` during boot.
Runs best-effort: if molecule-runtime itself started as non-root
(rare, but possible in some CP configurations), the chmod silently
no-ops; the template's own start.sh is expected to have already
handled perms in that case. We prefer silent degradation to a hard
boot failure because misconfigured perms are recoverable (user gets
a clear "permission denied" from the agent) but an uncatchable
exception here would wedge the whole workspace in `provisioning`.
"""
# 777 matches the intent: one container, one tenant, anyone in the
# container can read/write workspace files. Cross-tenant isolation
# happens at the Docker boundary, not inside the volume.
for path in (WORKSPACE_MOUNT, CHAT_UPLOADS_DIR):
try:
os.makedirs(path, exist_ok=True)
os.chmod(path, 0o777)
except PermissionError:
logger.info(
"ensure_workspace_writable: lacking root (non-fatal) for %s", path
)
except OSError as exc:
logger.warning(
"ensure_workspace_writable: %s for %s", exc, path
)
# Cap image inlining so a 25MB PNG doesn't blow past provider context
# limits. Images larger than this fall back to a path mention only —
# the agent can still read them via file_read / bash tools.
MAX_INLINE_ATTACHMENT_BYTES = 8 * 1024 * 1024
# Absolute /workspace/... paths the agent may mention in its reply.
# Leading boundary prevents matching the middle of URLs like
# https://example.com/workspace/foo while allowing markdown emphasis
# wrappers (**, *, _, `, (, [) so "**/workspace/x.pdf**" still matches.
# Trailing '.' is stripped post-capture (see collect_outbound_files).
_WORKSPACE_PATH_RE = re.compile(
r"(?:^|[\s`\"'*_(\[])(/workspace/[A-Za-z0-9_./\-]+)"
)
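What the boundary class buys can be seen directly with the same pattern (sample reply text is illustrative):

```python
import re

_WORKSPACE_PATH_RE = re.compile(
    r"(?:^|[\s`\"'*_(\[])(/workspace/[A-Za-z0-9_./\-]+)"
)

reply = (
    "wrote /workspace/out/report.txt. "
    "also **/workspace/img/chart.png** and "
    "https://example.com/workspace/not-a-file"
)
# The '.' ending the first sentence survives the greedy capture and is
# stripped afterwards, as collect_outbound_files does; the URL never
# matches because 'm' precedes its /workspace/ and '^' is start-only.
hits = [m.group(1).rstrip(".") for m in _WORKSPACE_PATH_RE.finditer(reply)]
```

Markdown emphasis (`**…**`) matches because `*` is in the boundary class, while the URL is rejected without any lookbehind machinery.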
_UNSAFE_NAME_RE = re.compile(r"[^A-Za-z0-9._\-]")
def resolve_attachment_uri(uri: str) -> str | None:
"""Resolve a canvas-issued attachment URI to an in-container path.
Accepted shapes (matches canvas uploads.ts + chat_files.go):
- ``workspace:/workspace/.molecule/chat-uploads/<name>`` (canonical)
- ``file:///workspace/...`` (legacy)
- ``/workspace/...`` (bare)
Anything resolving outside ``/workspace`` is refused. ``Path.resolve``
collapses ``..`` segments so a crafted ``workspace:/workspace/../etc/passwd``
returns None instead of leaking the real filesystem.
"""
if not uri:
return None
path: str | None = None
if uri.startswith("workspace:"):
path = uri[len("workspace:"):]
elif uri.startswith("file://"):
path = uri[len("file://"):]
elif uri.startswith("/"):
path = uri
if not path:
return None
try:
resolved = str(Path(path).resolve())
except (OSError, RuntimeError):
return None
if not (resolved == WORKSPACE_MOUNT or resolved.startswith(WORKSPACE_MOUNT + "/")):
return None
return resolved
def extract_attached_files(message: Any) -> list[dict[str, str]]:
"""Pull ``{name, mime_type, path}`` dicts out of an A2A message.
Handles the discriminated-union shape ``part.root.file`` that a2a-sdk
produces via Pydantic RootModel, and the flatter ``part.file`` shape
hand-built callers sometimes emit. Non-file parts and files with
unresolvable URIs are skipped; the caller gets only valid entries rather
than a mix of valid and broken entries.
"""
if message is None:
return []
parts = getattr(message, "parts", None) or []
out: list[dict[str, str]] = []
for part in parts:
root = getattr(part, "root", part)
if getattr(root, "kind", None) != "file":
continue
f = getattr(root, "file", None)
if f is None:
continue
uri = getattr(f, "uri", "") or ""
name = getattr(f, "name", "") or ""
mime = getattr(f, "mimeType", None) or getattr(f, "mime_type", None) or ""
path = resolve_attachment_uri(uri)
if not path or not os.path.isfile(path):
logger.warning("skipping attached file with unresolvable uri=%r", uri)
continue
out.append({"name": name, "mime_type": mime, "path": path})
return out
def _read_as_data_url(path: str, mime_type: str) -> str | None:
"""Return ``data:<mime>;base64,<...>`` or None if too large / unreadable."""
try:
size = os.path.getsize(path)
except OSError:
return None
if size > MAX_INLINE_ATTACHMENT_BYTES:
logger.info(
"attachment %s too large to inline (%d bytes > cap)", path, size
)
return None
try:
with open(path, "rb") as fh:
b64 = base64.b64encode(fh.read()).decode("ascii")
except OSError as exc:
logger.warning("failed to read attachment %s: %s", path, exc)
return None
return f"data:{mime_type or 'application/octet-stream'};base64,{b64}"
def build_user_content_with_files(
user_text: str, attached: list[dict[str, str]]
) -> Any:
"""Combine text + attachments into an OpenAI-compat ``content`` field.
- No attachments → plain string (preserves simple shape for non-vision
models).
- Any image attachment → list-of-parts with text + image_url entries
(multi-modal; vision-capable models see the image bytes). Skipped
when ``MOLECULE_DISABLE_IMAGE_INLINING`` is truthy: some provider/
model combos (e.g. MiniMax's hermes-agent adapter as of 2026-04)
claim vision support but hang indefinitely on image payloads, and
the caller may prefer manifest-only so the agent can still use its
file_read tool instead of stalling the whole request.
- Non-image attachments → manifest appended to the text so the agent
knows the filenames + absolute paths and can inspect via its
file_read / bash tools.
This is the platform's one-line fix for "agent didn't know I attached
a file": any executor that calls it gets attachment awareness for
free, regardless of which LLM provider is behind it.
"""
if not attached:
return user_text
manifest_lines = [
f"- {f['name']} ({f['mime_type'] or 'unknown type'}) at {f['path']}"
for f in attached
]
manifest = "Attached files:\n" + "\n".join(manifest_lines)
combined = f"{user_text}\n\n{manifest}" if user_text else manifest
disable_inline = os.environ.get("MOLECULE_DISABLE_IMAGE_INLINING", "").lower() in (
"1", "true", "yes", "on",
)
if disable_inline or not any(
(f["mime_type"] or "").startswith("image/") for f in attached
):
return combined
content: list[dict[str, Any]] = [{"type": "text", "text": combined}]
for f in attached:
mt = f["mime_type"] or ""
if not mt.startswith("image/"):
continue
data_url = _read_as_data_url(f["path"], mt)
if data_url is not None:
content.append({"type": "image_url", "image_url": {"url": data_url}})
return content
def _sanitize_attachment_name(name: str) -> str:
cleaned = _UNSAFE_NAME_RE.sub("_", name) or "file"
return cleaned[:100]
def _guess_mime(path: str) -> str:
mt, _ = mimetypes.guess_type(path)
return mt or "application/octet-stream"
def stage_outbound_file(src_path: str) -> dict[str, str] | None:
"""Copy ``src_path`` into ``CHAT_UPLOADS_DIR`` (unless already there)
and return ``{name, mime_type, path}`` so the caller can attach it to
the A2A reply.
Files already in the chat-uploads directory are attached as-is;
anything elsewhere under /workspace gets a uuid-prefixed copy so
basenames can't collide with existing uploads and the original
workspace layout stays untouched. Returns None on I/O failure.
"""
try:
os.makedirs(CHAT_UPLOADS_DIR, exist_ok=True)
except OSError as exc:
logger.warning("cannot ensure chat-uploads dir: %s", exc)
return None
name = os.path.basename(src_path)
mime = _guess_mime(src_path)
if os.path.dirname(src_path) == CHAT_UPLOADS_DIR:
return {"name": name, "mime_type": mime, "path": src_path}
try:
stored = f"{_uuid.uuid4().hex[:16]}-{_sanitize_attachment_name(name)}"
dst = os.path.join(CHAT_UPLOADS_DIR, stored)
with open(src_path, "rb") as fin, open(dst, "wb") as fout:
fout.write(fin.read())
except OSError as exc:
logger.warning("failed to stage %s → chat-uploads: %s", src_path, exc)
return None
return {"name": name, "mime_type": mime, "path": dst}
def collect_outbound_files(reply_text: str) -> list[dict[str, str]]:
"""Detect /workspace/... paths the agent mentioned in its reply and
stage each one so it can be returned to the canvas as a file part.
Each unique, readable file goes through ``stage_outbound_file``; the
download endpoint only serves files from whitelisted directories, so
a reply referencing /workspace/private/secret.pem still can't be
exfiltrated via the chat download link unless we've explicitly
copied it under the chat-uploads dir.
"""
if not reply_text:
return []
seen: set[str] = set()
out: list[dict[str, str]] = []
for match in _WORKSPACE_PATH_RE.finditer(reply_text):
# Trim trailing sentence punctuation that the character class
# greedily swallowed — "wrote /workspace/x.txt." would otherwise
# resolve to "x.txt." which doesn't exist.
raw = match.group(1).rstrip(".")
resolved = resolve_attachment_uri(raw)
if not resolved or resolved in seen or not os.path.isfile(resolved):
continue
seen.add(resolved)
staged = stage_outbound_file(resolved)
if staged is not None:
out.append(staged)
return out

View File

@@ -69,6 +69,15 @@ async def main(): # pragma: no cover
# 0. Initialise OpenTelemetry (no-op if packages not installed)
setup_telemetry(service_name=workspace_id)
# 0a. Fix /workspace perms before any agent code runs. Docker ships
# named volumes as root:root 755 — without this the non-root agent
# user can't write files the user asked it to produce, and the
# "agent → file → user downloads" flow dead-ends at a bash "permission
# denied". Best-effort: no-ops silently if molecule-runtime itself
# isn't root (template's own start.sh should have handled it there).
from executor_helpers import ensure_workspace_writable
ensure_workspace_writable()
# 1. Load config
config = load_config(config_path)
port = config.a2a.port

View File

@@ -654,3 +654,255 @@ def test_classify_subprocess_error_generic_fallback():
assert classify_subprocess_error("generic unknown failure", None) == "subprocess_error"
# exit_code=0 with no keyword match also lands here
assert classify_subprocess_error("mysterious but zero exit", 0) == "subprocess_error"
# ============================================================================
# Chat attachment helpers (drag-drop file + agent-returned file)
# ============================================================================
def test_resolve_attachment_uri_all_schemes(tmp_path, monkeypatch):
"""All three canvas-issued URI shapes resolve to the same container path.
The canvas mints ``workspace:`` but the download endpoint used to accept
``file:///`` and bare ``/workspace/`` for legacy agents; the helper has
to handle all three so agents don't have to normalize before calling us.
"""
from executor_helpers import resolve_attachment_uri, WORKSPACE_MOUNT
# Use a real path that starts with WORKSPACE_MOUNT. resolve() enforces
# the containment check — anything outside /workspace/ must return None.
ws_path = f"{WORKSPACE_MOUNT}/foo.txt"
assert resolve_attachment_uri(f"workspace:{ws_path}") == ws_path
assert resolve_attachment_uri(f"file://{ws_path}") == ws_path
assert resolve_attachment_uri(ws_path) == ws_path
# Out-of-tree is refused even when the raw path shape looks right.
# CWE-22 regression: a crafted "workspace:/workspace/../etc/passwd"
# must NOT return "/etc/passwd" just because resolve() normalizes it.
assert resolve_attachment_uri("/etc/passwd") is None
assert resolve_attachment_uri("workspace:/workspace/../etc/passwd") is None
assert resolve_attachment_uri("") is None
assert resolve_attachment_uri("https://example.com/x") is None
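For context, a sketch of a resolver that satisfies exactly these assertions. The three accepted URI shapes and the `None`-on-escape contract come from the test; using `os.path.realpath` for the containment check is an assumption (the docstring above only says the helper "resolve()s").

```python
import os
from typing import Optional

WORKSPACE_MOUNT = "/workspace"  # assumed constant; the tests monkeypatch it


def resolve_attachment_uri(uri: str) -> Optional[str]:
    """Normalize the three URI shapes (``workspace:<path>``,
    ``file://<path>``, bare ``/workspace/<path>``) into one container
    path, refusing anything that escapes WORKSPACE_MOUNT."""
    if not uri:
        return None
    if uri.startswith("workspace:"):
        path = uri[len("workspace:"):]
    elif uri.startswith("file://"):
        path = uri[len("file://"):]
    elif uri.startswith("/"):
        path = uri
    else:
        return None  # http(s) and other schemes are never attachments
    # realpath() collapses "..", so traversal can't sneak past the prefix check
    resolved = os.path.realpath(path)
    if resolved == WORKSPACE_MOUNT or resolved.startswith(WORKSPACE_MOUNT + "/"):
        return resolved
    return None
```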
def test_extract_attached_files_skips_unresolvable():
"""Files with URIs that don't resolve to an existing file are dropped.
A crafted A2A message can include any uri it wants; we must not hand
non-existent or out-of-tree paths to downstream code as if they were
real attachments.
"""
from types import SimpleNamespace
from executor_helpers import extract_attached_files
msg = SimpleNamespace(parts=[
SimpleNamespace(kind="file", file=SimpleNamespace(
uri="workspace:/etc/passwd", name="x", mimeType="text/plain"
)),
SimpleNamespace(root=SimpleNamespace(kind="file", file=SimpleNamespace(
uri="/workspace/does-not-exist", name="y", mimeType="text/plain"
))),
SimpleNamespace(kind="text", text="ignored"),
])
assert extract_attached_files(msg) == []
def test_extract_attached_files_accepts_both_shapes(tmp_path, monkeypatch):
"""a2a-sdk emits ``part.root.file`` via RootModel; some callers still
build ``part.file`` directly. Both shapes have to yield the same
dict structure so runtimes can pick either without surprise."""
from types import SimpleNamespace
from executor_helpers import extract_attached_files
# Stage two real files under a fake /workspace for the resolver
real_a = tmp_path / "a.txt"
real_b = tmp_path / "b.txt"
real_a.write_text("A")
real_b.write_text("B")
# Point the helper's containment check at tmp_path instead of /workspace
monkeypatch.setattr("executor_helpers.WORKSPACE_MOUNT", str(tmp_path))
msg = SimpleNamespace(parts=[
SimpleNamespace(kind="file", file=SimpleNamespace(
uri=f"workspace:{real_a}", name="a.txt", mimeType="text/plain"
)),
SimpleNamespace(root=SimpleNamespace(kind="file", file=SimpleNamespace(
uri=f"workspace:{real_b}", name="b.txt", mimeType="text/plain"
))),
])
out = extract_attached_files(msg)
assert len(out) == 2
assert {f["name"] for f in out} == {"a.txt", "b.txt"}
def test_build_user_content_with_files_no_attachments_is_string():
"""Zero attachments → plain string so models without multi-modal
support (most non-vision LLMs) see the same payload shape they always
did. Regressing this would break every runtime that assumed
content is a string."""
from executor_helpers import build_user_content_with_files
out = build_user_content_with_files("hello", [])
assert out == "hello"
def test_build_user_content_with_files_non_image_is_string_with_manifest():
"""Non-image attachments append a manifest line so the agent knows the
filename and absolute path. Without this the agent had no signal that
anything was attached see canvas/src/components/tabs/ChatTab.tsx
and the "I'm not sure what you're referring to" user report."""
from executor_helpers import build_user_content_with_files
content = build_user_content_with_files("read this", [
{"name": "app.log", "mime_type": "text/plain", "path": "/workspace/app.log"},
])
assert isinstance(content, str)
assert "app.log" in content and "/workspace/app.log" in content
assert "read this" in content
def test_build_user_content_with_files_image_is_multimodal(tmp_path):
"""Image attachments yield the OpenAI-compat list-of-parts shape so
vision models see the bytes. Data URL check covers the common
regression where an empty/missing file silently drops the image part."""
from executor_helpers import build_user_content_with_files
# Minimal 1x1 PNG
png = tmp_path / "x.png"
png.write_bytes(bytes.fromhex(
"89504e470d0a1a0a0000000d49484452000000010000000108060000001f"
"15c4890000000a49444154789c6300010000000500010d0a2db40000000049454e44ae426082"
))
content = build_user_content_with_files("describe", [
{"name": "x.png", "mime_type": "image/png", "path": str(png)},
])
assert isinstance(content, list)
assert len(content) == 2
assert content[0]["type"] == "text"
assert content[1]["type"] == "image_url"
assert content[1]["image_url"]["url"].startswith("data:image/png;base64,")
def test_build_user_content_with_files_large_image_skipped(tmp_path, monkeypatch):
"""Images over the inline cap don't break the request — the manifest
still carries the path so the agent can read via its file_read tool
without blowing past provider context limits with a 50MB base64 blob."""
from executor_helpers import build_user_content_with_files
monkeypatch.setattr("executor_helpers.MAX_INLINE_ATTACHMENT_BYTES", 10)
big = tmp_path / "big.png"
big.write_bytes(b"x" * 100)
content = build_user_content_with_files("describe", [
{"name": "big.png", "mime_type": "image/png", "path": str(big)},
])
# Image too large → no image_url entry, but the text manifest still mentions it
assert isinstance(content, list)
# Only the text part — the image_url was skipped
assert all(c["type"] == "text" for c in content)
def test_collect_outbound_files_stages_workspace_paths(tmp_path, monkeypatch):
"""Agent reply mentioning a /workspace/… path → each unique existing
file becomes an attachment, staged under chat-uploads. A crafted
reply referencing /etc/passwd must NOT escape."""
from pathlib import Path as _Path
from executor_helpers import collect_outbound_files
# Point the chat-uploads dir and the workspace root at a sandboxed tmp.
# resolve() normalizes macOS /var → /private/var so the helper's
# containment check (which also resolve()s) sees identical prefixes.
ws_root = _Path(str(tmp_path / "workspace"))
ws_root.mkdir()
ws_root = ws_root.resolve()
uploads = ws_root / ".molecule" / "chat-uploads"
uploads.mkdir(parents=True)
monkeypatch.setattr("executor_helpers.WORKSPACE_MOUNT", str(ws_root))
monkeypatch.setattr("executor_helpers.CHAT_UPLOADS_DIR", str(uploads))
# Rebuild the regex against the overridden mount (module caches it)
import re as _re
monkeypatch.setattr(
"executor_helpers._WORKSPACE_PATH_RE",
_re.compile(rf"(?:^|[\s`(\[])({ws_root}/[A-Za-z0-9_./\-]+)"),
)
# A real file inside the fake workspace
report = ws_root / "report.txt"
report.write_text("data")
# A decoy outside the workspace — must be ignored even if mentioned
(tmp_path / "secret.txt").write_text("leaked")
reply = f"Saved to {report} — also see {tmp_path}/secret.txt for extras."
out = collect_outbound_files(reply)
assert len(out) == 1
assert out[0]["name"] == "report.txt"
# Staged copy lives under chat-uploads (the download endpoint's whitelist)
assert out[0]["path"].startswith(str(uploads))
def test_ensure_workspace_writable_chmods_777(tmp_path, monkeypatch):
"""The platform-level hook opens /workspace + chat-uploads to 777 so
agents running as any non-root user can write files the user will
then download. This is the single point of fix for what used to need
a chmod in every template's Dockerfile."""
import stat
from executor_helpers import ensure_workspace_writable
ws = tmp_path / "workspace"
ws.mkdir(mode=0o755)
uploads = ws / ".molecule" / "chat-uploads"
# Don't pre-create uploads — the helper must makedirs it.
monkeypatch.setattr("executor_helpers.WORKSPACE_MOUNT", str(ws))
monkeypatch.setattr("executor_helpers.CHAT_UPLOADS_DIR", str(uploads))
ensure_workspace_writable()
assert uploads.is_dir(), "chat-uploads dir should be created"
assert stat.S_IMODE(ws.stat().st_mode) == 0o777
assert stat.S_IMODE(uploads.stat().st_mode) == 0o777
def test_ensure_workspace_writable_tolerates_non_root(tmp_path, monkeypatch, caplog):
"""When molecule-runtime isn't root (rare CP configurations), the
chmod silently no-ops rather than crashing boot; a misconfigured
perm is recoverable, whereas a SystemExit here would wedge the workspace
in provisioning forever."""
import logging
from executor_helpers import ensure_workspace_writable
ws = tmp_path / "workspace"
ws.mkdir()
monkeypatch.setattr("executor_helpers.WORKSPACE_MOUNT", str(ws))
monkeypatch.setattr("executor_helpers.CHAT_UPLOADS_DIR", str(ws / "x"))
def _boom(*_a, **_kw):
raise PermissionError("Operation not permitted")
monkeypatch.setattr("executor_helpers.os.chmod", _boom)
with caplog.at_level(logging.INFO, logger="executor_helpers"):
ensure_workspace_writable() # must not raise
def test_collect_outbound_files_deduplicates(tmp_path, monkeypatch):
"""Reply mentioning the same path twice should only attach once."""
from pathlib import Path as _Path
from executor_helpers import collect_outbound_files
ws_root = _Path(str(tmp_path / "workspace"))
ws_root.mkdir()
ws_root = ws_root.resolve()
uploads = ws_root / ".molecule" / "chat-uploads"
uploads.mkdir(parents=True)
monkeypatch.setattr("executor_helpers.WORKSPACE_MOUNT", str(ws_root))
monkeypatch.setattr("executor_helpers.CHAT_UPLOADS_DIR", str(uploads))
import re as _re
monkeypatch.setattr(
"executor_helpers._WORKSPACE_PATH_RE",
_re.compile(rf"(?:^|[\s`(\[])({ws_root}/[A-Za-z0-9_./\-]+)"),
)
report = ws_root / "report.txt"
report.write_text("data")
reply = f"Wrote {report}. Again at {report}."
out = collect_outbound_files(reply)
assert len(out) == 1