The v2 dropped codex from TEMPLATES on the basis of "no
publish-image.yml = not part of cascade today." That was correct
about the immediate behavior but tripped cascade-list-drift-gate.yml
because manifest.json still declares codex (it IS a live runtime —
referenced from workspace/config.py and cloned into dev envs by
clone-manifest.sh; only the image-publish path is missing).
Restore codex to TEMPLATES (matching manifest) and add a runtime
soft-skip: probe each repo for .github/workflows/publish-image.yml
via the Gitea contents API and skip cleanly if 404. Final job log
distinguishes "complete across all" vs "complete with soft-skips".
This preserves the drift gate's invariant (TEMPLATES == manifest)
while honoring the empirical fact that codex has no publish-image
workflow yet. If codex later gains the workflow, no change here is
needed — the probe will see 200 and the cascade will fan out to it
naturally.
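The probe-then-decide step can be sketched as below. This is illustrative, not the workflow's actual code: the helper names are made up, and treating any status other than 200/404 as a hard failure is an assumption (the commit only specifies the 200 and 404 paths).

```go
package main

import "fmt"

// probeURL builds the Gitea contents-API path used to check whether a
// template repo carries the publish-image workflow.
func probeURL(giteaURL, repo string) string {
	return fmt.Sprintf("%s/api/v1/repos/%s/contents/.github/workflows/publish-image.yml",
		giteaURL, repo)
}

// classify maps the probe's HTTP status to a cascade decision:
// 200 -> fan out, 404 -> soft-skip, anything else -> hard failure
// (the last case is an assumption, not specified by the commit).
func classify(status int) string {
	switch status {
	case 200:
		return "dispatch"
	case 404:
		return "soft-skip"
	default:
		return "fail"
	}
}
```

With this shape, codex gaining publish-image.yml later flips its probe from 404 to 200 with no workflow change, exactly as described above.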
Refs molecule-core#14, molecule-core#20.
Empirical blocker on v1: Gitea 1.22.6 has no repository_dispatch /
workflow_dispatch trigger API (verified across 6 candidate paths in
issuecomment-913). v1's curl-POST loop would always exit-1.
v2 pivots to push-mode: each template repo got a small companion PR
(merged 2026-05-07) adding a `.runtime-version` file at root + a
`resolve-version` job in publish-image.yml that reads the file and
forwards the value to the reusable build workflow. publish-runtime
now updates that file via git-clone + commit + push, which trips
each template's existing `on: push: branches: [main]` trigger.
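The version-file update at the heart of push-mode can be sketched as a pure function (names hypothetical; the real workflow does this via git clone + commit + push). The key property is that a no-op bump produces no commit, which is what makes re-runs idempotent:

```go
package main

// bumpRuntimeVersion returns the new content for a template's
// .runtime-version file and whether a commit is needed. When the file
// is already pinned to the target version, changed=false — the cascade
// contributes zero commits on re-run.
func bumpRuntimeVersion(current, next string) (content string, changed bool) {
	if current == next || current == next+"\n" {
		return current, false
	}
	return next + "\n", true
}
```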
Behaviour changes vs v1:
- Templates list dropped from 9 → 8 (codex has no publish-image.yml
so was never part of the cascade in practice).
- 3-retry pull-rebase loop per template (handles concurrent-push
races without force-push). Failures collected, job exits 1 with
the failed-template list at the end.
- Idempotency: when re-run with the same version, templates already
pinned to that version contribute zero commits — operator can
safely re-run to retry partial failures.
- Author line: "publish-runtime cascade <publish-runtime@moleculesai.app>"
  trailer makes it clear the commit is workflow-driven, not human
  (per memory feedback_github_botring_fingerprint).
DISPATCH_TOKEN secret name unchanged (still consumed at
secrets.DISPATCH_TOKEN per 569df259).
Refs molecule-core#14, builds on molecule-core#20 issuecomment-923
(Phase 2 design).
The cascade workflow was reading from `secrets.TEMPLATE_DISPATCH_TOKEN`
but the plumbed secret name is `DISPATCH_TOKEN` (verified just now via
GET /repos/molecule-ai/molecule-core/actions/secrets — only DISPATCH_TOKEN
is set). Without this rename the cascade would always evaluate "secret
missing" and exit 1 on the next push to staging, defeating the entire
point of grant-role-access.sh --apply that just landed.
Three references updated:
- env mapping (`secrets.X` → `secrets.DISPATCH_TOKEN`)
- workflow_dispatch warning text
- push-trigger error text
The bash-side variable name is unchanged (still `DISPATCH_TOKEN`) so
the curl invocation at line 372 is unaffected. YAML round-trip parses
clean.
## Symptom
`publish-runtime.yml::cascade` fired a `repository_dispatch` to 10 workspace-template
repos via direct curl to `https://api.github.com/repos/...`. Post-2026-05-06 the
org's GitHub presence is suspended; every invocation 404s. The job's
`:⚠️:` posture meant the failure didn't propagate, leaving the runtime
PyPI publish → template image rebuild pipeline silently broken.
## Why Option A (rewrite) and not Option B (delete)
Verified 2026-05-07 by devops-engineer (molecule-core#14 thread):
- The cron-poll mechanism (/etc/cron.d/molecule-deploy-poll) tracks ONLY the
Vercel/Railway-deployed repos (landingpage/docs/molecule-app/molecules-market
/molecule-controlplane). It does NOT track workspace-template-* repos.
- Each of the 9 template `publish-image.yml` workflows has
`repository_dispatch: types: [runtime-published]` as a load-bearing trigger.
Without the cascade, when the runtime ships a new PyPI version, templates
don't auto-rebuild.
So Option B (delete) would silently break the runtime → template fan-out.
Option A (rewrite to Gitea's API shape) is the right call. Security-auditor
agreed after seeing the cron-poll TRACKED list.
## API surface change
| Concern | Pre-fix (GitHub) | Post-fix (Gitea) |
|---|---|---|
| URL | `https://api.github.com/repos/$REPO/dispatches` | `${GITEA_URL}/api/v1/repos/$REPO/dispatches` |
| Owner case | `Molecule-AI/...` | `molecule-ai/...` (lowercase, Gitea is case-sensitive) |
| Auth header | `Authorization: Bearer $DISPATCH_TOKEN` | `Authorization: token $DISPATCH_TOKEN` |
| Body shape | `{event_type, client_payload}` | UNCHANGED — Gitea is GitHub-compatible here |
| Success code | `204 No Content` | `204 No Content` (unchanged) |
`GITEA_URL` defaults to `https://git.moleculesai.app`; overridable via job env.
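The table's post-fix request shape can be expressed as a small constructor (a sketch — the real workflow uses curl, and the helper name here is invented):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// newDispatchRequest builds the post-fix Gitea dispatch call: Gitea base
// URL, lowercase owner in repo, `token` auth scheme (not Bearer), and the
// GitHub-compatible {event_type, client_payload} body.
func newDispatchRequest(giteaURL, repo, token, version string) (*http.Request, error) {
	url := fmt.Sprintf("%s/api/v1/repos/%s/dispatches", giteaURL, repo)
	body := fmt.Sprintf(`{"event_type":"runtime-published","client_payload":{"version":%q}}`, version)
	req, err := http.NewRequest("POST", url, strings.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "token "+token) // Gitea scheme, not Bearer
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}
```

A 204 No Content response still signals success, unchanged from the GitHub shape.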
## Out-of-band: DISPATCH_TOKEN secret rotation
The DISPATCH_TOKEN secret was a GitHub PAT. It must be re-minted as a Gitea
PAT for the new API to authenticate. Per saved memory
`feedback_per_agent_gitea_identity_default`, this should be a dedicated
`publish-runtime-bot` persona token with `write:repository` scope on the
9 target repos — NOT the founder PAT.
This PR ships the workflow change. Token rotation is the operator-host
follow-up (security-auditor's lane) — coordinate the merge so the token
is in place before the next runtime release fires.
## Backwards compatibility
The workflow ran silently-broken since 2026-05-06 (every invocation 404
+ :⚠️: but no failure). So there is no functional regression from
"silently broken" to "actually working". Any in-progress operator-managed
manual dispatch path is unaffected; the Gitea API parallel path doesn't
require operator intervention.
## Test plan
- [x] YAML parse OK on the modified workflow file
- [ ] Smoke test: trigger a runtime publish (or simulate via dispatching to one
template) post-merge; verify HTTP 204 + the template's publish-image
workflow fires + the template's image gets re-pushed against the new
runtime version. Phase 4 verification belongs to internal#46 follow-up.
## Hostile self-review (3 weakest spots)
1. The fan-out remains all-or-nothing: a single template failure surfaces as
   a `:⚠️:` but the PyPI publish proceeds, so any one failed dispatch leaves
   that template's image stale on the next runtime bump (1 of 9 templates,
   ~11% of the fleet).
Defense: the warning shows up in the workflow summary; operators retry.
Future hardening: requeue-on-fail with bounded retry, or a separate
reconcile cron that detects template/runtime version drift and re-dispatches.
2. `DISPATCH_TOKEN` validity is enforced by the Gitea API (401 on stale)
but the workflow doesn't differentiate 401 from 404. Either way the
warning fires. Future hardening: explicit token-shape check at the start
of the cascade job (curl `/api/v1/user` once, fail-fast if 401).
3. Owner-case lowercase is right today but couples the workflow to the
current Gitea org slug. If the org is ever renamed, this workflow
breaks silently. Less fragile alternative: derive REPO from a
canonical config (e.g. `gh repo list molecule-ai`) instead of
string-concatenating. Acceptable today; filed as the same future
hardening pass as item 1.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Mechanical pin: 4 `actions/upload-artifact` uses (@v4.6.2 / @v7.0.1) → `@v3`.
v4+/v7+ rely on a runtime API shape that Gitea's act_runner v0.6.x doesn't
fully support; v3 uses the legacy server protocol that act_runner ships
end-to-end.
Files (4 uses):
- .github/workflows/ci.yml:238 (v4.6.2 → v3)
- .github/workflows/codeql.yml:124 (v7.0.1 → v3)
- .github/workflows/e2e-staging-canvas.yml:142 (v7.0.1 → v3)
- .github/workflows/e2e-staging-canvas.yml:150 (v7.0.1 → v3)
YAML parse green on all 3 files.
Sister PRs land for `molecule-controlplane` and `codex-channel-molecule`.
Per internal#46 Phase 2 audit; tracked under that umbrella.
Per documentation-specialist's grep agent (2026-05-07T07:30, see
internal#46): runtime-breaking ghcr.io references in shell scripts +
docker-compose + the slip-past-workflow lint_secret_pattern_drift.py
all need migration. These were missed by security-auditor's
workflow-only audit.
Files (6):
- .github/scripts/lint_secret_pattern_drift.py:40 — workspace-runtime
pre-commit-checks.sh consumer URL: raw.githubusercontent.com →
Gitea raw URL (https://git.moleculesai.app/molecule-ai/.../raw/
branch/main/...). The lint job runs in CI and would 404 today.
- scripts/refresh-workspace-images.sh:54 — workspace-template image
pull URL: ghcr.io → ECR (153263036946.dkr.ecr.us-east-2.amazonaws.com).
- scripts/rollback-latest.sh — full rewrite of header + auth flow:
* ghcr.io/molecule-ai/{platform,platform-tenant} → ECR
* GITHUB_TOKEN with write:packages → AWS ECR auth
(aws ecr get-login-password). Per saved memory
reference_post_suspension_pipeline, prod cutover is to ECR.
* Updated header docs to match new auth flow + prereqs.
- scripts/demo-freeze.sh:13,17 — comment-only ghcr → ECR
(the script doesn't currently exec these URLs, but the comments
describe the cascade and need to match reality).
- docker-compose.yml:215-216 — canvas image: ghcr.io → ECR + updated
the auth comment to describe `aws ecr get-login-password` flow.
- tools/check-template-parity.sh:21 — inline curl install instructions:
raw.githubusercontent.com → Gitea raw URL.
Hostile self-review:
1. rollback-latest.sh's GITHUB_TOKEN→aws-cli auth swap is a behavior
change. Operators using this script now need aws CLI
authenticated for region us-east-2 with ECR pull/push perms.
   Documented in updated header. Operators who don't have the aws CLI
   will get 'aws: command not found', which is a clear failure mode
   (not silent).
2. The Gitea raw URL shape (/raw/branch/main/) differs from GitHub's
raw.githubusercontent.com structure. Verified pattern by
inspecting other Gitea raw URLs in the codebase. If Gitea's URL
changes (1.23+), update via the same one-line edit.
3. Doesn't touch packer/scripts/install-base.sh which has a similar
ghcr.io ref per the grep agent's findings — that's bigger-scope
(packer build pipeline) and lives in molecule-controlplane-ish
territory; filing as parked follow-up under #46 if not already.
Refs: molecule-ai/internal#46, molecule-ai/internal#37,
molecule-ai/internal#38, saved memory reference_post_suspension_pipeline
The molecule-ai-workspace-runtime mirror is regenerated on every
runtime-v* tag from this monorepo's workspace/. Per saved memory
reference_runtime_repo_is_mirror_only, mirror-guard rejects direct
PRs to the mirror; edit at source.
Source-side files that propagate to the mirror's published README +
read by users of the in-monorepo workspace-runtime docs:
- scripts/build_runtime_package.py (the README generator):
* line 281 README_TEMPLATE: 'Shared workspace runtime for Molecule
AI' link → Gitea
* line 399 doc-link to workspace-runtime-package.md → Gitea path
(with /src/branch/main/ shape)
LEFT AS-IS (per Q3 audit-trail decision):
* lines 379, 392 historical issue cross-refs (#2936, #2937)
- workspace/build-all.sh:5 — comment block linking to template-*
repos. Migrated to Gitea path-shape.
- docs/workspace-runtime-package.md:
* lines 101-108 adapter→repo table (8 templates, all PUBLIC on
Gitea) — Gitea URLs
* line 247 starter-repo link — substituted host + added inline
note that starter doesn't survive the suspension migration
(recreation pending; cross-link to this issue)
* line 259 generic git clone command for new templates → Gitea
* line 289 second starter mention — same handling as 247
Files NOT touched in this PR:
- workspace/ Python source code (.py files) — those use github
paths in docstrings + a few log strings; fix bundled with the
cross-repo Go-module-style migration (per #37 Q5 + parked
follow-ups).
- 'Writing a new adapter' section's `gh repo create` command (line
254-256) — gh CLI doesn't talk to Gitea (per #45 parked follow-up).
- 'Writing a new adapter' section's ghcr.io image ref (line 276) —
per #46 ghcr→ECR migration (separate concern).
After this PR merges to staging + a runtime-v* tag is pushed, the
mirror's published README will inherit the Gitea link. Until then
the mirror's README continues to reference github.com/Molecule-AI
(stale but historical-marker-correct since the mirror existed
pre-suspension).
Refs: molecule-ai/internal#41, molecule-ai/internal#37,
molecule-ai/internal#38, molecule-ai/internal#42,
molecule-ai/internal#45, molecule-ai/internal#46
## Symptom
Canvas detail-panel "config + filesystem load" took ~20s. Reported on
production hongming tenant, workspace c7c28c0b-... (Claude Code Agent T2).
## Two stacked latency sources
### 1. Server-side: per-call EIC tunnel setup (~80% of the win)
`workspace-server/internal/handlers/template_files_eic.go::realWithEICTunnel`
performed ssh-keygen + SendSSHPublicKey + open-tunnel + waitForPort PER call.
4 callers (read/write/list/delete) each paid the full ~3-5s setup cost even
when fired back-to-back on the same workspace EC2.
Fix: refcounted pool keyed on instanceID with TTL ≤ 50s (under the 60s
SendSSHPublicKey grant). One tunnel serves N file ops; concurrent acquires
for the same instance share the slot via a pendingSetups gate; LRU eviction
caps simultaneous tracked instances at 32. Poisons entries on tunnel-fatal
errors (connection refused, broken pipe, auth failed) so the next acquire
builds fresh. Cleanup on panic via defer-release pattern (added after
self-review caught a refcount-leak hazard).
Public API unchanged — `var withEICTunnel` rebinds to `pooledWithEICTunnel`
at package init, so all 4 callers inherit pooling for free.
10 unit tests pin: 4-ops-amortise (1 setup), different-instances-do-not-share,
TTL eviction, poison invalidates, concurrent-acquire-single-setup,
TTL=0 escape hatch, LRU eviction at cap, error classification heuristic,
refcount blocks expired eviction, panic poisons entry. All green.
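A minimal sketch of the pool's core mechanics — refcount, TTL expiry, and poison-forces-rebuild. Type and field names are illustrative, and this omits the real pool's pendingSetups gate, LRU cap, and the refcount-blocks-eviction rule:

```go
package main

import (
	"sync"
	"time"
)

// entry is one pooled tunnel slot, keyed on instanceID.
type entry struct {
	created  time.Time
	refs     int
	poisoned bool
	setups   int // counts real tunnel setups, to show amortisation
}

type tunnelPool struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]*entry
}

func newPool(ttl time.Duration) *tunnelPool {
	return &tunnelPool{ttl: ttl, m: map[string]*entry{}}
}

// acquire returns the shared entry for id, building fresh when the cached
// entry is missing, expired, or poisoned by a prior tunnel-fatal error.
func (p *tunnelPool) acquire(id string) *entry {
	p.mu.Lock()
	defer p.mu.Unlock()
	e, ok := p.m[id]
	if !ok || e.poisoned || time.Since(e.created) > p.ttl {
		setups := 1
		if ok {
			setups = e.setups + 1
		}
		e = &entry{created: time.Now(), setups: setups}
		p.m[id] = e
	}
	e.refs++
	return e
}

// release drops a reference; a fatal error poisons the entry so the next
// acquire rebuilds rather than reusing a dead tunnel.
func (p *tunnelPool) release(id string, fatal bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if e := p.m[id]; e != nil {
		e.refs--
		if fatal {
			e.poisoned = true
		}
	}
}
```

Back-to-back file ops on the same instance within the TTL share one setup; a poisoned entry costs exactly one extra setup on the next acquire.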
### 2. Canvas-side: serial fan-out + duplicate fetch (~20% of the win)
`canvas/src/components/tabs/ConfigTab.tsx::loadConfig` awaited 3 independent
metadata GETs (`/workspaces/{id}`, `/model`, `/provider`) serially.
`AgentCardSection` fired a SECOND `/workspaces/{id}` from its own useEffect.
Fix: Promise.all over the 3 metadata GETs (each leg keeps its existing
.catch fallback semantics). AgentCardSection now reads `agentCard` from
the canvas store (`useCanvasStore`) instead of refetching — the canvas
already hydrates `node.data.agentCard` from the platform event stream.
Defensive selector handles test mocks without a `nodes` array.
## Verification
- `go test ./internal/handlers/` 5.07s green (full handlers package, including
10 new pool tests)
- `go vet ./internal/handlers/` clean
- `npx vitest run` — 1380/1380 canvas unit tests pass (2 test FILES fail on
a pre-existing xyflow CSS-load issue in vitest config, unrelated to this
change)
- `npx tsc --noEmit` clean
Live wall-time verification deferred to Phase 4 / E2E (canvas browser session
required; external probe blocked by 403 since the canvas auth chain is
session-cookie + Origin header, not a bearer token I can fabricate).
## Backwards compatibility
API surface unchanged. All 4 EIC handler callers use the rebound var; no
caller migration. Pool defaults to enabled (TTL=50s); tests can disable by
setting poolTTL=0 or by overwriting withEICTunnel directly (existing stub
pattern in template_files_eic_dispatch_test.go preserved).
## Hostile self-review (3 weakest spots)
1. `fnErrIndicatesTunnelFault` is a substring grep on err.Error() — the
marker list is hand-curated and ssh client error formats vary across
OpenSSH versions. A future ssh that reports a tunnel failure via a
phrasing not in the list would NOT poison the entry → next callers reuse
a dead tunnel until TTL evicts. Acceptable: TTL bounds the impact (≤50s
of bad reuse), and the heuristic covers every tunnel-error shape that
appears in the existing test fixtures and known incidents.
2. `acquire`'s for-loop has unbounded retry potential under pathological
churn (signal closed → new acquirer → setup fails → repeat). No bounded
retry counter. Today there is no test exercise for "flaky setup that
succeeds-then-fails-then-succeeds"; if observability ever shows this
shape, add a max-retry guard. Filed as a known limitation, not blocking.
3. The substring assertion `strings.Contains` style I used for tunnel-fault
classification could false-positive on app-level error messages that
happen to contain "permission denied" or "broken pipe" verbatim. The
classification test covers the discriminator but only against the
error shapes we know today. Acceptable: poisoning errs on the side of
building fresh, which is correct-but-slightly-slow rather than incorrect.
## Phase 4 / E2E plan
- Live timing of the canvas detail-panel open against a real workspace
(browser session, not external probe).
- Target: perceived latency under 2s on warm pool. Cold open still pays
one tunnel setup (~3-5s) — the pool buys you the SECOND through Nth
panel-open within the TTL window.
- Memory `feedback_chase_verification_to_staging` applies — will not
declare done at PR-merge; will follow through to user-visible behavior
on staging.
Closes the SSOT story shipped in PR-C/D: canvas now consumes the typed
/chat-history endpoint instead of /activity?type=a2a_receive, and the
server emits messages in display-ready chronological order so the
client doesn't have to re-order them.
## Canvas (consumer migration)
- loadMessagesFromDB swaps from /activity to /chat-history.
- Drops type=a2a_receive + source=canvas params (server applies the
filter centrally now).
- Drops [...activities].reverse() — wire is already display-ready.
- Drops the local INTERNAL_SELF_MESSAGE_PREFIXES constant +
isInternalSelfMessage helper. Server-side IsInternalSelfMessage
applies the same predicate before emitting rows.
- Drops the activityRowToMessages + ActivityRowForHydration imports
from historyHydration.ts. The TS parser stays in tree because
message-parser.ts is still load-bearing for live A2A WebSocket
messages (ChatTab.tsx:805, AgentCommsPanel.tsx, canvas-events.ts).
## Server (row-aware wire-order fix)
The pre-PR-C-2 client did `[...activities].reverse()` over ROWS, then
flattened each row into [user, agent] messages. The reversal was
ROW-aware. After PR-C/D, the server returned a flat ChatMessage slice
in `ORDER BY created_at DESC` order, with [user, agent] within each
row. A naive client-side flat reverse would FLIP each pair (agent
before user at same timestamp).
Two ways to fix it:
A) Server emits oldest-first within page; canvas does NOT reverse.
B) Canvas does row-aware reversal (group by timestamp, reverse).
Option A is cleaner — server owns the wire-order responsibility, every
client trusts `for m of messages` to render chronologically. Server
adds reverseRowChunks() that:
1. Groups consecutive same-Timestamp messages into row chunks
(1-2 messages per row).
2. Reverses the chunk order (newest-row-first → oldest-row-first).
3. Flattens. Within-chunk [user, agent] order is preserved.
Single-message rows (agent reply not yet recorded, attachments-only
user upload) collapse to 1-element chunks and reverse correctly too.
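The three steps above can be sketched directly. Msg is a pared-down stand-in for the wire ChatMessage (only the fields the reversal needs); the real implementation lives server-side in the messagestore:

```go
package main

// Msg stands in for a wire chat message; Timestamp equality defines
// row membership, per the grouping rule above.
type Msg struct {
	Timestamp string
	Role      string
}

// reverseRowChunks turns a newest-row-first flat slice into
// oldest-row-first, preserving [user, agent] order within each row.
func reverseRowChunks(in []Msg) []Msg {
	// 1. Group consecutive same-Timestamp messages into row chunks.
	var chunks [][]Msg
	for _, m := range in {
		n := len(chunks)
		if n > 0 && chunks[n-1][0].Timestamp == m.Timestamp {
			chunks[n-1] = append(chunks[n-1], m)
		} else {
			chunks = append(chunks, []Msg{m})
		}
	}
	// 2. Reverse the chunk order; 3. flatten, keeping within-chunk order.
	out := make([]Msg, 0, len(in))
	for i := len(chunks) - 1; i >= 0; i-- {
		out = append(out, chunks[i]...)
	}
	return out
}
```

Single-message rows become 1-element chunks, so they reverse correctly with no special-casing.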
## Tests
Server: 3 new unit tests on reverseRowChunks (paired across rows,
single-message rows, empty input) + 1 sqlmock integration test on
List() that drives the full SQL → reverse → wire path. Mutation-tested:
removed `messages = reverseRowChunks(messages)` from List(), confirmed
the integration test fires red with all 4 misordered indices flagged.
Restored, all 25 messagestore tests + 9 chat-history handler tests
green.
Canvas: 8 lazyHistory pagination tests refactored to mock
/chat-history (not /activity) and assert against the new wire shape
({messages, reached_end} not raw activity rows). All 1389/1389 vitest
tests green; tsc --noEmit clean.
## Three weakest spots (hostile-reviewer self-pass)
1. reverseRowChunks groups by Timestamp string equality. If two
distinct rows had the SAME timestamp (legitimately possible at sub-
millisecond granularity), the algorithm would treat them as one
chunk and not reverse them relative to each other. Mitigated:
activity_logs.created_at uses microsecond resolution; concurrent
inserts at exact-same microsecond are vanishingly rare. If a
collision happens, the within-chunk order is whatever the SQL
returned — both rows render at the same timestamp, no user-visible
misordering.
2. The pre-existing TS parser files (historyHydration.ts +
message-parser.ts) stay in tree. historyHydration.ts is now dead
code (no consumers post-migration); deletion is parked as a follow-
up after a one-week observation window confirms no live-message
consumer reaches it.
3. canvas's loadMessagesFromDB returns `resp.messages ?? []`. If the
server were ever to return `null` instead of `[]` (it currently
doesn't — handler defensively coerces nil to []), the nullish coalesce
keeps the canvas from crashing. A stricter wire schema would assert
the never-null invariant; for today's pragmatic safety, the ?? is
enough.
## Security review
- Untrusted input? Same as PR-C — agent JSON parsed defensively in
the messagestore parser. No new exposure.
- Trust boundary? Same. Canvas → /chat-history → wsAuth → messagestore.
- Output sanitization? Plain text + opaque attachment URIs as before.
No security-relevant changes beyond what /chat-history already
exposes via PR-C. Considered, not skipped.
## Versioning / backwards compat
- /activity endpoint unchanged.
- /chat-history endpoint shape unchanged (still {messages, reached_end});
only the wire ORDER within a page changed (newest-first row → oldest-
first row). Canvas is the only consumer in tree; no API consumers
depend on the previous order.
- canvas's loadMessagesFromDB call signature unchanged — internal
refactor.
The five `mock.ExpectQuery(\`SELECT id FROM workspaces\`)` sites used a
loose substring regex that silent-passed three regression shapes #2872
called out:
1. `WHERE parent_id = $2` (drops `IS NOT DISTINCT FROM` — breaks
NULL-parent root matching)
2. `WHERE name = $1` only (drops parent_id check entirely — hijacks
siblings of the same name across different parents)
3. Drops `AND status != 'removed'` (blocks re-import after Collapse)
Extracts a `lookupChildSQLRE` const that anchors all four load-bearing
tokens (the SELECT/FROM, the name predicate, the IS NOT DISTINCT FROM
predicate, and the status filter). All five ExpectQuery sites now use
the same const so a future schema/predicate change fails one place.
Mutation-tested per memory feedback_assert_exact_not_substring.md:
- Replacing `IS NOT DISTINCT FROM` with `=` fails
TestLookupExistingChild_NilParent_MatchesRoot.
- Dropping `AND status != 'removed'` fails
TestLookupExistingChild_Found_ReturnsIDAndTrue.
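The anchored pattern might look like the sketch below — the exact regex in the PR may differ, but the property is the same: all four load-bearing tokens must appear, so each of the three regression shapes fails to match:

```go
package main

import "regexp"

// lookupChildSQLRE anchors the load-bearing tokens of the child-lookup
// query: the SELECT/FROM, the name predicate, the IS NOT DISTINCT FROM
// parent predicate, and the status filter. A mutated query missing any
// one of them no longer matches.
var lookupChildSQLRE = regexp.MustCompile(
	`SELECT id FROM workspaces\s+WHERE name = \$1` +
		`\s+AND parent_id IS NOT DISTINCT FROM \$2` +
		`\s+AND status != 'removed'`)
```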
Note: #2872 PR-A (AST gate strengthening) is already addressed inline —
findWorkspacesInsertSQL + TestCreateWorkspaceTree_InsertUsesOnConflictDoNothing
pin the ON CONFLICT DO NOTHING shape, which is a strictly stronger
gate than the original lookup-before-insert ordering check.
The deprovision path marks `workspaces.status='removed'` BEFORE calling
the controlplane DELETE. If that CP call fails (transient 5xx, network
hiccup, AWS provider error), the DB row stays at 'removed' with
`instance_id` populated and there's no retry — the EC2 lives forever.
9 prod orphans accumulated over 3 days under this bug.
Adds a SaaS-mode counterpart to the existing Docker `orphan_sweeper`:
- 60s tick (matches the Docker sweeper cadence)
- LIMIT 100 per cycle so a sustained CP outage drains over multiple
cycles without blowing the request timeout
- Re-issues `cpProv.Stop` for any workspace at status='removed' with a
non-NULL `instance_id`. Stop is idempotent (AWS terminate on
already-terminated is a no-op; CP's Deprovision tolerates already-
deleted DNS) so retries are safe.
- On Stop success, NULLs `instance_id` so the next cycle skips the row.
- On Stop failure, leaves `instance_id` populated for next cycle.
The existing Docker sweeper is gated on `prov != nil`; the new sweeper
is gated on `cpProv != nil`. SaaS tenants get exactly one of the two,
self-hosted tenants get the Docker one — no overlap.
Why this shape over option A (CP-first ordering) or B (durable outbox):
the existing inline path already returns a loud 500 to the user when
CP fails — the only missing piece is automatic retry, which a 60s
sweeper provides without protocol changes, new tables, or new workers.
~30 LOC of production code vs. ~400 for an outbox. RFC discussion in
#2989 comment chain.
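One sweep cycle reduces to the shape below (a sketch — type names are invented, and the real sweeper does the SELECT/UPDATE against the DB rather than taking a slice):

```go
package main

import "errors"

// errCP simulates a transient controlplane failure for the example below.
var errCP = errors.New("controlplane unavailable")

// Orphan is a status='removed' row that still has an instance_id.
type Orphan struct {
	WorkspaceID string
	InstanceID  string
}

// Stopper re-issues the idempotent cpProv.Stop for one instance.
type Stopper func(instanceID string) error

// sweepOnce retries Stop for each orphan. On success the real code NULLs
// instance_id so the next cycle skips the row; on failure the row is left
// populated for the next 60s tick. Returns the still-orphaned set.
func sweepOnce(orphans []Orphan, stop Stopper) []Orphan {
	var remaining []Orphan
	for _, o := range orphans {
		if err := stop(o.InstanceID); err != nil {
			remaining = append(remaining, o) // retry next cycle
			continue
		}
		// success path: UPDATE workspaces SET instance_id = NULL ...
	}
	return remaining
}
```

One orphan failing does not block the rest of the batch, matching the one-fails-others-still-process test listed below.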
Tests:
- 9 unit tests covering happy path, Stop failure, UPDATE failure,
multiple orphans (one-fails-others-still-process), DB query error,
nil-DB defense, nil-reaper short-circuit, and the boot-immediate-then-
tick cadence contract.
- Mutation-tested: status='running' substitution and removed-UPDATE-
block both fail at least one test.
Out of scope:
- Backfilling the 9 named orphans — they'll heal automatically on the
first sweep cycle after this lands; no manual cleanup needed.
- Long-term durable-outbox architecture — separate RFC.
Allows MOLECULE_IMAGE_REGISTRY env override on the tenant workspace-server. Used to flip from ghcr.io/molecule-ai → private ECR mirror after the GitHub org suspension on 2026-05-06. Default unchanged for OSS users.
Closes #6.
Add MOLECULE_IMAGE_REGISTRY env var to override the registry prefix used
by all workspace-template image references. Defaults to ghcr.io/molecule-ai
(unchanged for OSS users); set to an ECR URI in production tenants when
mirroring to AWS.
Why this matters: GitHub suspended the Molecule-AI org on 2026-05-06 with
no warning. Production tenants kept running because they had images cached
locally, but any tenant restart (AWS health event, redeploy, OS reboot)
would have failed at `docker pull ghcr.io/molecule-ai/...` because GHCR
returned 401. This change introduces the seam needed to point new pulls at
a registry we control (AWS ECR) by flipping a single env var on Railway.
Design (RFC: molecule-ai/internal#6):
- New `RegistryPrefix()` function in `provisioner/registry.go` reads
MOLECULE_IMAGE_REGISTRY, falls back to "ghcr.io/molecule-ai".
- New `RuntimeImage(runtime)` returns the canonical ref using the prefix.
- `RuntimeImages` map computed at init via `computeRuntimeImages()` so
existing callers that range over it still work.
- `DefaultImage` likewise computed via `RuntimeImage(defaultRuntime)`.
- `handlers.TemplateImageRef()` switched from hardcoded format string to
`provisioner.RegistryPrefix()`.
- `runtime_image_pin.go::resolveRuntimeImage()` automatically inherits
the prefix change because it reads from `provisioner.RuntimeImages[]`
and only re-formats the tag suffix to a digest pin.
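The seam reduces to a couple of small functions, sketched below. The env-fallback logic matches the description above; the exact image-name and tag format is an assumption for illustration:

```go
package main

import (
	"fmt"
	"os"
)

// registryPrefixFrom isolates the fallback rule so it is testable
// without touching process env.
func registryPrefixFrom(env string) string {
	if env != "" {
		return env
	}
	return "ghcr.io/molecule-ai"
}

// RegistryPrefix reads MOLECULE_IMAGE_REGISTRY, falling back to GHCR.
func RegistryPrefix() string {
	return registryPrefixFrom(os.Getenv("MOLECULE_IMAGE_REGISTRY"))
}

// RuntimeImage returns a canonical ref under the active prefix.
// The path/tag shape here is illustrative, not the repo's literal format.
func RuntimeImage(runtime string) string {
	return fmt.Sprintf("%s/workspace-template-%s:latest", RegistryPrefix(), runtime)
}
```

Flipping the env var on Railway changes every ref at once; unset, behavior is byte-identical to today.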
Alternatives rejected (see RFC):
- Multi-registry fallback chain (try ECR, fall back to GHCR): GHCR is
locked from outbound for our org, so the fallback never works for us.
Adds code complexity for no benefit.
- Hardcoded ECR-only switch: couples production code to a specific
deployment environment. OSS users self-hosting Molecule would need
the upstream GHCR.
- Self-hosted Harbor / registry-on-Hetzner: adds a component to operate.
Not justified at 3-tenant scale; AWS ECR is mature and IAM-integrated.
Auth — deliberately NOT changed in this commit:
- For GHCR, the existing `ghcrAuthHeader()` reads GHCR_USER/GHCR_TOKEN.
- For ECR, EC2 user-data installs `amazon-ecr-credential-helper` and adds
a `credHelpers` entry in `~/.docker/config.json` so the daemon resolves
ECR credentials via the EC2 instance role on every pull. The Go code
needs no auth change. This keeps the diff minimal.
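For concreteness, the `credHelpers` entry in `~/.docker/config.json` would look roughly like this (the registry host matches the ECR URI used elsewhere in this migration; `ecr-login` is the helper key the amazon-ecr-credential-helper binary registers as):

```json
{
  "credHelpers": {
    "153263036946.dkr.ecr.us-east-2.amazonaws.com": "ecr-login"
  }
}
```

With this in place the Docker daemon resolves ECR credentials from the EC2 instance role on every pull, with no token in the Go code path.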
Backwards compatibility:
- Additive: env unset → identical behavior to today (GHCR).
- Existing tests reference literal `ghcr.io/molecule-ai/...` strings;
they continue to pass under the default prefix.
- `RuntimeImages` map preserved for callers that iterate it.
- No interface, schema, API, or migration version bump needed.
Security review:
- No untrusted input: MOLECULE_IMAGE_REGISTRY is set at deploy time
(Railway env, EC2 user-data), not by users.
- No expanded data collection or logging changes.
- No new permissions: ECR pull permission is a future user-data + IAM
role change, separate from this code change.
- Worst-case: an attacker who already compromises Railway can swap the
registry prefix to a malicious URI — same blast radius as compromising
Railway today, no expansion.
Tests:
- 9 new unit tests in `registry_test.go` covering: default fallback,
env override, empty env, all 9 known runtimes, unknown runtime,
override-applies-to-all, computeRuntimeImages map population, env
reflection, alphabetical ordering pin.
- All existing provisioner + handlers tests continue to pass.
- Mutation-tested mentally: deleting `if v := os.Getenv(...)` makes
TestRegistryPrefix_RespectsEnv fail. Deleting `for _, r := range
knownRuntimes` makes TestRuntimeImage_AllKnownRuntimes fail. The test
suite would catch a regression of the original failure mode.
Rollout plan: this PR is safe to merge with no env change. Production
cutover happens by setting MOLECULE_IMAGE_REGISTRY on Railway after
the AWS ECR mirror is populated (separate ops change, tracked in
issue #6 phases 3b–3f).
Tracking:
- RFC: molecule-ai/internal#6
- Tasks: #97 (ECR setup), #98 (CP fallback)
- Tech debt: runbooks/hetzner-rollout-tech-debt-2026-05-06.md item 7
Closes #3026. Final piece of RFC #2945.
## What's new
New package internal/messagestore/ holds:
- MessageStore interface — single read-side contract operators
implement to plug in alternative chat-history backends.
- ChatMessage / ChatAttachment / ListOptions types — canonical data
shapes returned by any impl, mirrors canvas's TS ChatMessage.
- PostgresMessageStore — platform-default impl wrapping the
activity_logs query + A2A-envelope parser ported in PR-C.
Behavior is byte-identical to the pre-PR-D handler.
## What moves
The activity_logs query, the parser (activityRowToChatMessages,
extractRequestText, extractChatResponseText, extractFilesFromTask,
etc.), and the internal-self-message predicate all migrate from
internal/handlers/chat_history.go into the new package. handlers/
chat_history.go becomes a thin HTTP-shape adapter:
parse query params → store.List(ctx, workspaceID, opts) → emit JSON
Compile-time interface assertion in postgres_store.go catches future
drift if the interface evolves and the impl falls behind.
## Why this PR
OSS operators wanting to:
- Tier hot/warm/cold storage (recent in Postgres, archival in S3)
- Use a vector store with hybrid search (Pinecone, Weaviate)
- Run an in-memory store for ephemeral test environments
- Federate history across regions
…had no extension point — they'd have to fork the handler. This PR
makes that a constructor swap at router.go.
## Tests
Parser-level (22 tests, MOVED to internal/messagestore/postgres_
store_test.go): every TS test case in
canvas/src/components/tabs/chat/__tests__/historyHydration.test.ts
has a Go counterpart. Timestamp preservation, user/agent extraction,
internal-self filter, role decision (status=error vs agent-error
prefix), v0/v1 file shapes, malformed JSON resilience.
Handler-level (9 NEW tests in internal/handlers/chat_history_test.go):
thin adapter coverage using a fake MessageStore. UUID validation,
before_ts RFC3339 validation, default limit, max-limit clamp,
invalid-limit fallback, before_ts passthrough, empty-array (not
null) JSON shape, attachment shape preservation, store-error → 502
mapping.
Compile-time interface conformance: PostgresMessageStore satisfies
MessageStore, fakeStore (test fake) satisfies MessageStore.
Mutation-tested. Removed UUID validation in the handler; confirmed
TestChatHistoryHandler_RejectsNonUUIDWorkspaceID fires red (status
200 instead of 400, non-UUID reaches the store). Restored, all
green.
Full handlers + messagestore + router test runs green; full repo
go test ./... green.
## SSOT decision
ChatMessage / ChatAttachment / parser / DB query all live in
internal/messagestore/ ONLY. handlers/chat_history.go imports the
package and uses the types via messagestore.ChatMessage etc. — no
re-declaration anywhere.
## Three weakest spots (hostile-reviewer self-pass)
1. The internal-self prefix list (Delegation results are ready...) is
a package var in messagestore/postgres_store.go. A future impl
that wants to override the predicate must reach into the package
to use IsInternalSelfMessage or define its own. Acceptable: the
predicate is part of the contract; if an impl wants different
semantics it owns that decision explicitly.
2. ListOptions has Limit + BeforeTS + HasBefore; future paging needs
(after_ts, peer_id filter, role filter) require adding struct
fields, which is a soft API break for any impl that constructs
ListOptions positionally. Mitigated by Go's struct-literal
convention (named fields by default); also flagged in the
interface comment for impl authors.
3. The handler does NOT log when a store returns an error — it just
maps to 502. An impl that wants to surface its error class up the
stack can't, today. If/when an impl needs that, the interface can
add a typed-error contract in a follow-up. Today's coverage is
sufficient: most ops issues land in the store impl's own logs.
## Security review
- Untrusted input? Same as PR-C — agent-emitted JSON parsed
defensively. New fakeStore in tests can't reach production.
- Trust boundary? Same. Interface lives BEHIND wsAuth; impls only
see workspace IDs already authenticated.
- Auth/authz? Inherited from handler; the interface doesn't
authenticate.
- PII / secrets in logs? Documented in the interface contract:
impls MUST NOT log full message bodies / attachment URIs. The
Postgres impl logs nothing on the happy path.
- Output sanitization? Same plain-text + opaque-URI surface as
PR-C. Canvas validates attachment-URI schemes.
No security-relevant changes beyond what /chat-history already
exposes via PR-C. Considered, not skipped.
## Versioning / backwards compat
- New internal package. Zero public API change.
- Single caller site in router.go updated (one-line constructor
change). NewChatHistoryHandler() → NewChatHistoryHandler(store).
- No schema change, no migration.
- Existing /chat-history endpoint unchanged on the wire — clients
don't notice the refactor.
## Phasing
This is the final RFC #2945 piece. Follow-ups parked:
- PR-C-2 (canvas migration): swap canvas loadMessagesFromDB to call
/chat-history instead of /activity. Independent of this PR;
blocked only by canvas team's calendar.
- Sample alternative impls (S3, in-memory) for OSS docs: separate
PR when the first OSS consumer materializes; demonstration code
untested against a real workload is anti-pattern.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Closes the SSOT gap for chat-history hydration: today every consumer
(canvas TS) re-implements an A2A-envelope walk to map activity_logs
rows into rendered ChatMessage objects. This PR moves that walk into
the server.
## What's added
GET /workspaces/:id/chat-history?limit=N&before_ts=T
Returns:
```
{
  "messages": [
    {"id": "<uuid>", "role": "user"|"agent"|"system",
     "content": "...", "attachments": [...], "timestamp": "<RFC3339>"}
  ],
  "reached_end": false
}
```
Auth chain: same wsAuth as /workspaces/:id/activity (tenant ADMIN_TOKEN
+ X-Molecule-Org-Id). No new trust boundary.
Filter: a2a_receive rows with source_id IS NULL — same canvas-source
filter the canvas applies via /activity?type=a2a_receive&source=canvas,
centralized so future API consumers don't need to know it.
## What's mirrored from canvas TS
Direct port of canvas/src/components/tabs/chat/historyHydration.ts
+ message-parser.ts:
- extractRequestText / extractFilesFromUserMessage — user-side parts
walk through request_body.params.message.parts[]
- extractChatResponseText — agent-side response_body collector across
the four shapes (string, A2A JSON-RPC parts, older nested
parts.root.text, task artifacts) joined with "\n" (matches canvas
multi-source collector — claude-code emits multiple text parts;
hermes emits summary+artifacts)
- extractFilesFromResponse / extractFilesFromTask — file walk across
parts[] + artifacts[].parts[] + status.message.parts[] +
message.parts[]
- v0 hot path ({kind:"file", file:{...}}) AND v1 protobuf flat shape
({url, filename, mediaType}) both supported
- Role decision: status='error' OR text starts with "agent error"
(case-insensitive) → "system", else "agent"
- isInternalSelfMessage prefix filter (Delegation results are
ready...)
- Timestamp pinned to row.created_at (regression cover for
2026-04-25 bubble-collapse bug)
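The role decision above can be sketched as (helper name illustrative, not the real parser function):

```go
package main

import (
	"fmt"
	"strings"
)

// roleFor: status='error' or an "agent error" prefix (case-insensitive)
// promotes the bubble to "system"; otherwise agent-side rows render
// as "agent".
func roleFor(status, text string) string {
	if status == "error" || strings.HasPrefix(strings.ToLower(text), "agent error") {
		return "system"
	}
	return "agent"
}

func main() {
	fmt.Println(roleFor("error", "done"))           // system
	fmt.Println(roleFor("ok", "Agent Error: boom")) // system
	fmt.Println(roleFor("ok", "done"))              // agent
}
```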
## Tests
22 unit tests in chat_history_test.go; every TS test case in
historyHydration.test.ts has a Go counterpart:
Timestamp preservation (3): user/agent pin to created_at, two-rows
produce two distinct timestamps.
User-message extraction (5): text-only, internal-self skip,
null body, attachments hydrated, attachments-only-when-text-empty,
internal-self suppresses even with attachments.
Agent-message extraction (4): result-string, status=error→system,
agent-error-prefix→system, response_body.parts attachments,
null body, no-text-no-files-no-bubble.
End-to-end (1): paired user+agent same timestamp.
Go-specific (5): malformed JSON returns empty (no panic), v1
protobuf flat shape extraction, task-artifacts extraction, older
nested root.text shape, basename helper edge cases.
isInternalSelfMessage predicate (1): prefix match, non-prefix non-
match, empty-text non-match.
Mutation-tested. Removed the role-promotion branch (status=error +
agent-error prefix → system); confirmed both
TestChatHistory_RoleSystemWhenStatusError and
TestChatHistory_RoleSystemWhenAgentErrorPrefix fire red. Restored.
Both green.
Full handlers test suite (4.3s) green; full repo `go test ./...` green.
## SSOT decision
Parsing logic lives in workspace-server/internal/handlers/chat_history.go
ONLY. Canvas keeps historyHydration.ts + message-parser.ts during the
transition because:
- PR-C-2 (follow-up): canvas loadMessagesFromDB swaps to new
endpoint. Today's canvas still calls /activity for backward
compatibility.
- The TS parsers are still load-bearing for LIVE message handling
(WebSocket A2A_RESPONSE events) until RFC #2945 PR-B-2 mirrors
the typed event payloads to canvas consumers.
Canvas's TS path will be deleted in a separate PR after a one-week
observation window confirms no live-message consumers depend on it.
## Security review
- Untrusted input? YES — request_body and response_body come from
agents (potentially OSS / third-party). Defensive: any malformed
JSON returns empty content + no attachments, no panic. Tested
via TestChatHistory_MalformedJSONInRequestBodyReturnsEmpty.
- Trust boundary? Same as today: agent → workspace-server.
No new boundary; reuses existing wsAuth middleware.
- Auth/authz? Inherits wsAuth chain. Cross-workspace access blocked
by existing TenantGuard middleware.
- PII / secrets in logs? None. The handler logs nothing on the
happy path; errors log 502 without body content.
- Output sanitization? ChatMessage.content is plain text returned
as-is; canvas already sanitizes via ReactMarkdown. Attachment
URIs are agent-provided (workspace: / platform-pending: /
https:); canvas's existing scheme allow-list still applies.
## Versioning / backwards compatibility
- New endpoint /chat-history. /activity unchanged.
- Canvas historyHydration.ts + message-parser.ts intact during
transition (will be removed in PR-C-2 follow-up).
- No public API consumer of /activity is broken — added route is
additive.
- No semver bump (server is internal versioning).
## Three weakest spots (hostile-reviewer self-pass)
1. extractRequestText returns ONLY parts[0].text. If a user message
contains multiple text parts (uncommon — canvas only ever emits
one), we lose later parts. Matches canvas exactly today, but a
future change that emits multi-text user messages needs both
parsers updated. Documented in code; covered by test if/when
added.
2. activityRowToChatMessages rebuilds ChatMessage IDs every call (no
caching). Each chat reload mints fresh UUIDs. This is fine because
canvas dedupes by (role, content, timestamp window) not id, but a
future API consumer that DID rely on id stability would break.
Documented in the ChatMessage struct comment.
3. The handler scopes to source_id IS NULL only (canvas-source rows).
A future "show all messages, including agent-to-agent" mode would
need a new endpoint or a parameter. Out of scope for PR-C; canvas's
/activity?source=canvas already enforces the same filter.
Closes #3017. Unblocks RFC #2945 PR-D (MessageStore interface) which
returns []ChatMessage typed values.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #2962.
## Why
Six per-package `truncate` helpers had drifted into independent
re-implementations of the same idea. Three of them (delegation.go,
memory/client/client.go, memory-backfill/verify.go) used
`s[:max] + "…"` byte-slice form, which on a multi-byte codepoint at
byte `max` produces invalid UTF-8 → Postgres `text`/`jsonb` rejects
the INSERT silently → `delegation` / `activity_logs` row never lands
→ audit gap.
Three other helpers (delegation_ledger.go #2962, agent_message_writer.go
#2959, scheduler.go #2026) had each been fixed in isolation with three
slightly different rune-safe shapes — confirming this is a class of
bug, not a single instance.
## What
New package `internal/textutil` with three rune-safe functions:
- `TruncateBytes(s, maxBytes)` — byte-cap, "…" marker. Used by 5
callers writing into byte-bounded columns / log lines.
- `TruncateBytesNoMarker(s, maxBytes)` — byte-cap, no marker. Used by
delegation_ledger.go where the storage already conveys "preview"
and an extra ellipsis would push the result over the column cap.
- `TruncateRunes(s, maxRunes)` — rune-cap, "…" marker. Used by
agent_message_writer.go where the cap is in display chars (UI
summary), not bytes.
All three guarantee `utf8.ValidString(out)` for any `utf8.ValidString(in)`.
Inputs already invalid go through `sanitizeUTF8` at the call site
boundary (scheduler.go preserved this defense-in-depth).
## Migration map
| Old | New | Behavior change |
|---|---|---|
| `delegation_ledger.truncatePreview` | `textutil.TruncateBytesNoMarker(s, 4096)` | none |
| `agent_message_writer.truncatePreviewRunes` | `textutil.TruncateRunes(s, n)` | none |
| `scheduler.truncate` | `textutil.TruncateBytes(s, n)` | "..." → "…" (3 bytes either way; single-glyph display) |
| `delegation.truncate` | `textutil.TruncateBytes(s, n)` | bug fix + ellipsis swap |
| `memory/client.truncate` | `textutil.TruncateBytes(s, n)` | bug fix |
| `memory-backfill.truncate` | `textutil.TruncateBytes(s, n)` | bug fix |
Five separate `truncate*` helpers + their per-package tests removed.
Net: 12 files / +427 / -255.
## Tests
- `internal/textutil/truncate_test.go` — 27 table-test cases + 145
fuzz-invariant cases asserting `utf8.ValidString` and byte-cap
invariants on every output.
- `delegation_ledger_test.go TestLedgerInsert_TruncatesOversizedPreview`
strengthened with `capValidUTF8Matcher` so the SQL-write argument
is asserted to be valid UTF-8 + within cap (not just `AnyArg()`).
Mutation-tested: replacing the SSOT call with byte-slice form makes
this test fail loud.
## Compatibility
- All callers internal; no external API surface change.
- Ellipsis swap "..." → "…": same byte budget (3 bytes), single-glyph
display. No alerting/grep on either marker in this codebase
(verified). Canvas renders both correctly.
- DB column widths unchanged (4096 / 80 / 200 / 256 / 300 — all
preserved in the migrations).
## Security
Fixes a silent INSERT-failure mode that hid `activity_logs` /
`delegations` rows containing peer-controlled text. The class of input
that triggered it (CJK, emoji, accented Latin) is normal user content,
not malicious — but the symptom (audit gap) makes incident
reconstruction harder. Helper is pure-function over `string`; no
secrets / PII / auth handling involved. Untrusted input is handled
identically to before, just rune-aligned now.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The migration-replay step globbed only *.up.sql, silently skipping
the older flat-naming migrations (001_workspaces.sql,
009_activity_logs.sql, etc.). Fine while no integration test
depended on those tables; broke when the #149 cross-table
atomicity test came in needing both workspaces (FK target for
activity_logs) and activity_logs themselves.
Switch to globbing *.sql + sorted lex-order, excluding *.down.sql
so up/down pairs don't undo themselves mid-run. Add a sanity check
for workspaces + activity_logs + pending_uploads alongside the
existing delegations gate so a future migration drift fails loud
instead of silently skipping the regressed test.
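The glob-and-filter rule can be sketched as a pure function (name and the surrounding harness are illustrative; the real replay step may be shell):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// filterUpMigrations keeps every *.sql except *.down.sql (so up/down
// pairs don't undo themselves mid-run) and applies lexical order.
func filterUpMigrations(names []string) []string {
	var ups []string
	for _, f := range names {
		if strings.HasSuffix(f, ".down.sql") {
			continue
		}
		ups = append(ups, f)
	}
	sort.Strings(ups)
	return ups
}

func main() {
	// Flat-named legacy migrations and a timestamped up/down pair.
	fmt.Println(filterUpMigrations([]string{
		"009_activity_logs.sql",
		"001_workspaces.sql",
		"20260506000000_workspaces_unique_parent_name.up.sql",
		"20260506000000_workspaces_unique_parent_name.down.sql",
	}))
}
```

Under the old `*.up.sql` glob, both flat-named files above would have been skipped; this form keeps them and still excludes the down migration.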
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds two real-Postgres tests under //go:build integration:
- TestIntegration_PollUpload_AtomicRollback_AcrossBothTables exercises
the helpers in the same Tx shape uploadPollMode does (PutBatchTx +
LogActivityTx + Rollback) and asserts COUNT(*)=0 on BOTH
pending_uploads AND activity_logs after the rollback. Failure
injection: NUL byte in `summary` triggers lib/pq protocol rejection
on the second activity insert — same trick the existing PutBatch
AtomicRollback test uses.
- TestIntegration_PollUpload_HappyPath_AcrossBothTables is the positive
counterpart — Commit lands N rows in both tables.
Coverage rationale (post-PR-3010 review):
- sqlmock unit test (TestPollUpload_AtomicRollbackOnActivityInsertFailure)
proved the handler calls Begin/Exec/Exec-fail/Rollback in order.
- Existing PutBatch integration test proved Postgres honors rollback
for pending_uploads alone.
- New tests close the cross-table gap: prove LogActivityTx + PutBatchTx
+ real Postgres MVCC compose correctly under rollback.
A regression that made LogActivityTx silently route through db.DB
instead of the passed tx would still pass the sqlmock test (the
Begin/Commit/Rollback shape would look right) but would fail this
integration test (the activity_logs row would survive the rollback).
Verified locally: postgres:15-alpine + all migrations applied, both
tests pass in 0.1s. Skips cleanly without INTEGRATION_DB_URL — CI
already runs this file via the Handlers Postgres Integration job.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Bug
`/org/import` had no per-tenant mutex, advisory lock, or DB-level
uniqueness on (parent_id, name). The pattern was lookup-then-insert:
```go
existingID, existing, err := h.lookupExistingChild(...) // SELECT
if existing { return /* skip */ }
db.DB.ExecContext(ctx, `INSERT INTO workspaces ...`)    // INSERT
```
Two concurrent admin POSTs (rapid double-click in canvas, retry-after-
timeout, two operators on the same template) both saw "not found" in
the SELECT and both INSERT'd the same (parent_id, name).
Captured impact: tenant-hongming accumulated 72 stale child workspaces
in 4 days from repeated org-template spawns of the same template
(see #2857 phase 4 sweeper for the cleanup; #2872 for the prevention RFC).
## Fix
Two-layer fix — DB-level backstop AND application-level happy path:
1. **Migration** `20260506000000_workspaces_unique_parent_name.up.sql`
```sql
CREATE UNIQUE INDEX CONCURRENTLY IF NOT EXISTS workspaces_parent_name_uniq
ON workspaces (
COALESCE(parent_id, '00000000-0000-0000-0000-000000000000'::uuid),
name
)
WHERE status != 'removed';
```
* COALESCE(parent_id, sentinel) collapses NULLs so root workspaces
also collide pairwise.
* `WHERE status != 'removed'` lets a tombstoned row be replaced
by a same-named re-import (preserves existing org-import semantics).
* CONCURRENTLY avoids ACCESS EXCLUSIVE on production tenants under
live traffic; IF NOT EXISTS makes the migration resumable.
* Down migration drops CONCURRENTLY symmetrically.
2. **`org_import.go` swap**
Replace lookup-then-insert with `INSERT ... ON CONFLICT DO NOTHING
RETURNING id`. On the skip path (RETURNING returns 0 rows →
sql.ErrNoRows), re-select the existing id to recurse children:
```sql
INSERT INTO workspaces (...) VALUES (...)
ON CONFLICT (COALESCE(parent_id, ...), name)
  WHERE status != 'removed'
DO NOTHING
RETURNING id;
```
The ON CONFLICT target predicate matches the partial-index predicate
exactly — required for Postgres to consider the index applicable.
Existing `lookupExistingChild` helper kept (still used on the skip
path); semantics unchanged.
## Test coverage
* AST gate refreshed to assert the workspaces INSERT contains the
ON CONFLICT pattern (`onConflictDoNothingRE`) instead of the now-obsolete
"lookup-before-insert" ordering. Per behavior-based gating
(memory: feedback_behavior_based_ast_gates.md), the new gate pins
the actual TOCTOU-resolution behavior.
* Companion `TestGate_FailsWhenInsertOmitsOnConflict` proves the gate
catches the bug shape on synthetic source.
* All existing `lookupExistingChild` unit tests (no-rows, found,
nil-parent, DB error, wrapped no-rows) still pass — helper is
unchanged and still load-bearing on the skip path.
* Live Postgres E2E coverage runs via the existing
"Handlers Postgres Integration" CI job, which applies migrations
to a real PG and exercises the INSERT path.
## Why ship the migration + swap together (not stacked)
The migration alone provides a DB-level backstop, but without the
handler swap a UNIQUE-violation surfaces as a 500 to the user. The
handler swap alone has no enforceable target until the migration
applies. Shipped together they give graceful skip + atomic backstop.
Migration is CONCURRENTLY + IF NOT EXISTS, safe to apply even on
tenants where the sweeper (#2860) hasn't run yet — the index just
declines to build until conflicting rows are reconciled.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Closes #149.
uploadPollMode for poll-mode chat uploads previously committed N
pending_uploads rows in one Tx (PutBatch), then wrote N activity_logs
rows individually outside any Tx. A per-row failure on activity row K
left rows 1..K-1 committed and pending_uploads orphaned until the 24h
TTL — not data-loss because the platform's fetcher handled the
half-state cleanly, but the user never saw file K in the canvas and
the inconsistency surfaced as an "uploaded but invisible" complaint
class.
Thread one Tx through PutBatchTx + N × LogActivityTx + Commit so all
or none commit. Broadcasts are deferred until after Commit — emitting
an ACTIVITY_LOGGED event for a row that ends up rolled back would
paint a ghost message into the canvas's optimistic UI. A new
LogActivityTx returns a commitHook the caller invokes post-Commit;
the existing fire-and-forget LogActivity is unchanged for the 4 other
production callers (a2a_proxy_helpers + activity.go report path).
Storage interface gains PutBatchTx; PostgresStorage.PutBatch is
refactored to share the validation + insert path. inMemStorage and
fakeSweepStorage delegate or no-op for PutBatchTx (the in-mem fake
can't model Tx state — DB-level atomicity is verified by the existing
real-Postgres integration test for PutBatch + the new unit test
asserting the Go handler calls Rollback on activity-insert failure).
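The commit-hook contract can be sketched as follows. The real LogActivityTx takes a *sql.Tx; here an exec closure stands in so the shape is runnable without a database, and all names beyond the commit-hook idea are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

type commitHook func()

type execFn func(query string, args ...any) error

// logActivityTx writes the row inside the caller's Tx and returns a
// deferred broadcast instead of firing one immediately, so a rolled-
// back row never paints a ghost message into the canvas.
func logActivityTx(exec execFn, summary string, broadcast func(string)) (commitHook, error) {
	if exec == nil {
		return nil, errors.New("logActivityTx: nil tx")
	}
	if err := exec(`INSERT INTO activity_logs (summary) VALUES ($1)`, summary); err != nil {
		return nil, err // insert failed: no hook, so no broadcast
	}
	return func() { broadcast(summary) }, nil
}

func main() {
	var sent []string
	broadcast := func(s string) { sent = append(sent, s) }
	exec := func(string, ...any) error { return nil } // stands in for tx.Exec

	var hooks []commitHook
	for _, row := range []string{"file-1", "file-2"} {
		h, err := logActivityTx(exec, row, broadcast)
		if err != nil {
			return // Rollback path: hooks discarded, nothing broadcast
		}
		hooks = append(hooks, h)
	}
	// tx.Commit() succeeded: only now fire the deferred broadcasts.
	for _, h := range hooks {
		h()
	}
	fmt.Println(sent)
}
```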
Tests:
- TestPollUpload_AtomicRollbackOnActivityInsertFailure pins the new
contract via sqlmock — second activity insert errors → Rollback
expected, Commit must NOT be called.
- TestLogActivityTx_DefersBroadcastUntilCommitHook +
_InsertError_NoHook_NoBroadcast + _NilTx_Errors cover the new API.
- TestPutBatchTx_HappyPath / _EmptyItems / _ValidationFails /
_PerRowErrorPropagates cover Tx-aware storage layer.
- 7 existing TestPollUpload_* tests updated to mock Begin + Commit
(or Begin + Rollback for failure paths) since the handler now
opens a Tx around PutBatch + activity inserts.
All workspace-server tests pass; integration tag also clean.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
github-code-quality bot flagged this as the last unresolved review thread
blocking the merge queue. The function is referenced in comments but
never called from this file (download is dispatched via the lightbox /
AttachmentChip path). Removing the import resolves the bot thread and
clears the staging branch-protection 'all conversations resolved' gate.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
User asked for VSCode-style drag-drop upload (#2999): "drag local to
upload to target folder just like vscode does". Today the only upload
path is the toolbar's Upload button (folder picker). Drag-drop lets
users grab files from Finder/Explorer and drop them directly on a
specific subdirectory in the tree.
1. New `uploadDataTransferItems(items, targetDir)` in `useFilesApi`
— walks the HTML5 DataTransferItemList via `webkitGetAsEntry()`,
recursing folders to a flat (relativePath, file) list, then PUTs
each via the existing /files/<path> endpoint. The walker (also
exported via `__testables`) calls `readEntries()` in a loop until
empty so multi-batch folders (browsers cap each call at ~100
entries) aren't silently truncated.
2. `uploadFiles` (folder-picker path) gained an optional `targetDir`
parameter. Same prefixing semantics so future surfaces (e.g. an
"upload here" toolbar button on a row) can reuse it.
3. `FileTree` directory rows gained `onDragOver` / `onDragEnter` /
`onDragLeave` / `onDrop` handlers + a hover-target highlight
(accent-tinted background + outline). dragLeave uses
`currentTarget.contains(relatedTarget)` to suppress the flicker
that fires when the cursor crosses any child of the row (icon,
label, ✕ button) — without this the highlight strobes on every
sub-element transition.
4. `FilesTab` wraps the tree column in an outer drop zone for
"drop on root" — drops outside any specific subdir row land at
root. The empty-state placeholder copy now includes a
"drag files here to upload" hint when the active root is
/configs (the only writable root today).
5. Both the row drop and the root drop are gated on
`root === "/configs"` (the same gate that already blocks the
toolbar's New / Upload / Clear). Other roots ignore the drag
entirely (no highlight, no drop), so the user doesn't get a
misleading drag affordance followed by a "switch root" toast.
`dragDropUpload.test.tsx` (9 tests, two layers):
Walker tests (pure function, no DOM):
- `walkEntry` collects a single dropped file with correct relpath.
- `walkEntry` walks a folder + preserves folder name in the path.
- **Multi-batch loop**: a fake reader that emits two batches of 2
+ an empty terminator must yield 4 files. A walker that called
readEntries once would see only 2 — this is the load-bearing
assertion against silent folder truncation.
- Nested directories: outer/inner/file.md → "outer/inner/file.md".
FileTree drag-drop wiring (DOM):
- `dragover` on a directory row preventDefault's (load-bearing —
without it the drop event never fires).
- `drop` on a directory row fires `onDropToTarget(path, items)`.
- `drop` on a FILE row does NOT fire (only directories are valid
drop targets).
- `drop` with no DataTransferItems does NOT fire (defensive guard
against text-only drags).
- `dragenter` adds the highlight class to the directory row.
1. The 1MB per-file size cap is inherited from the existing
`uploadFiles`. A user dropping a 5MB skill bundle silently
skips the file (the loop's `continue` on `file.size >
1_000_000`). Same behavior as the toolbar Upload, so consistent
if not great. Surfacing skipped-files would be a UX improvement
tracked separately — not load-bearing for this PR.
2. Drop-zone highlight on the column wrapper uses an outline that
sits inside the column's overflow-y-auto scroll container. If
the user drags onto a row that's mid-scroll, the highlight may
clip slightly at the scroll boundary. Cosmetic only; the drop
still works.
3. The `?root=` query is NOT passed on the underlying writeFile
call (matches the existing uploadFiles behavior). On a backend
without #2999 PR-A, this means uploads always land in /configs
regardless of selected root — but we already gated drop on
`root === "/configs"` so the practical effect is nil today.
Once PR-A merges and the canvas threads ?root= through writes
(separate follow-up), drops on /home etc. would be enableable
by lifting the canDelete-style gate.
- `npx tsc --noEmit` clean
- 177/177 canvas tab tests pass
- Manual on local dev: drag a file from Finder onto /configs/skills
row → file appears under /configs/skills/<name>. Drag a folder of
3 files onto root area → 3 files uploaded with folder structure
preserved. Drag onto /home tree → no highlight, no drop.
Refs #2999. Pairs with PR-A (backend EIC) — without PR-A the tree
is empty on SaaS and there's nothing to drop ONTO; PR-D still works
on self-hosted today.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Adds two new arms to the AttachmentPreview kind dispatcher:
* PDF — chip in the bubble, click opens the shared AttachmentLightbox
with a browser-native <embed type="application/pdf"> at 95vw/90vh.
Fetch+Blob+ObjectURL auth path matches AttachmentImage / Video. PDF.js
not pulled in; browser viewer is good enough for the desktop chat MVP
(Slack/Linear/Notion all gate full-page PDF behind a click for the
same reason). Falls back to AttachmentChip on fetch error.
* Text/code/JSON/YAML — first 10 lines in monospace <pre><code> right
in the bubble, "Show all N lines" expands to full content, with a
filename + ⬇ download header. Streams up to 256 KB then marks
truncated and offers a download chip; large logs don't crash the
bubble. No syntax highlighting in v1 — shiki adds 200-500 KB and is
pure polish.
Coverage: 5 new dispatch tests (PDF success → embed in lightbox,
PDF fetch fail → chip fallback, text inline render, text long content
→ Show-all-N-lines expand button, text fetch fail → chip fallback).
All 19 AttachmentPreview tests pass; tsc --noEmit clean.
Stacked on rfc-2991-pr-1-image-preview-lightbox (PR-2 already merged
into PR-1's branch). PR-1 ships first; this rebases onto staging
once it lands.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
## Why
User asked for a VSCode-style right-click menu on file rows (#2999):
"right click to have a menu to download". Today the only download
affordance is the toolbar's Export-all (bulk JSON dump), and the
inline ✕ button is the only delete UX (small click target, easy to
miss).
## Fix
1. New `FileTreeContextMenu` component — fixed-position popover with
Open / Download / Delete items composed per-row (files get all
three; directories get Delete only since "open a directory in the
editor" doesn't apply). Esc + outside-click + Tab + scroll
dismiss. ↓/↑ arrow keys rove focus between menu items. role=menu
+ role=menuitem + autofocus on first item for a11y.
2. Menu state lifted to the top-level `FileTree` (not per-row) so
opening a second row's menu auto-closes the first — only one
menu open at a time, matching VSCode/Theia. Pinned by the
`replaces the first` test.
3. New `downloadFileByPath(path)` in `useFilesApi` — fetches via the
existing GET /workspaces/<id>/files/<path>?root= endpoint and
triggers a browser download. Distinct from the existing
`handleDownloadFile` which downloads the in-editor buffer
(round-trips unsaved edits to disk); the context-menu download
targets arbitrary tree rows the user hasn't opened.
4. `canDelete` prop threaded from FilesTab → FileTree → menu →
item. Same gate as the toolbar (Clear/New/Upload all gated to
/configs); context menu's Delete renders as disabled with a
muted background on other roots, matching the "feature exists
but isn't applicable here" pattern.
## Test coverage
`FileTreeContextMenu.test.tsx` (8 tests):
- File row → menu opens with Open + Download + Delete.
- Directory row → menu opens with Delete only.
- Click Download → onDownload(path) fires + menu closes.
- Click Delete (canDelete=true) → onDelete(path) fires.
- Click Delete (canDelete=false) → onDelete NOT called + menu stays
open (disabled-state UX).
- Esc dismisses.
- Outside-click (mousedown on document.body) dismisses.
- Opening second context menu replaces the first (only-one-open
invariant).
Each test uses fireEvent + screen.getByRole, so they fail on a
deleted-code regression — none would pass on the pre-PR shape.
## Three weakest spots (hostile self-review)
1. The menu is positioned at `clientX/clientY` without viewport
clamping. If the user right-clicks at the very bottom-right of
the panel, part of the menu may overflow off-screen. VSCode
handles this by flipping the anchor; we don't yet. Acceptable
v1 because the FilesTab is fixed-width (≤ side-panel width)
and the menu is small (140×~80px); the overflow would be a few
pixels of one item. Filed as a follow-up.
2. Auto-focus on the first item shifts keyboard focus away from
the row that opened the menu. Closing with Esc returns focus
to the body, not the row. Same behavior as TerminalTab's
placeholder + the canvas's other context menus; consistent
isn't ideal but at least uniform. Documented inline.
3. The download request reuses the API client's 15s default
timeout — large config files (multi-MB skill bundles) on a
slow connection could time out. Same risk applies to the
existing toolbar Export. If we see real download failures we
can add a `timeoutMs` override at the call site without
touching the menu.
## Verification
- `npx tsc --noEmit` clean
- 176/176 canvas tab tests pass
- Manual on local dev: right-click a config.yaml row → menu opens
→ click Download → file lands in Downloads. Right-click on
/home root → Delete renders disabled.
Refs #2999. Pairs with PR-A (backend EIC) — without PR-A the tree
is empty and there's nothing to right-click on a SaaS workspace.
🤖 Generated with [Claude Code](https://claude.com/claude-code)