CRITICAL (#164):
POST /bundles/import — anon callers could create arbitrary workspaces
with user-supplied system prompts, plugins, and secrets envelopes.
Fixed by gating behind AdminAuth (bundleAdmin group).
HIGH (#165):
GET /bundles/export/:id — anon UUID probe leaked full system prompts,
agent_card, plugins, memory for any workspace.
GET /events + GET /events/:workspaceId — anon read of the append-only
event log leaked org topology, workspace names, card fragments.
Both moved into the same bundleAdmin / eventsAdmin groups.
MEDIUM (#166):
PUT /canvas/viewport — anon callers could reset shared viewport state.
Gated via a scoped viewportAdmin group; GET stays open so canvas
bootstraps without a bearer.
GET /admin/liveness — operational-intel leak (scheduler cadence
reveals work pattern). Inline AdminAuth on the single handler.
All 6 routes use the same lazy-bootstrap admin auth as the rest of the
platform: zero-token installs fail open; once any token exists, every
request must present a valid bearer.
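That lazy-bootstrap contract reduces to a small predicate. A minimal sketch, with hypothetical names (not the platform's actual API):

```go
package main

import "fmt"

// adminAllowed sketches the lazy-bootstrap contract: with zero live tokens
// the install fails open; once any token exists, every request must carry
// a valid bearer. Names here are illustrative only.
func adminAllowed(liveTokens int, bearer string, isValid func(string) bool) bool {
	if liveTokens == 0 {
		return true // fail-open bootstrap: fresh install, nothing enrolled yet
	}
	return bearer != "" && isValid(bearer)
}

func main() {
	valid := func(t string) bool { return t == "s3cret" }
	fmt.Println(adminAllowed(0, "", valid))       // true: zero-token install
	fmt.Println(adminAllowed(1, "", valid))       // false: token exists, no bearer
	fmt.Println(adminAllowed(1, "s3cret", valid)) // true: valid bearer
}
```

The six routes above all sit behind some grouping of this same check.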
Known follow-up: canvas uses session cookies, not bearer tokens (same
pattern as #138). In multi-tenant production these canvas features —
Events tab, Export/Duplicate, viewport persist — will return 401 once
a workspace is token-enrolled. Needs a cookie-accepting AdminAuth as a
follow-up (tracked as option B in the #138 triage discussion); a new issue
will be filed for that scope. The security gain from closing the #164
CRITICAL outweighs the canvas UX regression for tonight.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes #138. #125 moved PATCH /workspaces/:id into the wsAdmin AdminAuth
group to close the #120 unauth vulnerability, but broke canvas
drag-reposition and inline rename because canvas uses session cookies,
not bearer tokens. Multi-tenant deployments with any live token would
have seen every canvas PATCH return 401.
Option A per #138 triage: PATCH goes back on the open router, but
WorkspaceHandler.Update now enforces field-level authz:
Cosmetic (no bearer required):
name, role, x, y, canvas
Sensitive (bearer required when any live token exists):
tier — resource escalation
parent_id — A2A hierarchy manipulation
runtime — container image swap
workspace_dir — host bind-mount redirection
Fail-open bootstrap: HasAnyLiveTokenGlobal = 0 → pass-through
(fresh install, pre-Phase-30 upgrade path). Matches the same
lazy-bootstrap contract WorkspaceAuth and AdminAuth use elsewhere.
3 new tests cover all three branches of the matrix (cosmetic
no-bearer, sensitive no-bearer-rejected, sensitive fail-open).
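The field matrix above can be sketched as a pure predicate. Names and shape are hypothetical (the real check lives inside WorkspaceHandler.Update); the field names come from the commit:

```go
package main

import "fmt"

// sensitiveFields require a valid bearer once any live token exists;
// everything else (name, role, x, y, canvas) is cosmetic.
var sensitiveFields = map[string]bool{
	"tier":          true, // resource escalation
	"parent_id":     true, // A2A hierarchy manipulation
	"runtime":       true, // container image swap
	"workspace_dir": true, // host bind-mount redirection
}

// patchAllowed reports whether a PATCH touching the given fields may proceed.
func patchAllowed(fields []string, hasLiveToken, bearerValid bool) bool {
	if !hasLiveToken {
		return true // fail-open bootstrap: no tokens enrolled anywhere
	}
	for _, f := range fields {
		if sensitiveFields[f] && !bearerValid {
			return false // sensitive field without a valid bearer
		}
	}
	return true
}

func main() {
	fmt.Println(patchAllowed([]string{"name", "x", "y"}, true, false)) // true: cosmetic, no bearer
	fmt.Println(patchAllowed([]string{"tier"}, true, false))           // false: sensitive, rejected
	fmt.Println(patchAllowed([]string{"tier"}, false, false))          // true: fail-open bootstrap
}
```

The three prints mirror the three tested branches of the matrix.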
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
#125 added a SELECT EXISTS guard before WorkspaceHandler.Update applies
any UPDATE so nonexistent workspace IDs return 404 instead of silent
zero-row successes. The 4 existing WorkspaceUpdate_* sqlmock tests
didn't mock the probe, so they broke on main. This was not caught
because CI is blocked by the Actions billing cap.
Adds ExpectQuery for the EXISTS probe to:
- TestWorkspaceUpdate_ParentID
- TestWorkspaceUpdate_NameOnly
- TestWorkspaceUpdate_MultipleFields
- TestWorkspaceUpdate_RuntimeField
TestWorkspaceUpdate_BadJSON doesn't need the fix — it aborts on
c.ShouldBindJSON before reaching the guard.
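For context, the guard ordering those tests now have to mock looks roughly like this. A sketch under stated assumptions: exists stands in for the SELECT EXISTS probe and apply for the UPDATE; the real handler works against *sql.DB and writes a Gin 404.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the handler's 404 response.
var errNotFound = errors.New("workspace not found")

// updateWorkspace sketches the #125 ordering: probe existence first, and
// only then apply the UPDATE, so a bad ID is a 404 rather than a silent
// zero-row success.
func updateWorkspace(id string, exists func(string) bool, apply func(string) error) error {
	if !exists(id) {
		return errNotFound // probe failed: never reach the UPDATE
	}
	return apply(id)
}

func main() {
	known := map[string]bool{"ws-1": true}
	exists := func(id string) bool { return known[id] }
	apply := func(id string) error { return nil }

	fmt.Println(updateWorkspace("ws-1", exists, apply))    // <nil>
	fmt.Println(updateWorkspace("missing", exists, apply)) // workspace not found
}
```

Because the probe runs before the UPDATE, each sqlmock test must queue an ExpectQuery for it ahead of the existing ExpectExec, which is exactly the fix applied to the four tests above.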
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Supersedes #158 (10-min uniform bump). That PR was too blunt — it treated
research/audit/orchestration crons the same when they have fundamentally
different cost/value/cadence profiles.
## The split
Three layers, three cadences, grounded in the survey of Hermes/Letta/
Trigger.dev/Inngest/AG2/Rivet/n8n/Composio/SWE-agent done this session.
Nobody in that survey runs a `while(true)` loop per agent — they all combine
event-driven reactivity with short orchestration pulses on a coordinator.
This PR implements that split for our 12-workspace template.
| Layer | Roles | Cadence | Purpose |
|---|---|---|---|
| Orchestration | PM, Dev Lead, Research Lead | every 5 min | Check backlog, dispatch work, review completed tasks |
| Audit | Security Auditor | every 10 min | Focused security audit |
| Audit | UI/UX Designer | every 15 min | Vision-heavy, dial back from 10 |
| Deep-work | Research Lead (eco-watch) | every 30 min (:08, :38) | Was hourly |
| Deep-work | Dev Lead (template fitness) | every 30 min (:15, :45) | Was hourly |
| Deep-work | Technical Researcher (plugins) | hourly (unchanged) | Research-heavy, slow |
| Deep-work | DevOps (channels) | hourly (unchanged) | Research-heavy, slow |
| Reactive | BE, FE, DevOps, Docs | no cron | Execute A2A delegations |
## Orchestration pulse prompts
The three new schedules each carry a detailed orchestration_prompt:
- **PM** (5-min): scan all 12 workspaces, scan GH PRs/issues backlog
(external), scan memory backlog (internal), dispatch up to 3 tasks per
pulse, review completed work, write pulse summary to memory. Hard
rules: under 90s wall-clock, never dispatch to busy agents, write
"orchestrator-clean" and stop if genuinely nothing to do.
- **Dev Lead** (5-min, offset +1 from PM): same shape, scoped to
engineering team. Reviews open PRs from direct reports, matches idle
engineers to labeled GH issues (security/bug/feature), dispatches with
"fix/issue-N-slug" branch convention. Skips pulse if own template
fitness audit is in flight (:15, :45).
- **Research Lead** (5-min, offset +2 from PM): same shape, scoped to
research team. Matches Market Analyst / Technical Researcher /
Competitive Intelligence to research-labeled issues or memory-stashed
questions. Max 2 A2A per pulse (research is slow). Skips pulse if own
eco-watch is in flight (:08, :38).
## Cadence offset table
Minute offsets keep the three orchestrators out of each other's slots:
:01,:11,:21,:31,:41,:51 — Security audit (Security Auditor)
:02,:07,:12,:17,:22,:27,:32,:37,:42,:47,:52,:57 — Dev Lead orchestrator
:04,:09,:14,:19,:24,:29,:34,:39,:44,:49,:54,:59 — Research Lead orchestrator
:01,:06,:11,:16,:21,:26,:31,:36,:41,:46,:51,:56 — PM orchestrator
:05,:20,:35,:50 — UI/UX audit (UIUX Designer)
:08,:38 — Ecosystem watch deep-work (Research Lead)
:15,:45 — Template fitness deep-work (Dev Lead)
:22 — Plugin curation (Technical Researcher)
:47 — Channel expansion (DevOps Engineer)
Note that PM and the Security Auditor share the :x1 slots, and the Dev Lead
orchestrator overlaps the two hourly crons at :22 and :47 — this is fine
because each pair targets different workspaces, so scheduler concurrency
handles the overlap.
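The table reduces to a base-minute-plus-period rule per cron. A hypothetical sketch of that offset logic (the real scheduler presumably parses cron specs; names here are illustrative):

```go
package main

import "fmt"

// cadence models one schedule as a minute offset within the hour plus a
// period in minutes, e.g. the Dev Lead orchestrator is offset 2, period 5,
// giving :02,:07,...,:57.
type cadence struct {
	offset, period int
}

// firesAt reports whether the schedule fires at the given minute of the hour.
func (c cadence) firesAt(minute int) bool {
	return minute >= c.offset && (minute-c.offset)%c.period == 0
}

func main() {
	devLead := cadence{offset: 2, period: 5}    // Dev Lead orchestrator
	pm := cadence{offset: 1, period: 5}         // PM orchestrator
	security := cadence{offset: 1, period: 10}  // Security audit

	fmt.Println(devLead.firesAt(7), devLead.firesAt(8)) // true false
	// PM and Security intentionally overlap in the :x1 slots:
	fmt.Println(pm.firesAt(11) && security.firesAt(11)) // true
}
```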
## Cost estimate
- PM pulse: 12/hour × 24 × ~3k tokens = 864k tokens/day/org ~ $5/day
- Dev Lead pulse: same ~ $5/day
- Research Lead pulse: same ~ $5/day
- Audits (security 10min, UIUX 15min): ~$8/day/org combined
- Deep-work crons (unchanged from original): ~$4/day/org
**Total ~$27/day/org**. Comparable to #158's $25 but MUCH higher
utility because orchestration produces dispatches that keep workers
busy, whereas #158 just fired more audits against the same team.
Closes #158 (superseded — will close that PR with a pointer to this one).
## Related research
See docs/ecosystem-watch.md `### Hermes Agent` and today's research agent
output: event-driven + reflection-on-completion + short orchestration
pulses on leaders is the shape that delivers 24/7 activity without
runaway cost. This is the concrete implementation.
Two new entries added from the second daily pass (first run merged as PR #150
at 03:20 UTC). Both surfaced in the afternoon trending windows and were not
covered by the morning run.
- microsoft/agent-framework (~9.5k ⭐): official Microsoft successor to
AutoGen; ships a migration guide and an April 2026 .NET release. Directly affects
our autogen adapter in workspace-template/adapters/. Filed issue #156 to
evaluate adapter update.
- vercel-labs/open-agents (~2.2k ⭐, +1,020 today): cloud coding agent template
from Vercel Labs (same team as Skills CLI). Notable for agent-outside-sandbox
architecture and snapshot-based VM resumption — a more efficient approach
than our current Docker restart + git-clone pattern.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Closes #151. The middleware was already implemented and tested (3 passing
tests in securityheaders_test.go covering the base set, multi-route
application, and the don't-override-existing contract) but never registered
in router.go.
One-line wire-up: it runs after TenantGuard, so rejected requests still
get the same headers as accepted ones, and before the routes, so handlers
can still opt out by setting their own header before c.Next() returns.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The #95 scheduler heartbeat scheme relied on:
1. Top of tick() (once per poll interval)
2. Per-fire goroutine entry + exit
That leaves a gap: tick() ends with wg.Wait(), so if a single fire takes
longer than pollInterval (UIUX audits routinely take 60-120s; max fireTimeout
is 5min), the next tick doesn't run and no top-of-tick heartbeat fires.
Per-fire heartbeats only bracket the fire — between entry and the HTTP
response returning, nothing heartbeats either.
Observed today: /admin/liveness reports seconds_ago=251 while docker logs
show the scheduler actively firing 'Hourly ecosystem watch'. Scheduler is
fine; liveness is lying.
Adds an independent 10s heartbeat pulse goroutine inside Start(), decoupled
from tick completion. The existing heartbeats at tick top + per-fire are
kept as redundant signals but this pulse is the one that guarantees liveness
freshness regardless of what tick is doing.
Ships the exact fix proposed in the #140 issue body.
Closes #140.
Closes #133. Both roles previously inherited defaults only (ecc,
molecule-dev, superpowers, careful-bash, prompt-watchdog, audit-trail,
session-context, cron-learnings, update-docs) — no review skill.
Dev Lead enforces PR quality gates per triage SKILL.md; QA Engineer
reviews test coverage against acceptance criteria. Both need the
16-criteria code-review rubric and llm-judge to operate deterministically.
Mirrors Security Auditor's existing `[molecule-skill-code-review,
molecule-skill-cross-vendor-review, molecule-skill-llm-judge]` override.
Dropped cross-vendor from these two since it's a noteworthy-PR tool —
the workflow-triage entry in defaults already gates that for the ticks
that need it.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Security fix merging despite CI outage (issue #136 — runner failing since 07:22, all jobs fail in 1-2s with no log output, infrastructure issue confirmed across 28 consecutive runs).
Issue #120 confirmed live by Security Auditor (cycle 3):
curl -X PATCH .../workspaces/00000000-... -d '{"name":"probe"}' → 200 (no token)
Code reviewed and approved by Security Auditor. Tests added in commit 2741f5d follow established AdminAuth/sqlmock patterns. CI outage is unrelated to these changes.