The original fix stripped \n/\r but left the rest in place, then relied
on a substring-based test which was over-strict (the escaped fragment
still contained the banned substring as bytes).
Better approach: emit the name as a double-quoted YAML scalar with all
escape sequences (\\, \", \n, \r, \t) handled inline. This is the
canonical YAML-safe way to embed user input — no injection possible
because every control character is either escaped or rejected by the
YAML parser inside the scalar context.
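The escaping rule can be sketched as follows. The actual fix is in Go; this Python helper, `yaml_quote`, is a hypothetical illustration of the same five escapes, in the order that keeps them from interfering with each other:

```python
def yaml_quote(name: str) -> str:
    """Emit a double-quoted YAML scalar with the five escapes handled inline."""
    escaped = (
        name.replace("\\", "\\\\")   # backslash first, so later escapes aren't doubled
            .replace('"', '\\"')
            .replace("\n", "\\n")
            .replace("\r", "\\r")
            .replace("\t", "\\t")
    )
    return f'"{escaped}"'

print(yaml_quote("x\nmodel: evil"))  # "x\nmodel: evil" -- one scalar, not a new mapping key
```

Because the newline is emitted as the two characters `\n` inside a double-quoted scalar, the parser never sees a line break, so no new mapping entry can start.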
Test rewritten to parse the output as YAML and verify:
1. parsed["name"] equals the literal attacker input (payload preserved)
2. no banned top-level keys leaked to the parsed map
3. legitimate default keys (description/version/tier/model) still present
Updated the two existing tests that asserted the unquoted name format.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Addresses items 4, 5, 7 from the self-review of the batch merge. PR A
(#228) covered items 1, 2, 3, 6 on the Go side.
## workspace-template/main.py — idle loop hardening
- Replace asyncio.get_event_loop() with asyncio.get_running_loop() —
the former is deprecated in 3.12+ and emits a DeprecationWarning on
every idle fire.
- Replace hardcoded urlopen timeout=600 with IDLE_FIRE_TIMEOUT_SECONDS
  clamped to max(60, min(300, idle_interval_seconds)). Long-cadence
  workspaces no longer hold dangling requests open for 10 minutes; the
  cap adapts automatically when the interval is short.
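The clamp is a one-liner; a sketch using the names from the bullet above:

```python
def idle_fire_timeout(idle_interval_seconds: int) -> int:
    """Clamp the urlopen timeout to [60, 300] based on the configured cadence.
    Mirrors the IDLE_FIRE_TIMEOUT_SECONDS formula described in the PR."""
    return max(60, min(300, idle_interval_seconds))

print(idle_fire_timeout(600))   # 300 -- long cadence: capped, not 600
print(idle_fire_timeout(30))    # 60  -- short cadence: floored
print(idle_fire_timeout(120))   # 120 -- in range: used as-is
```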
- Type the exception handling: split HTTPError (has .code) from URLError
(connection-level) from the generic catch-all. Log status + error
class separately so operators can grep for specific failure modes
instead of a bare "post failed".
- Fire-and-forget no longer loses exceptions. run_in_executor Future
now has an add_done_callback that logs the outcome, so a panic in
_post_sync surfaces as "Idle loop: post failed — status=None err=..."
instead of Python's default "Task exception was never retrieved"
warning buried in stderr.
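A minimal self-contained sketch of the pattern described in the bullets above. `_post_sync` stands in for the real urlopen POST and raises to exercise the callback path; the `failures` list exists only so the demo is inspectable, where production just logs:

```python
import asyncio
import logging
from urllib.error import URLError

log = logging.getLogger("idle-loop")
failures: list[str] = []  # captured for inspection; production only logs

def _post_sync():
    # Stand-in for the real urlopen POST; raises to show the callback path.
    raise URLError("connection refused")

def _log_outcome(fut):
    """Done-callback: surface the exception instead of losing it."""
    err = fut.exception()
    if err is None:
        return
    status = getattr(err, "code", None)  # HTTPError has .code; URLError does not
    msg = f"Idle loop: post failed — status={status} err={type(err).__name__}: {err}"
    failures.append(msg)
    log.error(msg)

async def fire():
    loop = asyncio.get_running_loop()    # get_event_loop() is deprecated in 3.12+
    fut = loop.run_in_executor(None, _post_sync)
    fut.add_done_callback(_log_outcome)
    try:
        await fut                        # production fires and forgets; awaited here
    except Exception:                    # only so the demo finishes deterministically
        pass

asyncio.run(fire())
print(failures[0])
```

The done-callback runs before the awaiting coroutine resumes, so the failure line is recorded even though the caller never inspects the Future.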
## org-templates/molecule-dev/org.yaml — discoverability
Added idle_prompt + idle_interval_seconds to the defaults: block with
explanatory comments. Without this, users had to read main.py to
discover the feature.
## docs/runbooks/admin-auth.md — new
Documents the three middleware variants (AdminAuth strict,
CanvasOrBearer soft, WorkspaceAuth per-id), the exact contract of each,
and the three-question test for adding a new route to CanvasOrBearer.
Also flags the session-cookie follow-up as Phase H.
Referenced PRs: #138, #164, #165, #166, #167, #168, #190, #194, #203,
#228.
No code deltas in platform/ beyond the Python + YAML + docs changes.
Full pytest suite unchanged except the pre-existing test_hermes_smoke
flake that fails in full-suite but passes in isolation (test isolation
bug, not introduced by this PR).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Context: when the claude-agent-sdk wraps a stream error from the CLI
subprocess that it can't categorize (rate limit, auth, network), it
raises a bare `Exception("Command failed with exit code 1\nError output:
Check stderr output for details")`. The exception has no `.stderr` or
`.exit_code` attributes, so #66's `_format_process_error` — which reads
those attributes — has nothing to surface. The log line becomes:
SDK agent error [claude-code]: Exception: Command failed with exit
code 1 (exit code: 1)\nError output: Check stderr output for details
That's the placeholder text from the SDK's error path, not the actual
error. Operators chasing a stuck workspace are forced to `docker exec
ws-xxx claude --print` manually to discover the real cause. Observed
today during the rate-limit incident: every PM error line was identical
"Check stderr output for details" while the real cause ("You've hit
your limit · resets Apr 17, 11pm (UTC)") was only visible via manual
reproduction — that cost ~20 minutes of diagnosis time.
## Fix
Add `_probe_claude_cli_error()`: a best-effort subprocess call that runs
`claude --print` with a small probe input, captures stderr+stdout, and
returns the real error string. Bounded by 30s timeout so a hung CLI
can't stall the error path.
Extend `_format_process_error` with ONE narrow fallback: if the
exception has no stderr/exit_code AND its message contains the specific
"Check stderr output for details" marker, call the probe and append
`probed_cli_error=<real error>` to the formatted line.
Critically: the probe only runs in the narrow case where we have
nothing else to log. If `.stderr` or `.exit_code` are present (the
normal ProcessError path from #66), the probe is skipped — no wasted
subprocess, no 30s latency on every error.
## Test coverage
`workspace-template/tests/test_claude_sdk_executor.py` adds 3 new tests:
- `test_format_process_error_probes_cli_when_stderr_swallowed` — the
happy path: exception matches the marker, probe runs, result appears
in the formatted line. Probe is monkeypatched so no subprocess spawns
in the test.
- `test_format_process_error_does_not_probe_when_stderr_already_present` —
negative: regular ProcessError with `.stderr` set does NOT trigger
the probe (skip the wasted call).
- `test_format_process_error_does_not_probe_without_swallowed_marker` —
negative: unrelated plain exceptions (e.g. RuntimeError) do NOT
trigger the probe (so the common-case error path stays fast).
All 7 `_format_process_error` tests pass locally (4 existing + 3 new):
```
pytest tests/test_claude_sdk_executor.py -k format_process_error
======================= 7 passed in 0.06s ========================
```
## Impact
Next time the SDK swallows a real error (rate limit, auth failure,
network outage), the workspace log will contain the actual error string
alongside the generic placeholder:
SDK agent error [claude-code]: Exception: Command failed with exit
code 1 ... | probed_cli_error="You've hit your limit · resets Apr
17, 11pm (UTC)"
Diagnosis time drops from "docker exec each ws, run claude --print,
read stderr" (~20 min) to "grep probed_cli_error in platform logs"
(~10 seconds).
Closes #160.
Addresses self-review of the 10-PR batch merged earlier this session.
Splits the follow-ups into this Go-side PR and a later Python/docs PR.
## Fixes
1. wsauth_middleware.go CanvasOrBearer — invalid bearer now hard-rejects
with 401 instead of falling through to the Origin check. Previous code
let an attacker with an expired token + matching Origin bypass auth.
Empty bearer still falls through to the Origin path (the intended
canvas path).
2. scheduler.go short() helper — extracts safe UUID prefix truncation.
Pre-existing unsafe [:12] and [:8] slices would panic on workspace IDs
shorter than the bound. #115's new skip path had the bounds check;
the happy-path log lines did not. One helper, three call sites.
3. activity.go security-event log on source_id spoof — #209 added the
403 but the attempt was invisible to any auditor cron. Stable
greppable log line with authed_workspace, body_source_id, client IP.
## New tests
- TestShort_helper — bounds-safety regression guard for the helper
- TestRecordSkipped_writesSkippedStatus — #115 coverage gap, exercises
UPDATE + INSERT via sqlmock
- TestRecordSkipped_shortWorkspaceIDNoPanic — short-ID crash regression
- TestActivityHandler_Report_SourceIDSpoofRejected — #209 403 path
- TestActivityHandler_Report_MatchingSourceIDAccepted — non-spoof path
- TestHistory_IncludesErrorDetail — #152 problem B coverage
go test -race ./... green locally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The 13K-line plugins_install_pipeline.go had zero unit tests, making it
the highest-regression-risk file in the platform handlers package.
New test file covers all testable pure-function and integration paths
that do not require a live Docker daemon:
validatePluginName (8 cases)
- valid names, empty, forward slash, backslash, "..", embedded "..";
path-traversal variants ("../etc", "../../secrets")
dirSize (6 cases)
- empty dir, single file, multiple files, nested subdirectory,
exceeds limit (verifies error mentions "cap"), exactly at limit
httpErr / newHTTPErr (3 cases)
- Error() contains status code, all relevant HTTP codes preserved,
errors.As unwraps through fmt.Errorf %w chains
regexpEscapeForAwk (6 cases)
- alphanumeric names unchanged, slash escaped, dot escaped, + escaped,
full "# Plugin: name /" marker (space not escaped), backslash escaped
streamDirAsTar (4 cases)
- empty dir yields zero entries, single file round-trips content,
nested directory preserves relative path, entries have no absolute
or tempdir-leaking paths
resolveAndStage via stubResolver (10 cases)
- empty source → 400, unknown scheme → 400, happy path (result fields),
staged dir cleaned on fetch error, ErrPluginNotFound → 404,
DeadlineExceeded → 504, generic error → 502, resolver returns invalid
name → 400, local:// path traversal → 400 (pre-Fetch validation)
stubResolver implements plugins.SourceResolver as an in-process test
double — no network, no filesystem side-effects beyond the staging tempdir
that resolveAndStage creates and cleans up.
Closes #217
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The register call was missing headers=auth_headers(), so workspaces that
already have a persisted token (i.e. every restart after the first boot)
were sending an unauthenticated request. The platform's register handler
returns 401 for requests missing a valid bearer token once a token has
been issued, causing re-registration to fail on every restart.
Import auth_headers at the module level (alongside the existing save_token
inline import) and pass it to the httpx POST. auth_headers() returns {}
when no token is on file yet (first boot), so there is no regression for
fresh workspaces — the platform still issues a token on the 200 response
and save_token() persists it for all subsequent restarts.
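The shape of the fix, sketched with stdlib urllib so the fragment is self-contained (the actual code uses httpx; `TOKEN_FILE`, the endpoint path, and `register`'s body are illustrative):

```python
import json
import urllib.request
from pathlib import Path

TOKEN_FILE = Path("/tmp/ws_token_demo")  # hypothetical location for this sketch

def auth_headers() -> dict:
    """Return a bearer header when a token is persisted, or {} on first boot."""
    if TOKEN_FILE.exists():
        return {"Authorization": f"Bearer {TOKEN_FILE.read_text().strip()}"}
    return {}

def register(platform_url: str, payload: dict):
    req = urllib.request.Request(
        f"{platform_url}/register",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", **auth_headers()},  # the missing piece
        method="POST",
    )
    return urllib.request.urlopen(req, timeout=30)
```

Because `auth_headers()` degrades to `{}` with no token on file, the same call works for both the first boot and every restart after it.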
Closes #215
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
A crafted workspace name containing a newline (e.g. "x\nmodel: evil")
could inject arbitrary YAML keys into the auto-generated config.yaml.
Strip \n and \r from the name before interpolation. YAML key injection
requires a newline to start a new mapping entry; other characters such
as `:` are safe in unquoted scalar values.
Adds TestGenerateDefaultConfig_YAMLInjection with three adversarial
inputs: bare \n injection, CRLF injection, and multi-key injection.
Closes #221
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Every agent in the reno-stars org (marketing, sales, dev, coordinator)
plausibly needs browser access at some point — social posts, GBP edits,
directory submissions, InvoiceSimple publish. Without the plugin on
first import, agents fall back to launching their own Chromium inside
the container, which doesn't have the operator's authenticated Chrome
profile (no logged-in sessions, no saved cookies).
Per-agent opt-out via `!browser-automation` is already supported
(PR #71 UNION merge semantics) if any specific role shouldn't have it.
Closes #213
PR #205 shipped the workspace idle-loop mechanism (reflection-on-completion
pattern from the Hermes/Letta research survey) but deliberately added NO
default idle_prompt in org.yaml so rollout could be measured one workspace
at a time before going team-wide.
This is that first opt-in: Technical Researcher gets a backlog-pull + reflect
idle prompt on a 10-minute cadence.
## Why TR first
- Research-heavy role with a naturally bursty load — lots of idle time
between the once-per-hour plugin curation cron fires
- Non-user-facing (no canvas UI impact, no UX risk)
- Already has a clear backlog shape: the plugin curation cron produces
findings that could feed follow-up studies
- Vision-free (no Playwright) so cost per idle tick is pure text
## What the idle_prompt does
Three-step reflection, under 60s wall-clock, max 1 A2A send per tick:
1. **Backlog pull** — search_memory "research-backlog:technical-researcher"
for any stashed research questions (from prior cron fires or Research
Lead delegations). If found → delegate_task to Research Lead with a
concrete deliverable spec, then commit_memory to remove the item from
the backlog.
2. **Reflection fallback** — if backlog is empty, look at the last memory
entry from the Hourly plugin curation cron. Does it surface a follow-up
study worth doing? If yes → file a GH issue labeled `research` and
commit_memory to put the question on the backlog for next tick.
3. **Idle-clean outcome** — if neither backlog nor reflection produced
anything, write "tr-idle HH:MM — clean" to memory and stop. No busy work.
Hard rules enforce: max 1 A2A per tick, skip step 1 if Research Lead busy,
under 60s wall-clock, never re-run a cron's own prompt from inside the idle
loop.
## Rollout plan
- **This PR**: enables TR only via the `idle_prompt` + `idle_interval_seconds`
fields added to its workspace entry in org.yaml.
- **Next 24h**: measure activity_logs delta on TR vs baseline, count
idle-fired delegations vs idle-clean outcomes, confirm Research Lead
isn't being flooded.
- **If green** (delegations land useful work, no flood): roll to Market
Analyst + Competitive Intelligence in a follow-up PR.
- **If noisy** (too many idle fires producing nothing): tune idle_interval
up to 1200-1800s.
## Apply locally per feedback rule
Per `feedback_apply_template_locally_too.md`: not waiting for merge. After
pushing this PR I'll edit TR's live /configs/config.yaml to add the same
idle_prompt + idle_interval_seconds fields, then restart ws-57e13b54-119
(Technical Researcher) so the new workspace-template binary picks up the
idle loop immediately. Measurement clock starts from that restart.
## Related
- #205 (mechanism) — just merged in this cycle (7f11328)
- #208 Hermes Phase 1 — also just merged (be53a33)
- docs/ecosystem-watch.md → `### Hermes Agent` — reflection-on-completion
pattern reference
Closes #211, HIGH ops/security. RunMigrations globbed `*.sql`, which
matches both `.up.sql` AND `.down.sql`. Alphabetical sort puts "d"
before "u", so every platform boot ran the rollback BEFORE the forward
migration for any pair starting with migration 018.
Net effect: every restart wiped workspace_auth_tokens (the 020 pair),
which in turn regressed AdminAuth to its fail-open bootstrap bypass for
every route protected by it — the live server was effectively
unauthenticated from restart until the next workspace re-registered.
Also wiped 018_secrets_encryption_version and 019_workspace_access
pairs silently.
Fix is a 3-line filter: skip files whose base name ends in `.down.sql`.
Down migrations remain on disk for operator-driven rollback via psql,
but are never picked up by the auto-run loop.
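The filter logic, sketched in Python for illustration (the real runner is Go; `migration_files` is a hypothetical name):

```python
from pathlib import Path

def migration_files(migrations_dir: Path) -> list[str]:
    """Glob *.sql in sorted order, skipping .down.sql rollback files."""
    return sorted(
        p.name for p in migrations_dir.glob("*.sql")
        if not p.name.endswith(".down.sql")
    )

# The bug: plain alphabetical sort runs the rollback first ("d" < "u").
print(sorted(["020_tokens.up.sql", "020_tokens.down.sql"]))
# ['020_tokens.down.sql', '020_tokens.up.sql']
```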
Added unit test against a tmp dir to lock the filter behaviour so this
can never regress: stages a mix of legacy plain .sql, matched up/down
pairs, asserts only forward files survive.
Follow-up (not in this PR): the runner still re-applies every migration
on every boot. Migrations must be idempotent. A proper schema_migrations
tracking table is tracked as a future cleanup.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>