Closes #246, closes #247
Critical security findings and CI build-break alerts are now pushed via Telegram instead of waiting for someone to manually check memory/logs.
Backend Engineer and Frontend Engineer were missing molecule-skill-llm-judge
while Dev Lead, QA Engineer, and Security Auditor already had it.
llm-judge lets engineers self-gate their PR against the issue body before
requesting review, catching 'shipped the wrong thing' before Dev Lead sees it.
No new plugins needed — already installed org-wide.
Closes #310
Closes #287
Any container on molecule-monorepo-net could previously read the full Claude session log without authentication. Guard uses get_token() from platform_auth — skipped only before workspace registration (dev-mode).
Closes #306. The cron expression was "5,20,35,50 * * * *" (every 15
min = 96 ticks/day) despite the schedule being named "Hourly UI/UX
audit". Each tick launches Chromium, takes 8 screenshots, runs them
through Claude vision, and delegates to PM — 768 vision calls/day
from one workspace with no meaningful delta between ticks (canvas UI
only changes on deploys).
Changed to "5 * * * *" (hourly, at :05 past the hour). 4x reduction
in cost + noise.
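The before/after arithmetic can be double-checked with a quick sketch (tick counts come straight from the cron minute fields; the 8-screenshots-per-tick figure is from the audit description above):

```python
# Ticks/day for each schedule: (entries in the cron minute field) x 24 hours.
old_ticks = len([5, 20, 35, 50]) * 24   # "5,20,35,50 * * * *" -> 96 ticks/day
new_ticks = len([5]) * 24               # "5 * * * *"          -> 24 ticks/day

SCREENSHOTS_PER_TICK = 8  # each tick runs 8 screenshots through Claude vision

old_calls = old_ticks * SCREENSHOTS_PER_TICK  # 768 vision calls/day
new_calls = new_ticks * SCREENSHOTS_PER_TICK  # 192 vision calls/day

print(old_ticks, new_ticks, old_calls, new_calls, old_ticks // new_ticks)
```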
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes #303. Surfaces CVE/secret scanning at dev time instead of
waiting for the Security Auditor's 12h cron. Backend Engineer's
plugin list: [molecule-hitl, molecule-skill-code-review,
molecule-security-scan].
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds auth_headers to recall_memory and commit_memory in a2a_tools.py. Fixes the #215-class auth regression for A2A memory tools. Test mocks updated to accept headers kwarg.
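A minimal sketch of the pattern and why the mocks had to change (names like `get_auth_headers`, `commit_memory`, and `FakeClient` are illustrative, not the real a2a_tools.py API):

```python
import asyncio

def get_auth_headers():
    # Stand-in for the platform auth helper; real code reads the workspace token.
    return {"Authorization": "Bearer example-token"}

async def commit_memory(client, payload):
    # The regression class: this call previously omitted headers=, so the
    # platform returned 401 once memory routes were gated behind workspace auth.
    return await client.post("/memories", json=payload, headers=get_auth_headers())

class FakeClient:
    # Test mocks must accept the headers kwarg, or the call raises TypeError.
    async def post(self, url, json=None, headers=None):
        return {"url": url, "authed": headers is not None}

result = asyncio.run(commit_memory(FakeClient(), {"note": "x"}))
print(result["authed"])  # True
```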
One-liner oversight from #295: the macOS install path wrote the plist
with the default umask (~0644), leaving CDP_PROXY_TOKEN world-readable
to any local user account. The Linux path already writes to a chmod
600 env-file — this brings macOS to parity.
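The guarantee being restored can be sketched in Python (the real install path is shell/plist; the key point is setting the mode at create time so the file is never world-readable, even transiently):

```python
import os, stat, tempfile

os.umask(0o022)  # typical default umask; 0o600 & ~umask is still 0o600

path = os.path.join(tempfile.mkdtemp(), "cdp-proxy-token")
# os.open honors the mode argument at creation, unlike open() followed by
# chmod, which leaves a window where the default-umask mode applies.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("example-token\n")

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # owner read/write only; no group/other bits
```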
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
HIGH finding from security-auditor on PR #291 (merged tick-37). The
cdp-proxy bound to 0.0.0.0:9223 with no authentication, exposing
Chrome DevTools Protocol — full remote control of any tab, including
cookie/localStorage exfiltration — to anyone on the same WiFi/LAN.
Root cause: Docker Desktop on macOS routes host.docker.internal
through the VM network interface, not loopback. Binding to 127.0.0.1
would break the primary use case (containers reaching the host
Chrome). The design trade was "bind wide for reachability, accept LAN
exposure" — #293 makes that trade unacceptable.
Fix: bearer token auth on every HTTP + WebSocket request. The proxy
REFUSES TO START without a token — no unauth mode.
Three-file change:
1. cdp-proxy.cjs
- Read token from CDP_PROXY_TOKEN env OR ~/.molecule-cdp-proxy-token
- Fail loudly if neither is set (exit 1 with install-host-bridge.sh
pointer)
- Validate X-CDP-Proxy-Token header via crypto.timingSafeEqual on
every HTTP request AND every WS upgrade
- Strip the header before forwarding to Chrome (defense in depth —
token never leaks into Chrome's request log)
2. install-host-bridge.sh
- New ensure_token() function generates a 64-char hex token via
openssl rand -hex 32 (fallback to /dev/urandom). Written to
~/.molecule-cdp-proxy-token with chmod 600.
- macOS: token injected into launchd plist EnvironmentVariables
- Linux: written to ~/.molecule-cdp-proxy.env (chmod 600) and
referenced via systemd EnvironmentFile — avoids embedding the
token in the often world-readable unit file
- Install reuses existing token if present (16+ chars); uninstall
preserves token file so a reinstall keeps the same token
- Verify command now includes the token header
- Documents container-side bind-mount pattern
(-v ~/.molecule-cdp-proxy-token:/run/secrets/cdp-proxy-token:ro)
3. lib/connect.js
- New loadProxyToken() with precedence: env var >
/run/secrets/cdp-proxy-token > ~/.molecule-cdp-proxy-token
- Attaches X-CDP-Proxy-Token header on both /json/version probe +
final puppeteer.connect() call via headers: {} option
(puppeteer-core v21+ supports this natively)
- Host-direct fallback (CDP port 9222 on loopback) unchanged —
Chrome's own port is loopback-only so it doesn't need the token
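The token check in item 1 can be sketched in Python for illustration (the real proxy is Node and uses crypto.timingSafeEqual; `check_token` and `forward_headers` are hypothetical names, the header name matches the description above):

```python
import hmac

EXPECTED = "a" * 64  # stand-in for the 64-char hex token from ensure_token()

def check_token(headers):
    presented = headers.get("X-CDP-Proxy-Token", "")
    # hmac.compare_digest is a constant-time comparison, mirroring
    # crypto.timingSafeEqual in cdp-proxy.cjs.
    return hmac.compare_digest(presented.encode(), EXPECTED.encode())

def forward_headers(headers):
    # Strip the token before forwarding so it never lands in Chrome's logs.
    return {k: v for k, v in headers.items() if k != "X-CDP-Proxy-Token"}

good = {"Host": "localhost:9223", "X-CDP-Proxy-Token": EXPECTED}
print(check_token(good))                              # True
print(check_token({"Host": "localhost:9223"}))        # False: missing header
print("X-CDP-Proxy-Token" in forward_headers(good))   # False: stripped
```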
Attack surface now:
- LAN attacker must also steal the token file from the user's home
directory (requires shell access) OR the env var (requires
launchd/systemd process inspection as the same user) — reduces to
local-privilege-escalation territory
- Containers on the same Docker network still have access (they
mount the token by design) — intentional, any workspace-template
install already runs inside the platform's trust boundary
Not fixing in this PR:
- Rate limiting on /json/version (low priority — probe-and-mine is
expensive even without)
- IP allowlist on top of token auth (diminishing returns)
- Rotating the token periodically (user can rm ~/.molecule-cdp-proxy-token
and reinstall)
Closes #293.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Issue surfaced in SEO Builder Run 10 (2026-04-15):
- Marketing Leader found 2 code-level metadata blockers
(white-rock page.tsx override + en.json description >160c)
- Telegram report listed them under "⚠️ ACTION ITEMS (human)"
- User: "it should automatically report to dev team instead of
just asking CEO to do it"
Fix: when seo-builder finds a code-level blocker it can't fix via
DB, it delegates to the Dev Leader sibling workspace via A2A instead
of flagging for human. Only genuine human actions (Yelp email
verification, Google account-linked operations) stay in the human
bucket.
Also clarify marketing-leader/CLAUDE.md so the "DO NOT DELEGATE"
rule doesn't accidentally block this pattern — it's now explicit
that sibling handoff for scope mismatches is allowed (as opposed
to delegating down the hierarchy to spawn sub-agents, which stays
forbidden).
`TranscriptHandler.Get` previously proxied `agent_card->>'url'` directly
to the outbound HTTP client with no validation. Since `agent_card` is
attacker-writable via /registry/register, a workspace-token holder
could point it at cloud metadata (169.254.169.254), link-local ranges,
or non-http schemes and pivot the platform container against internal
services (IMDS, Redis, Postgres, other containers on the Docker net).
Four required fixes per reviewer:
1. `validateWorkspaceURL(u *url.URL)` — runs before `httpClient.Do`:
- scheme must be http/https (rejects file://, gopher://, ftp://)
- cloud metadata hostname blocklist (GCP + Azure + plain "metadata")
- IMDS IP blocklist (169.254.169.254)
- IPv4/IPv6 link-local blocklist (169.254/16, fe80::/10, multicast)
- IPv6 unique-local fd00::/8 blocklist
- loopback + docker.internal still allowed for local dev
2. Query-param allowlist — `target.RawQuery = c.Request.URL.RawQuery`
forwarded everything verbatim, letting a caller smuggle params the
upstream transcript endpoint didn't intend to expose. Replaced with
an allowlist of `since` and `limit`.
3. Sanitized error string — `fmt.Sprintf("workspace unreachable: %v", err)`
leaked the actual internal host/IP via `net.OpError`. Now logs the
real error server-side and returns a plain "workspace unreachable"
to the caller.
4. 10 new regression test cases:
- `TestTranscript_Rejects{CloudMetadataIP,NonHTTPScheme,MetadataHostname,LinkLocalIPv6}`
exercise the handler end-to-end with each attack URL and assert
400 before the HTTP client fires.
- `TestValidateWorkspaceURL` table-drives the validator across
localhost/public/docker-internal (allowed) + IMDS/GCP/Azure/file/
gopher/link-local/multicast (rejected).
- `TestTranscript_ProxyPropagatesAllowlistedQueryParams` asserts
`secret=leak&cmd=rm` is stripped while `since=42&limit=7` pass
through.
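The checks in fix 1 can be re-stated as a Python sketch (the real validator is Go; the blocklist here is abbreviated, with the Azure metadata hostnames elided, and `workspace_url_allowed` is an illustrative name):

```python
import ipaddress
from urllib.parse import urlsplit

# Abbreviated stand-in; the real blocklist covers GCP + Azure metadata hostnames.
BLOCKED_HOSTNAMES = {"metadata", "metadata.google.internal"}

def workspace_url_allowed(raw):
    u = urlsplit(raw)
    if u.scheme not in ("http", "https"):
        return False                      # rejects file://, gopher://, ftp://
    host = (u.hostname or "").lower()
    if host in BLOCKED_HOSTNAMES:
        return False
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        return True                       # non-IP hostname: past the IP checks
    if ip.is_link_local or ip.is_multicast:
        return False                      # 169.254/16 (incl. IMDS), fe80::/10
    if ip.version == 6 and ip in ipaddress.ip_network("fd00::/8"):
        return False                      # IPv6 unique-local
    return True                           # loopback stays allowed for local dev

print(workspace_url_allowed("http://169.254.169.254/latest/meta-data"))  # False
print(workspace_url_allowed("file:///etc/passwd"))                       # False
print(workspace_url_allowed("http://127.0.0.1:8080/transcript"))         # True
```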
Also fixed a pre-existing test bug: `seedWorkspace` was issuing a real
SQL Exec against sqlmock with no expectation set, so the prior test
helpers silently failed in CI. Replaced with `expectWorkspaceURLLookup`
which programs the mock correctly. All 11 tests now pass.
Closes #272
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Add a new `social-publish` skill under the Marketing Leader template
containing verbatim copies of 7 puppeteer-core helper scripts that reliably
publish video posts to Facebook, Instagram, X, LinkedIn, TikTok, YouTube,
and Google Business Profile. Each helper encapsulates hours of debugging
from the 2026-04-15 incident (Lexical editor mirror selection, FB Reel
Next-button disambiguation, post-publish upsell dismissal, TikTok
beforeunload race, GBP iframe scoping, etc).
Rewrite the existing social-media-poster / monitor / engage skills to
delegate publishing to these helpers instead of freestyling puppeteer
per run. Mirror the same delegation note into the social-media-specialist
skill copies so both the Marketing Leader and its specialist agent follow
the same rule.
Not implemented as a platform plugin: the helpers are DOM-specific to
Reno Stars Chrome sessions (profile path, account IDs, hardcoded URLs)
and belong in org-template content rather than a generic platform
capability.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The plugin now ships everything a user needs to wire Chrome on their
host to workspaces inside Docker:
- host-bridge/cdp-proxy.cjs — rewrites the Host header so Chrome accepts
DevTools Protocol connections from container-originated traffic, and
forwards both HTTP (tab list, screenshots) and WebSocket upgrades.
- host-bridge/install-host-bridge.sh — one-command install on macOS
(launchd user agent) or Linux (systemd --user unit). `uninstall`
subcommand cleans up. No root required.
- skills/browser-automation/lib/connect.js — the mandatory helper
consumers already use; re-exported here so the plugin is self-contained.
- SKILL.md — documents the one-time host setup and the existing
defaultViewport:null + disconnect-not-close rules. The 2026-04-15
social-media-poster incident (3h debug chasing phantom "sessions
expired" errors on an 800x600 viewport) is captured inline.
Smoke-tested on macOS: install script registered the agent, proxy
listens on 0.0.0.0:9223, and a live workspace container
(ws-bee4d521-3d3) successfully reached Chrome via
host.docker.internal:9223.
This replaces ad-hoc per-user CDP proxies and makes the plugin
usable by any Molecule operator, not just the Reno Stars org.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The #215-class fix in memory.py (859a60e) adds headers=_headers to the
direct-httpx commit_memory + search_memory paths, but 9 existing tests
in test_memory.py had FakeAsyncClient.post/get signatures like
`async def post(self, url, json):` with no headers kwarg. Python
raised TypeError: unexpected keyword argument 'headers' on every call,
commit_memory caught it and returned {success: False}, tests failed.
Fixes applied:
1. Add `headers=None` to every FakeAsyncClient.post + .get signature
across test_memory.py. Uses replace_all so all 9+ fakes match.
2. For tests that capture a single captured["url"]:
- test_commit_memory_uses_awareness_client_when_configured
- test_commit_memory_uses_platform_fallback_without_awareness
- test_commit_memory_httpx_201_success
filter to only capture /memories URLs. Without the filter, the
subsequent _record_memory_activity fire-and-forget post to /activity
overwrites captured["url"] and the assertion fails.
3. For test_commit_memory_promoted_packet_logs_skill_promotion: bump
expected captured["calls"] from 3 to 4. Pre-fix, the memory_write
/activity call (from _record_memory_activity #125) was silently
dropped because the fake rejected headers=; post-fix it succeeds
and lands in the captured list alongside the skill_promotion
/activity and /registry/heartbeat calls. Also extend that test's
fake to accept /registry/heartbeat (was raising AssertionError).
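Fixes 1 and 2 together look roughly like this (an illustrative sketch, not the actual test_memory.py fakes):

```python
import asyncio

captured = {}

class FakeAsyncClient:
    # Fix 1: the fake must accept the headers kwarg the production code now passes.
    async def post(self, url, json=None, headers=None):
        # Fix 2: only capture the /memories URL, so the later fire-and-forget
        # /activity post cannot overwrite captured["url"].
        if "/memories" in url:
            captured["url"] = url
        return {"status": 201}

async def run():
    client = FakeAsyncClient()
    await client.post("/workspaces/ws-1/memories", json={}, headers={"X": "Y"})
    await client.post("/workspaces/ws-1/activity", json={})  # activity log

asyncio.run(run())
print(captured["url"])  # /workspaces/ws-1/memories, not clobbered by /activity
```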
Total: 36/36 memory tests pass. Full workspace-template suite 1189/1189.
This is strictly test-infrastructure work — zero production code
changed. CI never caught the break because the Mac mini runner has
been stuck for ~4 hours (tick-33/34/35/36 reports).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Context: platform now gates `GET /workspaces/:id/memories` and
`POST /workspaces/:id/memories` behind workspace auth (post-#166 /
#167 AdminAuth wave). The `builtin_tools.memory` tool had three HTTP
call sites:
1. commit_memory POST fallback (line 121) ← NO auth_headers
2. search_memory GET fallback (line 269) ← NO auth_headers
3. activity-log helper POST (line 371) ← HAS auth_headers
Path 3 was already fixed. Paths 1 + 2 silently 401 on every call, but the
tool's error-handling path returns `{"success": False}` without surfacing
the auth failure to the agent. Result: the agent sees an empty memory
backlog on every call and assumes there's nothing to do.
## Discovered today
Technical Researcher is the first workspace opted in to the idle-loop
pilot from #216 (reflection-on-completion pattern). The pilot fires
every 10 min, the agent calls `search_memory "research-backlog:..."` as
the first step, gets back an empty result, writes "tr-idle clean" to
memory, and stops. Clean-idle outcome every tick, 9 consecutive ticks.
Looking at TR's activity_logs response bodies:
"Memory auth has failed on every tick this session — skipping the call"
"tr-idle — step 2 done. Memory unavailable (auth token missing..."
"tr-idle 04:15 — clean (memory auth still down, 3rd consecutive tick)"
The AGENT knew the memory calls were failing. The platform 401 error
was surfacing in the tool response, but our instrumentation wasn't
counting it as a defect — we saw "tr-idle clean" writes and assumed
the pilot was working as designed. It was actually silently broken.
## Fix
Import `platform_auth.auth_headers` lazily (same pattern as the
activity-log path already uses), attach `headers=_auth()` to both
httpx call sites. Matches the #225 fix for the register call.
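A minimal sketch of the pattern, assuming a `platform_auth.auth_headers` helper as described above (`_auth`, `search_memory`, and `FakeClient` are illustrative names, not the real memory.py shapes):

```python
import asyncio

def _auth():
    try:
        # Lazy import, same pattern the activity-log path already uses.
        from platform_auth import auth_headers
        return auth_headers()
    except Exception:
        return {}  # pre-registration / dev-mode: no token yet

async def search_memory(client, workspace_id, query):
    # The fix: headers=_auth() attached at the direct-httpx call site.
    # Previously omitted, so the platform 401'd and the tool swallowed it.
    return await client.get(
        f"/workspaces/{workspace_id}/memories",
        params={"q": query},
        headers=_auth(),
    )

class FakeClient:
    async def get(self, url, params=None, headers=None):
        return {"url": url, "headers": headers}

out = asyncio.run(search_memory(FakeClient(), "ws-tr", "research-backlog"))
print(out["url"])  # /workspaces/ws-tr/memories
```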
## Not in this PR
- awareness_client.py also makes HTTP calls to a separate AWARENESS_URL
service (not the platform), which may or may not need the same fix
depending on that service's auth posture. Out of scope for this PR.
- TR's specific token problem: TR's `/configs/.auth_token` file is
empty because it was re-provisioned via `apply_template: true`
(recovery path from the failed-volume incident) and Phase 30.1
only mints a token on FIRST register per workspace. This fix
doesn't help TR until TR gets a fresh token — tracked separately.
## Test plan
- [x] Python syntax check on memory.py passes
- [ ] CI: all memory-related tests should still pass (the new code
paths only add header passing, no shape change)
- [ ] Real-world verification: after TR gets a fresh token, idle-loop
pilot should produce a dispatch within 10 min (seeded backlog
already in place from this session)
## Related
- #215 / #225 — register call auth_headers fix (same pattern)
- #216 — TR idle-loop pilot (couldn't measure until this lands)
- #166 / #167 — platform AdminAuth wave that surfaced this gap