Wraps the canvas root so every tenant-subdomain request checks for a
valid session and bounces to app.moleculesai.app/cp/auth/login with a
return_to pointing back at the current URL. Local dev + Vercel preview
URLs + apex pass through unchanged.
Files:
- canvas/src/lib/auth.ts: fetchSession() probes /cp/auth/me
(credentials:include for cross-origin cookie); returns Session on 200,
null on 401 (anonymous, no throw), throws on 5xx so a transient
outage doesn't leak the UI to an unauthenticated visitor.
- canvas/src/lib/auth.ts: redirectToLogin() builds the cp login URL
with window.location.href as return_to; CP's isSafeReturnTo check
rejects cross-domain bounces.
- canvas/src/components/AuthGate.tsx: client component wrapping
children. State machine: loading → authenticated | anonymous. In
non-SaaS mode (no tenant slug) it skips the gate entirely.
- canvas/src/app/layout.tsx: wraps the root body in <AuthGate>.
Tests: +6 auth.ts (200 / 401 null / 5xx throw / credentials:include /
redirectToLogin href + signup variant). Full suite 453 green (was 447).
Pairs with molecule-controlplane PR #16 (return_to cookie handshake
on the cp side).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Yesterday's scheduler-died incident (#85) was one instance of a systemic
bug: every long-running goroutine in the platform lacks panic recovery
and exposes no liveness signal. In a multi-tenant SaaS deployment, a
single tenant's bad data panicking any subsystem takes down the
subsystem for every tenant, silently, with all standard health probes
still green. That is a scale-of-one sev-1.
This PR:
1. Introduces `platform/internal/supervised/` with two primitives:
a. RunWithRecover(ctx, name, fn) — runs fn in a recover wrapper.
On panic, logs the stack and restarts fn with exponential backoff
(1s → 2s → 4s → … → 30s cap). On clean return (fn decided to
stop) it returns; on ctx.Done it exits cleanly. Sketch after
this list.
b. Heartbeat(name) + LastTick(name) + Snapshot() + IsHealthy(names,
staleThreshold) — shared in-memory liveness registry. Every
subsystem calls Heartbeat(name) at the end of each tick so
operators can distinguish "goroutine alive and healthy" from
"alive but stuck inside a single tick".
2. Wraps every `go X.Start(ctx)` in main.go:
- broadcaster.Subscribe (Redis pub/sub relay → WebSocket)
- registry.StartLivenessMonitor
- registry.StartHealthSweep
- scheduler.Start (the one that died yesterday)
- channelMgr.Start (Telegram / Slack)
3. Adds `supervised.Heartbeat("scheduler")` inside the scheduler tick
loop as the first end-to-end demonstration. Follow-up PRs will add
heartbeats to the other four subsystems.
4. Adds a `GET /admin/liveness` endpoint returning per-subsystem
last_tick_at + seconds_ago. Operators can poll this and alert on
any subsystem whose seconds_ago exceeds 2× its cron/tick interval
(handler sketch further below).
5. Unit tests for RunWithRecover (clean return no restart; panic
restarts with backoff; ctx cancel stops restart loop) and for the
liveness registry.
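A minimal sketch of the two primitives as described above; the exact shipped code differs, and the registry half's names and log calls are illustrative:

package supervised

import (
    "context"
    "log"
    "runtime/debug"
    "sync"
    "time"
)

// RunWithRecover runs fn, restarting it with exponential backoff after a
// panic, until fn returns on its own or ctx is cancelled.
func RunWithRecover(ctx context.Context, name string, fn func(context.Context)) {
    backoff := time.Second
    for {
        clean := func() (ok bool) {
            defer func() {
                if r := recover(); r != nil {
                    log.Printf("[%s] panic: %v\n%s", name, r, debug.Stack())
                }
            }()
            fn(ctx)
            return true // fn chose to stop; no restart
        }()
        if clean || ctx.Err() != nil {
            return
        }
        select {
        case <-ctx.Done():
            return
        case <-time.After(backoff): // 1s → 2s → 4s → … → 30s cap
        }
        if backoff *= 2; backoff > 30*time.Second {
            backoff = 30 * time.Second
        }
    }
}

// Shared in-memory liveness registry (Heartbeat / LastTick shown; Snapshot
// and IsHealthy follow the same pattern).
var (
    mu    sync.Mutex
    ticks = map[string]time.Time{}
)

func Heartbeat(name string) {
    mu.Lock()
    ticks[name] = time.Now()
    mu.Unlock()
}

func LastTick(name string) time.Time {
    mu.Lock()
    defer mu.Unlock()
    return ticks[name]
}

main.go's bare `go scheduler.Start(ctx)` then reads roughly as
`go supervised.RunWithRecover(ctx, "scheduler", scheduler.Start)`.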
Net new code: ~160 lines + ~100 lines of tests. Refactor of main.go:
~10 lines changed. No behavior change on the happy path; only what
happens on a panic changes.
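For the endpoint in point 4, a rough handler shape; route registration, the import path, and the Snapshot() signature (name → last tick time) are assumptions here:

import (
    "encoding/json"
    "net/http"
    "time"

    "molecule/platform/internal/supervised" // module prefix assumed
)

func livenessHandler(w http.ResponseWriter, r *http.Request) {
    type entry struct {
        LastTickAt time.Time `json:"last_tick_at"`
        SecondsAgo float64   `json:"seconds_ago"`
    }
    out := map[string]entry{}
    for name, t := range supervised.Snapshot() {
        out[name] = entry{LastTickAt: t, SecondsAgo: time.Since(t).Seconds()}
    }
    w.Header().Set("Content-Type", "application/json")
    _ = json.NewEncoder(w).Encode(out)
}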
Closes #92. Supersedes the local recover added to scheduler.go in
#90 (kept conceptually, but now via the shared helper).
A workspace that self-registers with a 127.0.0.x URL on first INSERT
could redirect A2A proxy traffic back to the platform itself (SSRF).
The previous fix only blocked 169.254.0.0/16 (cloud metadata).
Add 127.0.0.0/8 to validateAgentURL's blocklist. RFC-1918 private
ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) remain allowed —
Docker container networking depends on them.
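A minimal sketch of the check, assuming validateAgentURL keeps its current shape; blockedAgentHost is an illustrative helper name, not the real function:

import (
    "net"
    "net/url"
)

// blockedAgentHost reports whether the URL's host is a loopback IP literal
// (127.0.0.0/8, ::1) or a link-local address (169.254.0.0/16 and friends).
// Hostnames such as "localhost" are not IP literals, so they pass through,
// and RFC-1918 addresses are deliberately not checked here.
func blockedAgentHost(rawURL string) (bool, error) {
    u, err := url.Parse(rawURL)
    if err != nil {
        return false, err
    }
    ip := net.ParseIP(u.Hostname())
    if ip == nil {
        return false, nil // named host, e.g. "localhost": allowed by name
    }
    return ip.IsLoopback() || ip.IsLinkLocalUnicast(), nil
}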
Safe because the provisioner writes 127.0.0.1 URLs via direct SQL
UPDATE, not through /registry/register, so the UPSERT CASE that
preserves provisioner URLs is unaffected. Local-dev agents can still
register using "localhost" by name (hostname, not IP literal).
Tests: removed "valid localhost http" case (now correctly rejected),
added "valid localhost name" + three loopback-block assertions.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Canvas will be served at <slug>.moleculesai.app (Vercel). API calls go
cross-origin to https://app.moleculesai.app. This commit wires the
client side:
- canvas/src/lib/tenant.ts: getTenantSlug() derives the slug from
window.location.hostname, case-insensitive, matching the control
plane's reservedSubdomains list (app/www/api/admin/…). Server-side
+ localhost + Vercel preview URLs + apex all return "" so local dev
keeps working.
- canvas/src/lib/api.ts: adds X-Molecule-Org-Slug header + sets
credentials:"include" on every fetch. The control plane's CORS
middleware allows the origin + credentials; the session cookie has
Domain=.moleculesai.app so the browser ships it.
- canvas/src/lib/api/secrets.ts: same treatment (secrets API uses its
own fetch helper — shared slug+credentials logic applied).
Tests: +6 (tenant.test.ts covers slug / reserved / case / non-SaaS /
preview URL / apex). Full canvas suite 447/447 green.
Not in this PR:
- WS URL derivation for terminal/socket.ts (separate follow-up; WS
needs its own slug-aware URL and the canvas terminal isn't used in
SaaS launch day-one).
- Next.js rewrites (decided against; cross-origin with credentials
is cleaner than path-level rewrites for session cookies).
Deploys to Vercel once merged — no manual config needed (env already set).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The scheduler died silently on 2026-04-14 14:21 UTC and stayed dead for
12+ hours. Platform restart didn't recover it. Root cause: tick() and
fireSchedule() goroutines have no panic recovery. A single bad row, bad
cron expression, DB blip, or transient panic anywhere in the chain
permanently kills the scheduler goroutine — and the only signal to an
operator is "no crons firing", which is invisible if you're not watching.
Specifically:
func (s *Scheduler) Start(ctx context.Context) {
    for {
        select {
        case <-ticker.C:
            s.tick(ctx) // <- if this panics, the for-loop exits forever
        }
    }
}
And inside tick:
go func(s2 scheduleRow) {
    defer wg.Done()
    defer func() { <-sem }()
    s.fireSchedule(ctx, s2) // <- panic here propagates up wg.Wait()
}(sched)
Two `defer recover()` additions (sketch after this list):
1. In Start's tick wrapper — a panic in tick() (DB scan, cron parse,
row processing) is logged and the next tick fires normally.
2. In each fireSchedule goroutine — a single bad workspace can't take
the rest of the batch down.
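Roughly what addition 1 looks like; safeTick is an illustrative name, and the log call is whatever the package already uses (assumes "log" and "runtime/debug" imports):

// Wraps tick so a panic is logged and swallowed; the next ticker fire
// reaches Start's loop as usual.
func (s *Scheduler) safeTick(ctx context.Context) {
    defer func() {
        if r := recover(); r != nil {
            log.Printf("scheduler: tick panicked: %v\n%s", r, debug.Stack())
        }
    }()
    s.tick(ctx)
}

Start then calls s.safeTick(ctx) where it called s.tick(ctx), and each
fireSchedule goroutine gets the same deferred recover at its top (addition 2).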
Plus a liveness watchdog (sketch after these bullets):
- Scheduler now records `lastTickAt` after each successful tick.
- New methods `LastTickAt()` and `Healthy()` (true if last tick within
2× pollInterval = 60s).
- Initialised at Start so Healthy() returns true on a fresh process.
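The watchdog state is small; roughly (mutex and field names illustrative):

// Called at the end of every successful tick.
func (s *Scheduler) markTick() {
    s.mu.Lock()
    s.lastTickAt = time.Now()
    s.mu.Unlock()
}

func (s *Scheduler) LastTickAt() time.Time {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.lastTickAt
}

// Healthy reports whether the last tick landed within 2× the poll interval
// (60s for the 30s pollInterval).
func (s *Scheduler) Healthy() bool {
    return time.Since(s.LastTickAt()) <= 2*s.pollInterval
}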
Endpoint plumbing for /admin/scheduler/health is a follow-up — needs
threading the scheduler instance through router.Setup(). Documented
on #85.
Closes the silent-outage failure mode of #85. The other proposed
fixes (force-kill on /restart hang, active_tasks watchdog) are
separate concerns tracked in #85's comments.
Point-in-time snapshot of the live SaaS infrastructure + which phases
are done vs in-flight vs not started. Links to molecule-controlplane's
own PLAN for deeper operator detail.
Pairs with molecule-controlplane PR #8. Fly's proxy returns 502 if the
fly-replay state value contains '=', so the control plane now puts the
bare UUID in state= (no 'org-id=' prefix). TenantGuard now treats the
whole 'state=...' value as the org id.
Today's crons are all REVIEW (Security audit, UIUX audit, QA tests). Nothing
actively pushes the team to EVOLVE the four levers the CEO named: templates,
plugins, channels, watchlist. The team-runs-24/7 goal needs both — defensive
reviews AND offensive evolution.
Adds 4 new schedules:
1. Research Lead — Daily ecosystem watch (0 8 * * *)
Survey github.com/trending + HN + AI-blogs for new agent-infra projects
from the last 24h. Add 1-3 entries to docs/ecosystem-watch.md per day,
commit to chore/eco-watch-YYYY-MM-DD branch + push + PR. Re-enables
the watchlist pipeline that was paused earlier today.
2. Technical Researcher — Weekly plugin curation (0 9 * * 1, Mondays)
Inventory plugins/ + builtin_tools/ + recent landings. Identify gaps
(builtin not exposed as plugin; role missing extras; rarely-used plugin
in defaults). Survey upstream (claude.ai cookbook, MCP servers,
anthropic/openai/langchain blogs). File 1-3 plugin proposals per week
as GH issues with concrete integration sketches.
3. Dev Lead — Daily template fitness audit (30 8 * * *)
Health-check the template itself: stale system prompts, schedules not
firing (catches the #85 scheduler-died failure mode), roles missing
plugins they should have, missing crons, channel gaps. File issues for
any drift. Designed to catch the silent-stall pattern from today's
incident.
4. DevOps Engineer — Weekly channel expansion survey (0 10 * * 1, Mondays)
PM is the only role with a channel today (Telegram). Survey what
channel infra the platform supports + what role-channel pairings would
actually help (Security→email-on-critical, DevOps→Slack-on-CI-break,
etc). File channel-proposal issues.
All four crons end with the structured audit_summary routing per #51/#75
(category, severity, issues, top_recommendation) so they integrate with
the platform-level category_routing PM uses to fan out work. The template's
existing category_routing block already maps research / plugins / template /
channels — these new crons consume exactly those slots.
Also drops three stale "# UNION with defaults (#71)" comments left from
the cleanup PR — those plugins lists are now self-documenting after #71.
Aligns with the north-star goal: the team should run 24/7 AND keep getting better
across templates / plugins / channels / watchlist. This PR closes the gap
where the "review" half of the loop was running but the "evolve" half had
no active driver.
Header implied the whole system was future work, but the section body
says the core pieces (per-runtime adapters, hybrid resolver, AgentskillsAdaptor,
/plugins filter, SDK, agentskills.io spec compliance) all landed. Only
the bullets under 'Deferred, not blocking' are actually open.
Rename + lead with 'The system is done.' so a skim reader doesn't
misfile the whole topic as unshipped.
Phase B.3 pair-fix to the control plane's fly-replay state change.
Background: the private molecule-controlplane's router emits
`fly-replay: app=X;instance=Y;state=org-id=<uuid>`. Fly's edge replays
the request to the tenant and injects `Fly-Replay-Src: instance=Z;...;
state=org-id=<uuid>` on the replayed request. But response headers from
the cp (like X-Molecule-Org-Id) never travel to the replayed tenant —
only the state= param does.
TenantGuard now checks both paths in order:
1. Primary: X-Molecule-Org-Id header (direct-access path, e.g. molecli)
2. Secondary: Fly-Replay-Src's `state=org-id=<uuid>` segment
(production fly-replay path)
Either matching configured MOLECULE_ORG_ID → allow. Neither matches →
404 (still don't leak tenant existence).
New helper orgIDFromReplaySrc parses the semicolon-separated Fly-Replay-
Src header per Fly's format. Covered by a table-driven test with 7 cases
including malformed + empty-header + wrong-state-key.
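Roughly what the parse does; error handling and the exact return shape of the shipped helper may differ (assumes a "strings" import):

// orgIDFromReplaySrc pulls the org UUID out of a Fly-Replay-Src value of the
// form "instance=Z;...;state=org-id=<uuid>". Returns "" when the state
// segment is missing, malformed, or doesn't use the org-id= key.
func orgIDFromReplaySrc(header string) string {
    for _, seg := range strings.Split(header, ";") {
        key, val, ok := strings.Cut(strings.TrimSpace(seg), "=")
        if !ok || key != "state" {
            continue
        }
        id, found := strings.CutPrefix(val, "org-id=")
        if !found {
            return ""
        }
        return id
    }
    return ""
}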
Tests: +3 new TestTenantGuard_* (FlyReplaySrc match, mismatch, table).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Post-mortem on the failed publish-platform-image run on main (PR #82):
Fly's Docker registry requires username EXACTLY equal to "x". My
code-review "readability fix" changing it to "molecule-ai" caused
every push to return 401 Unauthorized. Verified locally:
echo $FLY_API_TOKEN | docker login registry.fly.io -u x --password-stdin
→ Login Succeeded
echo $FLY_API_TOKEN | docker login registry.fly.io -u molecule-ai --password-stdin
→ 401 Unauthorized
Lesson: don't second-guess docs that specify a literal value. Comment
now says "MUST be literal 'x'" with a 2026-04-15 verification note to
prevent future regressions.
Code-review process improvement: when reviewing a change against a
vendor API, prefer "preserve exact doc-specified values" over readability
suggestions. Logged as a cron-learning.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Addresses PR #82 code review: 🟡×3 + 🔵×5.
- Fly registry login username: 'x' → 'molecule-ai' + explanatory comment.
- Build & push split into two steps (GHCR / Fly registry) so a single-
registry outage can't fail the other. Second step uses 'if: always()'
to ensure Fly mirror runs even if GHCR push flakes.
- docs/runbooks/saas-secrets.md: full secret map + rotation procedures
for every SaaS credential, with danger-case callouts. Documents the
coupled FLY_API_TOKEN (lives in GHA secret AND fly secrets — must be
rotated in both).
- CLAUDE.md: new 'SaaS ops' section linking to the runbook.
Keeps ghcr.io/molecule-ai/platform private (per CEO direction — open-
source when full SaaS ships) while still letting the private control
plane's Fly provisioner boot tenant machines: Fly auto-authenticates
same-org machines against registry.fly.io, no per-tenant pull
credentials to wire.
Workflow now logs into both GHCR (using built-in GITHUB_TOKEN) and
Fly registry (using FLY_API_TOKEN secret) and pushes the same image to
four tags total:
- ghcr.io/molecule-ai/platform:latest
- ghcr.io/molecule-ai/platform:sha-<short>
- registry.fly.io/molecule-tenant:latest
- registry.fly.io/molecule-tenant:sha-<short>
Secret added via `gh secret set FLY_API_TOKEN` on the public repo.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>