molecule-core/platform/internal/supervised/supervised.go
rabbitblood e4535560cf fix(platform): panic-recovering supervisor for every background goroutine (#92)
Yesterday's scheduler-died incident (#85) was one instance of a systemic
bug: every long-running goroutine in the platform lacks panic recovery
and exposes no liveness signal. In a multi-tenant SaaS deployment, a
single tenant's bad data panicking any subsystem takes down that
subsystem for every tenant, silently, with all standard health probes
still green. That is a sev-1 that any single tenant can trigger.

This PR:

1. Introduces `platform/internal/supervised/` with two primitives:

   a. RunWithRecover(ctx, name, fn) — runs fn in a recover wrapper.
      On panic, it logs the panic with its stack trace and restarts fn
      with exponential backoff (1s → 2s → 4s → … → 30s cap). On clean
      return (fn decided to stop), it returns. On ctx.Done, it stops
      cleanly.

   b. Heartbeat(name) + LastTick(name) + Snapshot() + IsHealthy(names,
      staleThreshold) — shared in-memory liveness registry. Every
      subsystem calls Heartbeat(name) at the end of each tick so
      operators can distinguish "goroutine alive and healthy" from
      "alive but stuck inside a single tick".

2. Wraps every `go X.Start(ctx)` in main.go (wiring sketch after this
   list):
   - broadcaster.Subscribe   (Redis pub/sub relay → WebSocket)
   - registry.StartLivenessMonitor
   - registry.StartHealthSweep
   - scheduler.Start         (the one that died yesterday)
   - channelMgr.Start        (Telegram / Slack)

3. Adds `supervised.Heartbeat("scheduler")` inside the scheduler tick
   loop as the first end-to-end demonstration. Follow-up PRs will add
   heartbeats to the other four subsystems.

4. Adds `GET /admin/liveness` endpoint returning per-subsystem
   last_tick_at + seconds_ago (handler sketch after this list).
   Operators can poll it and alert on any subsystem whose seconds_ago
   exceeds 2x its cron/tick interval.

5. Unit tests for RunWithRecover (clean return → no restart; panic →
   restart with backoff; ctx cancel stops the restart loop) and for
   the liveness registry; the panic case is sketched after this list.
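
For concreteness, a minimal runnable sketch of the wiring pattern from
items 2 and 3. fakeScheduler, the 1s tick, and the import path are
illustrative stand-ins, not the real main.go:

    package main

    import (
        "context"
        "os/signal"
        "syscall"
        "time"

        "molecule-core/platform/internal/supervised"
    )

    // fakeScheduler stands in for scheduler.Start: a tick loop that
    // heartbeats at the end of every healthy iteration.
    func fakeScheduler(ctx context.Context) {
        ticker := time.NewTicker(1 * time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return
            case <-ticker.C:
                // ... run due jobs here ...
                supervised.Heartbeat("scheduler")
            }
        }
    }

    func main() {
        ctx, stop := signal.NotifyContext(context.Background(),
            syscall.SIGINT, syscall.SIGTERM)
        defer stop()

        // One line per subsystem. A panic inside fakeScheduler is now
        // logged and restarted with backoff instead of silently
        // killing the goroutine.
        go supervised.RunWithRecover(ctx, "scheduler", fakeScheduler)

        <-ctx.Done()
    }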
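
The /admin/liveness handler from item 4 is shaped roughly like this
(field names are the real ones; the handler name is illustrative, and
the encoding/json, net/http, and time imports are elided):

    func livenessHandler(w http.ResponseWriter, r *http.Request) {
        type entry struct {
            LastTickAt time.Time `json:"last_tick_at"`
            SecondsAgo float64   `json:"seconds_ago"`
        }
        now := time.Now()
        snap := supervised.Snapshot()
        out := make(map[string]entry, len(snap))
        for name, last := range snap {
            out[name] = entry{
                LastTickAt: last,
                SecondsAgo: now.Sub(last).Seconds(),
            }
        }
        w.Header().Set("Content-Type", "application/json")
        _ = json.NewEncoder(w).Encode(out)
    }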
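
And the panic-restart unit test from item 5, roughly (imports elided;
the committed test may assert differently):

    func TestRunWithRecoverRestartsAfterPanic(t *testing.T) {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        var runs atomic.Int32
        go supervised.RunWithRecover(ctx, "test", func(context.Context) {
            if runs.Add(1) == 1 {
                panic("boom") // first run panics; expect a restart
            }
            <-ctx.Done() // later runs block until the test ends
        })

        // initialBackoff is 1s, so the restart should land well inside 3s.
        deadline := time.After(3 * time.Second)
        for runs.Load() < 2 {
            select {
            case <-deadline:
                t.Fatalf("fn not restarted after panic; runs=%d", runs.Load())
            case <-time.After(10 * time.Millisecond):
            }
        }
    }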

Net new code: ~160 lines + ~100 lines of tests. Refactor of main.go:
~10 lines changed. No behavior change on the happy path; only what
happens on a panic changes.

Closes #92. Supersedes the local recover added to scheduler.go in
#90 (kept conceptually, but now via the shared helper).
2026-04-14 20:34:18 -07:00

143 lines
4.2 KiB
Go

// Package supervised provides a panic-recovering supervisor for long-running
// background goroutines on the platform. Every "go X.Start(ctx)" invocation
// in main.go should go through [RunWithRecover] so a single panic from one
// tenant's data cannot silently kill a subsystem that serves every tenant.
//
// Incident that motivated this (issue #85, 2026-04-14):
//
// The cron scheduler goroutine died silently at 14:21 UTC and stayed dead
// for 12+ hours. Platform restart didn't recover it. Root cause: no
// defer recover() in the tick loop. Observable signals (HTTP 200, container
// healthy, DB healthy) all stayed green — only the subsystem was dead.
//
// In a multi-tenant SaaS deployment the blast radius is every tenant
// simultaneously, which is exactly the class of failure we cannot afford.
package supervised

import (
	"context"
	"log"
	"runtime/debug"
	"sync"
	"time"
)

// Default backoff bounds for RunWithRecover restarts.
const (
	initialBackoff = 1 * time.Second
	maxBackoff     = 30 * time.Second
)

// RunWithRecover runs fn in a recover wrapper. If fn panics, the panic is
// logged with its stack trace and fn is restarted after an exponential
// backoff (capped at maxBackoff). The loop exits cleanly when ctx is done.
//
// fn is expected to be a long-running loop (e.g. "for { select { ticker ... } }").
// If fn returns without panicking (e.g. on ctx.Done), RunWithRecover returns.
//
//	go supervised.RunWithRecover(ctx, "scheduler", func(c context.Context) {
//		scheduler.Start(c)
//	})
//
// name is used in log lines and by the liveness registry below.
func RunWithRecover(ctx context.Context, name string, fn func(context.Context)) {
	backoff := initialBackoff
	for {
		select {
		case <-ctx.Done():
			log.Printf("supervised[%s]: context done; stopping", name)
			return
		default:
		}

		panicked := runOnce(ctx, name, fn)

		// Clean return → the goroutine decided to stop (likely ctx.Done inside fn).
		// Don't restart.
		if !panicked {
			log.Printf("supervised[%s]: returned cleanly; not restarting", name)
			return
		}

		// Panic → back off and restart.
		select {
		case <-ctx.Done():
			return
		case <-time.After(backoff):
		}
		if backoff < maxBackoff {
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}
}

// runOnce invokes fn with recover. Returns true iff fn panicked.
func runOnce(ctx context.Context, name string, fn func(context.Context)) (panicked bool) {
	defer func() {
		if r := recover(); r != nil {
			panicked = true
			log.Printf("supervised[%s]: PANIC recovered: %v\n%s", name, r, debug.Stack())
		}
	}()
	fn(ctx)
	return false
}

// --- Liveness registry -----------------------------------------------------
//
// Each subsystem calls Heartbeat(name) at the end of each tick / iteration.
// Operators read the registry via /admin/liveness to detect stuck-but-not-
// crashed subsystems (e.g. a tick that deadlocks without panicking).
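//
// A typical tick loop (illustrative sketch; runDueJobs and the names are
// hypothetical, each subsystem's real loop lives in its own package):
//
//	ticker := time.NewTicker(interval)
//	defer ticker.Stop()
//	for {
//		select {
//		case <-ctx.Done():
//			return
//		case <-ticker.C:
//			runDueJobs(ctx)                   // the subsystem's actual work
//			supervised.Heartbeat("scheduler") // last statement of a healthy tick
//		}
//	}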
var (
	livenessMu sync.RWMutex
	lastTicks  = map[string]time.Time{}
)

// Heartbeat records that subsystem `name` is alive as of now.
func Heartbeat(name string) {
	livenessMu.Lock()
	lastTicks[name] = time.Now()
	livenessMu.Unlock()
}

// LastTick returns the wall-clock time of the most recent Heartbeat for
// subsystem `name`. Returns the zero time if the subsystem has never
// heartbeated.
func LastTick(name string) time.Time {
	livenessMu.RLock()
	defer livenessMu.RUnlock()
	return lastTicks[name]
}

// Snapshot returns a copy of every subsystem's last-tick time, for admin
// endpoints.
func Snapshot() map[string]time.Time {
	livenessMu.RLock()
	defer livenessMu.RUnlock()
	out := make(map[string]time.Time, len(lastTicks))
	for k, v := range lastTicks {
		out[k] = v
	}
	return out
}

// IsHealthy reports whether every subsystem in `expected` has heartbeated
// within the last `staleThreshold`. Use it from /health (or a strict variant
// of it) to surface stuck subsystems to an external orchestrator.
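//
// Example (sketch; the subsystem names and threshold are illustrative):
//
//	ok, stale := supervised.IsHealthy([]string{"scheduler", "broadcaster"}, 2*time.Minute)
//	if !ok {
//		log.Printf("liveness: stale subsystems: %v", stale)
//	}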
func IsHealthy(expected []string, staleThreshold time.Duration) (healthy bool, stale []string) {
	livenessMu.RLock()
	defer livenessMu.RUnlock()
	now := time.Now()
	for _, name := range expected {
		last, ok := lastTicks[name]
		if !ok || now.Sub(last) > staleThreshold {
			stale = append(stale, name)
		}
	}
	return len(stale) == 0, stale
}