The post-fire UPDATE after s.proxy.ProxyA2ARequest() was using fireCtx,
which derives from the outer ctx passed into fireSchedule(). If that ctx
is cancelled — HTTP timeout, graceful shutdown, or any upstream deadline —
ExecContext returns context.Canceled and the UPDATE is silently skipped,
leaving next_run_at stale and causing the schedule to re-fire on the
next tick.
Fix: create a dedicated updateCtx from context.Background() with a 5s
deadline, independent of the outer ctx hierarchy. Also improve the
error log to include the schedule name for easier debugging.
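A minimal sketch of the decoupled update, assuming illustrative table and
column names and a hypothetical advanceNextRun helper (the real code lives
inline in fireSchedule()):

```go
package scheduler

import (
	"context"
	"database/sql"
	"log"
	"time"
)

// advanceNextRun runs the post-fire UPDATE on a context detached from the
// request-scoped fireCtx, so an upstream cancellation can no longer skip
// the next_run_at advance. Table/column names are illustrative.
func advanceNextRun(db *sql.DB, scheduleID, scheduleName string, nextRunAt time.Time) {
	updateCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	if _, err := db.ExecContext(updateCtx,
		`UPDATE schedules SET next_run_at = $1 WHERE id = $2`,
		nextRunAt, scheduleID,
	); err != nil {
		log.Printf("schedule %q (%s): post-fire next_run_at update failed: %v",
			scheduleName, scheduleID, err)
	}
}
```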
Complements PR #1241 (fix/f1089-scheduler-ctx-fix-main), which fixes
the goroutine-panic path in tick(); this fix covers the wider case of
a normal return with a cancelled ctx after the proxy call.
F1089 | Severity: HIGH+security
Co-authored-by: Molecule AI Infra Lead <infra-lead@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(auth): F1094 — requireCallerOwnsOrg reads org_id not created_by (#1200)
Root cause: requireCallerOwnsOrg (org_plugin_allowlist.go:116) was
reading org_api_tokens.created_by to determine the caller's org
workspace ID. But created_by is a provenance label ("session",
"admin-token", "org-token:<prefix>"), never a UUID, so the comparison
callerOrg != targetOrgID was always true → every org-token caller
got 403 on /orgs/:id/plugins/allowlist routes.
Fix:
- Migration 036: adds org_id UUID column (nullable) to org_api_tokens
with index. Existing pre-migration tokens get org_id=NULL → deny
by default (safer than cross-org access).
- orgtoken.Issue: takes new orgID param; stores in org_id column.
- orgtoken.OrgIDByTokenID: new helper reads org_id for a token ID.
Returns ("", nil) for NULL/unanchored tokens.
- requireCallerOwnsOrg: now calls OrgIDByTokenID instead of reading
created_by. Pre-migration tokens with org_id=NULL get callerOrg=""
→ denied (safer). See the sketch after this list.
- orgTokenActor (org_tokens.go): returns (createdBy, orgID) pair.
Token minted via another org token gets its org_id set at mint time.
Session/ADMIN_TOKEN callers get orgID="".
- orgtoken.Token struct: adds OrgID field for list display.
- orgtoken.List: selects org_id alongside other columns.
- Updated existing tests for new Issue signature.
- Added 10 regression tests covering: happy path, unanchored denial,
cross-org denial, session bypass, DB error denial.
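A minimal sketch of the new lookup and the check it feeds, using the
org_api_tokens/org_id names from this commit; the query shape and the
surrounding wiring are assumptions:

```go
package orgtoken

import (
	"context"
	"database/sql"
)

// OrgIDByTokenID reads the org a token is anchored to. It returns ("", nil)
// for pre-migration tokens whose org_id is NULL, so callers deny by default.
func OrgIDByTokenID(ctx context.Context, db *sql.DB, tokenID string) (string, error) {
	var orgID sql.NullString
	if err := db.QueryRowContext(ctx,
		`SELECT org_id FROM org_api_tokens WHERE id = $1`, tokenID,
	).Scan(&orgID); err != nil {
		return "", err
	}
	if !orgID.Valid {
		return "", nil // unanchored pre-migration token
	}
	return orgID.String, nil
}
```

On the caller side, requireCallerOwnsOrg treats a lookup error, an empty
result, or callerOrg != targetOrgID as a 403, which is what the unanchored,
cross-org, and DB-error denial regression tests exercise.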
🤖 Generated with [Claude Code](https://claude.ai/claude-code)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(security): replace err.Error() leaks with prod-safe messages (#1206)
- workspace_provision.go: provisionWorkspace, provisionWorkspaceCP —
replaced 7 err.Error() calls with "provisioning failed" in both
Broadcast payloads and last_sample_error DB column. Full error
preserved in server-side log.Printf.
- plugins_install_pipeline.go: resolveAndStage — replaced 5 err.Error()
calls with generic messages:
"invalid plugin source"
"plugin source not supported"
"invalid plugin name"
"staged plugin exceeds size limit"
"plugin manifest integrity check failed"
Risk mitigated: DB errors (pq: connection refused, pq: deadlock),
OS errors, and internal paths no longer leak in HTTP JSON responses
or WebSocket broadcasts.
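A minimal sketch of the pattern, using a hypothetical helper and a plain HTTP
error; the repo's Broadcast payloads and JSON error responses follow the same
split between the client-facing message and the server-side log:

```go
package provision

import (
	"log"
	"net/http"
)

// failPublic logs the full error server-side and sends only a generic,
// prod-safe message to the client. Helper name and wiring are illustrative;
// the commit applies this inline at each call site.
func failPublic(w http.ResponseWriter, public string, err error) {
	log.Printf("%s: %v", public, err) // full detail (pq:/OS errors, paths) stays in logs
	http.Error(w, public, http.StatusInternalServerError)
}

// Before: http.Error(w, err.Error(), http.StatusInternalServerError)
// After:  failPublic(w, "provisioning failed", err)
```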
Added regression tests (workspace_provision_test.go):
- TestProvisionWorkspace_NoInternalErrorsInBroadcast
- TestProvisionWorkspaceCP_NoInternalErrorsInBroadcast
- TestResolveAndStage_NoInternalErrorsInHTTPErr
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(F1089): log panic-recovery UPDATE errors in scheduler
The panic defer blocks in tick() and fireSchedule() now capture
and log errors from the db.DB.ExecContext call that advances next_run_at
after a panic. Previously, a DB failure during panic recovery was
silent — the log line for the panic itself appeared but any subsequent
UPDATE failure was invisible, risking unnoticed scheduler drift.
context.Background() was already used (F1089 comment in place); this
commit adds the missing error capture + log.Printf on exec failure.
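Roughly what the defer looks like after this commit; identifiers (the
schedules table, computeNextRun, the 5s timeout) are illustrative rather
than the repo's exact code:

```go
package scheduler

import (
	"context"
	"database/sql"
	"log"
	"time"
)

// fireWithRecovery sketches the panic-recovery path in tick()/fireSchedule().
func fireWithRecovery(db *sql.DB, scheduleID string,
	computeNextRun func(time.Time) time.Time, fire func()) {
	defer func() {
		r := recover()
		if r == nil {
			return
		}
		log.Printf("schedule %s: fire panicked: %v", scheduleID, r)
		// Detached ctx (F1089): the outer ctx may already be cancelled here.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		if _, err := db.ExecContext(ctx,
			`UPDATE schedules SET next_run_at = $1 WHERE id = $2`,
			computeNextRun(time.Now()), scheduleID,
		); err != nil {
			// New in this commit: the exec error is logged instead of dropped.
			log.Printf("schedule %s: panic-recovery next_run_at update failed: %v",
				scheduleID, err)
		}
	}()
	fire()
}
```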
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Molecule AI Dev Lead <dev-lead@agents.moleculesai.app>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
F1089: PR #1032's panic-recovery defers used the outer `ctx` passed into
fireSchedule/tick. If that ctx was cancelled during the panic window
(HTTP timeout, graceful shutdown), ExecContext returned early and the
next_run_at UPDATE was silently skipped — leaving the schedule stuck.
Fix: both panic defers now call ExecContext(context.Background()) so the
recovery UPDATE is independent of the outer ctx's lifecycle.
Refs: #1201 (F1089, security audit 2026-04-21)
Co-authored-by: Molecule AI CP-BE <cp-be@agents.moleculesai.app>
- TestRecordSkipped_AdvancesNextRunAt: call recordSkipped directly instead
of going through fireSchedule, which now has a 2-min deferral loop (#969)
that makes sqlmock-based end-to-end testing impractical.
- TestFireSchedule_NormalSuccess_AdvancesNextRunAt: add missing expectation
for the consecutive_empty_runs reset query (#795) that fires on non-empty
successful responses.
- TestFireSchedule_ComputeNextRunError: same consecutive_empty_runs fix.
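The added expectation looks roughly like this; the query text, args, and test
name are assumptions, and the UPDATE is issued directly so the sketch runs on
its own (in the real tests it happens inside fireSchedule):

```go
package scheduler_test

import (
	"testing"

	sqlmock "github.com/DATA-DOG/go-sqlmock"
)

func TestConsecutiveEmptyRunsReset_ExpectationSketch(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	// The #795 reset that fires on a non-empty successful response.
	mock.ExpectExec(`UPDATE schedules SET consecutive_empty_runs = 0`).
		WithArgs("sched-1").
		WillReturnResult(sqlmock.NewResult(0, 1))

	if _, err := db.Exec(
		`UPDATE schedules SET consecutive_empty_runs = 0 WHERE id = $1`, "sched-1",
	); err != nil {
		t.Fatal(err)
	}
	if err := mock.ExpectationsWereMet(); err != nil {
		t.Fatalf("unmet expectations: %v", err)
	}
}
```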
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When fireSchedule panics before reaching the next_run_at UPDATE,
the deferred recover catches the panic but never advances next_run_at,
leaving it stuck in the past forever. The schedule then fires every
tick (30s) in an infinite retry loop.
Add next_run_at advancement to both panic recovery defers (the
per-goroutine one in tick() and the inner one in fireSchedule()) so
the schedule always moves forward regardless of how the fire exits.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CP-QA approved. Panic recovery in fireSchedule now advances next_run_at via ComputeNextRun + ExecContext, preventing a panicking cron from indefinitely starving all other schedules. 3 new tests: TestPanicRecovery_AdvancesNextRunAt, TestFireSchedule_NormalSuccess, TestRecordSkipped_AdvancesNextRunAt. Fixes #1029.
Previously, the scheduler skipped cron fires entirely when a workspace
had active_tasks > 0 (#115). This caused permanent cron misses for
workspaces kept perpetually busy by the 5-min Orchestrator pulse — work
crons (pick-up-work, PR review) were skipped every fire because the
agent was always processing a delegation.
Measured impact on Dev Lead: 17 context-deadline-exceeded timeouts in
2 hours, ~30% of inter-agent messages silently dropped.
Fix: when the workspace is busy, poll every 10s for up to 2 minutes
waiting for it to go idle. If it goes idle within the window, fire
normally. If it is still busy after 2 min, fall back to the original
skip behavior. (A sketch of the wait loop follows the list below.)
This is a minimal, safe change:
- No new goroutines or channels
- Same fire path once idle
- Bounded wait (2 min max, won't block the scheduler pool)
- Falls back to skip if workspace never becomes idle
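A minimal sketch of the bounded wait, assuming a hypothetical isBusy closure
standing in for the existing active_tasks check; the intervals come from the
description above, everything else is illustrative:

```go
package scheduler

import (
	"context"
	"time"
)

// waitForIdle polls every 10s for up to 2 minutes. It returns true once the
// workspace is idle (fire normally) and false if it never goes idle or the
// ctx is cancelled (fall back to the original #115 skip behavior).
func waitForIdle(ctx context.Context, isBusy func() bool) bool {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		if !isBusy() {
			return true
		}
		if time.Now().After(deadline) || ctx.Err() != nil {
			return false
		}
		time.Sleep(10 * time.Second)
	}
}
```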
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The phantom-producer detector (#795) was doing UPDATE + SELECT in two
roundtrips: first incrementing consecutive_empty_runs, then re-reading
it to check the stale threshold. Switch to UPDATE ... RETURNING so the
post-increment value comes back in one query.
Called once per schedule per cron tick. At 100 tenants × dozens of
schedules per tenant, the halved DB traffic on the empty-response
path is measurable, not just cosmetic.
Also now properly logs if the bump itself fails (previously the
ExecContext error was silently swallowed and the SELECT still ran,
which would confuse debugging).
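A sketch of the single-roundtrip version, with illustrative table/column
names; staleThreshold stands in for whatever limit #795 uses:

```go
package scheduler

import (
	"context"
	"database/sql"
	"log"
)

// bumpEmptyRuns increments consecutive_empty_runs and reads the new value in
// one UPDATE ... RETURNING roundtrip, logging (rather than swallowing) a
// failed bump.
func bumpEmptyRuns(ctx context.Context, db *sql.DB, scheduleID string, staleThreshold int) bool {
	var runs int
	err := db.QueryRowContext(ctx,
		`UPDATE schedules
		    SET consecutive_empty_runs = consecutive_empty_runs + 1
		  WHERE id = $1
		  RETURNING consecutive_empty_runs`,
		scheduleID,
	).Scan(&runs)
	if err != nil {
		log.Printf("schedule %s: consecutive_empty_runs bump failed: %v", scheduleID, err)
		return false
	}
	return runs >= staleThreshold
}
```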
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>