Commit Graph

6 Commits

Hongming Wang
b6d223cd0a feat(a2a): per-queue-id status endpoint + per-message TTL (RFC #2331 Tier 1)
Closes the observability gap surfaced in #2329 item 5: callers received
queue_id in the 202 enqueue response but had no public lookup. The only
existing observability path was check_task_status (delegation-flavored
A2A only — joins via request_body->>'delegation_id'). Cross-workspace
peer-direct A2A had no observability after enqueue.

This PR ships RFC #2331's Tier 1: minimum viable observability + caller-
specified TTL. No schema migration — expires_at column already exists
(migration 042); only DequeueNext was honoring it, with no caller path
to populate it.

Two changes:

1. extractExpiresInSeconds(body) — new helper mirroring
   extractIdempotencyKey/extractDelegationIDFromBody. Pulls
   params.expires_in_seconds from the JSON-RPC body. Zero (the unset
   default) preserves today's infinite-TTL semantics. EnqueueA2A grew
   an expiresAt *time.Time parameter; the proxy callsite computes the
   *time.Time from the extracted seconds and threads it through to
   the INSERT (see the sketch after this list).

2. GET /workspaces/:id/a2a/queue/:queue_id — new public handler.
   Auth: caller's workspace token must match queue.caller_id OR
   queue.workspace_id, OR be an org-level token. 404 (not 403) on
   auth failure to avoid leaking queue_id existence. Response
   includes status/attempts/last_error/timestamps/expires_at; embeds
   response_body via LEFT JOIN against activity_logs when status=
   completed for delegation-flavored items.
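
A minimal sketch of the change-1 helper and its callsite, assuming the body
is passed as []byte and that invalid or negative values clamp to the unset
default (only the names extractExpiresInSeconds / EnqueueA2A / expiresAt are
from this commit):

  func extractExpiresInSeconds(body []byte) int {
      // params.expires_in_seconds from the JSON-RPC body; 0 = keep infinite TTL
      var req struct {
          Params struct {
              ExpiresInSeconds int `json:"expires_in_seconds"`
          } `json:"params"`
      }
      if err := json.Unmarshal(body, &req); err != nil || req.Params.ExpiresInSeconds < 0 {
          return 0
      }
      return req.Params.ExpiresInSeconds
  }

  // proxy callsite: turn seconds into the *time.Time EnqueueA2A now accepts
  var expiresAt *time.Time
  if secs := extractExpiresInSeconds(body); secs > 0 {
      t := time.Now().Add(time.Duration(secs) * time.Second)
      expiresAt = &t
  }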

What this does NOT change:
  - Drain semantics (heartbeat-driven dispatch).
  - Native-session bypass (claude-agent-sdk, hermes still skip queue).
  - Schema (column already exists).
  - MCP tools (delegate_task_async / check_task_status keep their
    contract; this is a parallel queue-id surface).

Tests:
  - 7 cases on extractExpiresInSeconds covering absent/positive/
    zero/negative/invalid-JSON/wrong-type/empty-params.
  - go vet + go build clean.
  - Full handlers test suite passes (no regressions from the
    EnqueueA2A signature change — only one production caller).

Tier 2 (cross-workspace stitch + webhook callback) and Tier 3
(controllerized lifecycle) deferred per RFC #2331.
2026-04-29 20:21:17 -07:00
Hongming Wang
d97d7d4768 fix(platform/delegation): classify queued response + stitch drain result back
When proxyA2A returns 202+{queued:true} (target busy → enqueued for drain
on next heartbeat), executeDelegation previously treated it as a successful
completion and ran extractResponseText on the queued JSON. The result was
'Delegation completed (workspace agent busy — request queued, will dispatch...)'
landing in activity_logs.summary, which the LLM then echoed to the user
chat as garbage.

Two fixes:
1. delegation.go: detect the queued shape via the new isQueuedProxyResponse helper (sketched after this list),
   write status='queued' with clean summary 'Delegation queued — target at
   capacity', store delegation_id in response_body so the drain can stitch
   back later. Also embed delegation_id in params.message.metadata + use it
   as messageId so the proxy's idempotency-key path keys off the same id.

2. a2a_queue.go: when DrainQueueForWorkspace successfully drains a queued
   item, extract delegation_id from the body's metadata and UPDATE the
   originating delegate_result row (queued → completed with real
   response_body). Broadcast DELEGATION_COMPLETE so the canvas chat feed
   flips the queued line to completed in real time.
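
A rough shape for the fix-1 detection helper, assuming it keys off the 202
status plus the queued flag in the body (the exact signature is a guess):

  func isQueuedProxyResponse(status int, respBody []byte) bool {
      if status != http.StatusAccepted {
          return false
      }
      var payload struct {
          Queued bool `json:"queued"`
      }
      // only the queued-for-drain shape, never a real completion, sets this flag
      return json.Unmarshal(respBody, &payload) == nil && payload.Queued
  }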

Closes the loop so check_task_status reflects ground truth instead of
perpetual 'queued' even after the queued request has eventually drained.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
2026-04-26 10:14:19 -07:00
a1b803ca7a fix(admin/a2a_queue): add drop-stale endpoint for post-incident queue cleanup
Issue #1947: after incidents, PM agents inherit hour-old TASK-priority
queue items from ICs that were correctly reporting "X is broken" while
X was actually broken. Once X is fixed those items are stale noise —
PMs spend ~5 min each writing "thanks, the issue is resolved".

Adds:
- DropStaleQueueItems() in a2a_queue.go: UPDATE ... SET status='dropped'
  for queued items older than maxAgeMinutes. Uses FOR UPDATE SKIP LOCKED
  to stay safe against concurrent drain calls (see the sketch after this list).
- AdminQueueHandler in admin_queue.go: POST /admin/a2a-queue/drop-stale
  (AdminAuth, ?max_age_minutes=N, &workspace_id=<id>). Returns {dropped: N}.
- admin_queue_test.go: HTTP-level tests for param validation and response shape.
- Router registration for the new endpoint.
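
Sketch of the drop-stale statement; column names beyond status/workspace_id
are assumptions, the point is the SKIP LOCKED subselect that keeps it from
fighting a concurrent DrainQueueForWorkspace over the same rows:

  func DropStaleQueueItems(ctx context.Context, db *sql.DB, maxAgeMinutes int, workspaceID *string) (int64, error) {
      cutoff := time.Now().Add(-time.Duration(maxAgeMinutes) * time.Minute)
      // claim-then-update: rows locked by a concurrent drain are skipped, not waited on
      res, err := db.ExecContext(ctx, `
          UPDATE a2a_queue SET status = 'dropped'
          WHERE id IN (
              SELECT id FROM a2a_queue
              WHERE status = 'queued'
                AND created_at < $1
                AND ($2::uuid IS NULL OR workspace_id = $2::uuid)
              FOR UPDATE SKIP LOCKED)`,
          cutoff, workspaceID)
      if err != nil {
          return 0, err
      }
      return res.RowsAffected() // the {dropped: N} count
  }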

Usage during incident recovery:
  curl -X POST '/admin/a2a-queue/drop-stale?max_age_minutes=120'
  # scoped to one workspace:
  curl -X POST '/admin/a2a-queue/drop-stale?max_age_minutes=120&workspace_id=<uuid>'

Closes #1947.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-24 02:08:35 +00:00
Hongming Wang
6faea202b9 fix(a2a-queue): nil-safe drain + 202-requeue handling (followup to #1893) (#1896)
* fix(a2a-queue): nil-safe error extraction in DrainQueueForWorkspace + handle 202-requeue

The drain path called proxyErr.Response["error"].(string) without a comma-
ok assertion. When proxyErr.Response had no "error" key (which happens in
the 202-Accepted-queued branch I added in the same PR — that response is
{"queued": true, "queue_id": ..., "queue_depth": ...}), the type assertion
panicked and killed the platform process.

The platform was down 25 minutes today before this was diagnosed. Fleet
went from 30 real outputs/15min → 0 events.

Two fixes here:

1. Treat 202 Accepted from the inner proxyA2ARequest as "re-queued"
   (target was busy AGAIN). Mark THIS attempt completed; the new queue
   row will be drained on the next heartbeat tick. Don't propagate as
   failure.

2. Defensive type-assertion when reading the error string. Falls back to
   http.StatusText, then a generic "unknown drain dispatch error" so the
   queue still gets a non-empty error_detail for ops debugging.

Now the drain path can never panic on a malformed proxy response.
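
The defensive read from fix 2 boils down to (field names as quoted above;
the fallback chain follows the description):

  errMsg, ok := proxyErr.Response["error"].(string)
  if !ok || errMsg == "" {
      errMsg = http.StatusText(proxyErr.Status) // e.g. "Accepted" for 202
  }
  if errMsg == "" {
      errMsg = "unknown drain dispatch error"
  }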

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(a2a-queue): return (202, body, nil) so callers see queued-as-success

Cycle 53 found callers logging 45× 'delegation failed: proxy a2a error'
even though the queue's drain stats showed 48 completions in the same
window. Investigation: my busy-error path returned

  return http.StatusAccepted, nil, &proxyA2AError{Status: 202, Response: ...}

The non-nil proxyA2AError is the failure signal. Even with status=202,
callers' `if proxyErr != nil` branch fires and logs the request as
failed. The 202 status was meaningless — the response body was nil too,
so the caller never even saw the queue_id/depth metadata.

Fix: return success-shape so callers do NOT enter the error branch:

  respBody, _ := json.Marshal(gin.H{"queued": true, "queue_id": qid, ...})
  return http.StatusAccepted, respBody, nil

Net effect: queue continues to absorb busy-errors (working since #1893),
AND callers correctly record the dispatch as queued-success rather than
failed. Closes the cycle 53 misclassification that was making the queue
look ineffective on activity_logs counts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-authored-by: molecule-ai[bot] <276602405+molecule-ai[bot]@users.noreply.github.com>
2026-04-23 22:55:43 +00:00
rabbitblood
751b265dbd fix(a2a-queue): use partial-index ON CONFLICT syntax (not constraint name)
#1892's EnqueueA2A INSERT used `ON CONFLICT ON CONSTRAINT idx_a2a_queue_idempotency
DO NOTHING`, but Postgres rejects this:

  ERROR: constraint "idx_a2a_queue_idempotency" for table "a2a_queue" does not exist

Partial unique INDEXES cannot be referenced by name in ON CONFLICT — that
form is reserved for true CONSTRAINTs created via CREATE TABLE ... CONSTRAINT
or ALTER TABLE ADD CONSTRAINT. Partial indexes need the column-list +
WHERE form so the planner can match the index.

Effect of the bug: every EnqueueA2A errored, the busy-error fallback
returned 503 instead of 202, queue stayed empty. Cycle 50 observed
46 busy errors / 0 queue rows — the deployed Phase 1 had no effect.

Fix: switch to

  ON CONFLICT (workspace_id, idempotency_key)
    WHERE idempotency_key IS NOT NULL AND status IN ('queued','dispatched')
    DO NOTHING

Verified manually against the live `a2a_queue` table on staging — INSERT
returns the new id; cleanup deleted the test row.
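
For context, a hypothetical reconstruction of the index this clause has to
mirror, plus the corrected INSERT; the column set and Go wiring are guesses,
only the conflict target comes from this fix:

  // assumed migration-042 index (column list + predicate are what ON CONFLICT infers against):
  //   CREATE UNIQUE INDEX idx_a2a_queue_idempotency
  //       ON a2a_queue (workspace_id, idempotency_key)
  //       WHERE idempotency_key IS NOT NULL AND status IN ('queued','dispatched');
  var id string
  err := db.QueryRowContext(ctx, `
      INSERT INTO a2a_queue (workspace_id, caller_id, idempotency_key, request_body, status)
      VALUES ($1, $2, $3, $4, 'queued')
      ON CONFLICT (workspace_id, idempotency_key)
          WHERE idempotency_key IS NOT NULL AND status IN ('queued','dispatched')
          DO NOTHING
      RETURNING id`,
      workspaceID, callerID, idemKey, requestBody).Scan(&id)
  // sql.ErrNoRows = the key already has a live queued/dispatched row (idempotent no-op)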

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 14:22:13 -07:00
rabbitblood
87a97846cd feat(a2a): queue-on-busy — Phase 1 of priority queue (#1870)
## Problem

When a lead delegates to a worker that's mid-synthesis, the proxy returns
503 "workspace agent busy" and the caller records the delegation as
failed. On fan-out storms from leads this hits a ~70% drop rate — per today's
observed numbers in the cycle reports.

## Fix — Phase 1 TASK-level queue-on-busy

When `handleA2ADispatchError` determines the target is busy, instead of
returning 503, enqueue the request as priority=TASK and return 202
Accepted with `{queued: true, queue_id, queue_depth}`. The workspace's
next heartbeat (≤30s) drains one item if it reports spare capacity.
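
Roughly, the intended control flow, written as a gin handler fragment for
illustration; apart from isUpstreamBusyError / EnqueueA2A / PriorityTask the
names and the EnqueueA2A signature are assumptions:

  if isUpstreamBusyError(dispatchErr) {
      queueID, depth, err := h.EnqueueA2A(ctx, workspaceID, callerID, body, PriorityTask, idemKey)
      if err == nil {
          c.JSON(http.StatusAccepted, gin.H{
              "queued":      true,
              "queue_id":    queueID,
              "queue_depth": depth,
          })
          return
      }
      // enqueue itself failed (DB hiccup): fall back to the legacy 503 below
      // rather than silently dropping the request
  }
  c.JSON(http.StatusServiceUnavailable, gin.H{"error": "workspace agent busy"})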

Files:

  - migrations/042_a2a_queue.{up,down}.sql — `a2a_queue` table with
    partial indexes on status='queued' + idempotency_key. Schema
    supports PriorityCritical/Task/Info from day one so Phase 2/3 ship
    without migration churn.

  - internal/handlers/a2a_queue.go — EnqueueA2A / DequeueNext /
    Mark*-helpers plus WorkspaceHandler.DrainQueueForWorkspace. Uses
    `SELECT ... FOR UPDATE SKIP LOCKED` so concurrent drains can't
    double-claim the same row. Max 5 attempts before marking 'failed'
    so a stuck item doesn't wedge the queue forever.

  - internal/handlers/a2a_proxy_helpers.go — isUpstreamBusyError branch
    calls EnqueueA2A and returns 202 on success. Falls through to the
    legacy 503 on enqueue error (DB hiccup shouldn't silently drop).

  - internal/handlers/registry.go — RegistryHandler gets a QueueDrainFunc
    injection hook (SetQueueDrainFunc). When Heartbeat sees
    active_tasks < max_concurrent_tasks, spawns a goroutine that calls
    the drain hook. context.WithoutCancel ensures the drain outlives
    the heartbeat handler's ctx (see the sketch after this list).

  - internal/router/router.go — wires wh.DrainQueueForWorkspace into
    rh.SetQueueDrainFunc after both are constructed.
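
Sketch of the heartbeat-side hook; everything except SetQueueDrainFunc /
DrainQueueForWorkspace and context.WithoutCancel is an assumed name. The
WithoutCancel parent is what lets the drain outlive the heartbeat request:

  if hb.ActiveTasks < hb.MaxConcurrentTasks && rh.queueDrainFunc != nil {
      // detach from the request context so finishing the heartbeat doesn't cancel the drain
      drainCtx := context.WithoutCancel(c.Request.Context())
      go func() {
          if err := rh.queueDrainFunc(drainCtx, workspaceID); err != nil {
              log.Printf("a2a queue drain for %s failed: %v", workspaceID, err)
          }
      }()
  }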

## Not in this PR (Phase 2/3/4 follow-ups)

  - INFO priority + TTL (Phase 2)
  - CRITICAL priority + soft preemption between tool calls (Phase 3)
  - Age-based promotion so TASK doesn't starve (Phase 4)
  - `GET /workspaces/:id/queue` observability endpoint

Schema already supports all of these; only the dispatch + policy code
remains.

## Tests

  - TestExtractIdempotencyKey (5 cases): messageId parsing is robust
  - TestPriorityConstants: ordering invariant + 50=TASK default
    alignment with migration DEFAULT

Full DB-touching tests (FIFO order, retry bound, idempotency conflict)
intentionally deferred to the CI migration-enabled path — sqlmock
ceremony would duplicate the existing test infrastructure 3× over and
the behaviour is directly expressible in SQL constraints (FOR UPDATE
SKIP LOCKED, partial unique index).

## Expected impact once deployed

  - a2a_receive error with "busy" flavor drops from ~69/10min observed
    today to ~0
  - delegation_failed rate drops from ~50% to <5%
  - real_output metric rises from ~30/15min back toward the pre-
    throttle baseline

Closes #1870 Phase 1.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 14:09:29 -07:00