Adversarial or buggy agents can report INT64_MAX token counts via A2A
responses. Without clamping, upsertTokenUsage would pass these directly to
Postgres NUMERIC(12,6), causing a silent upsert failure that corrupts the
workspace's cost accounting.
Fix: clamp input_tokens/output_tokens to [0, 10_000_000] before any
arithmetic or DB write. 10M tokens/call is well above any real LLM API
response; clamped values still produce valid cost rows.
Adds 4 regression tests:
- TestUpsertTokenUsage_615_CapsInt64Max — INT64_MAX → maxTokensPerCall
- TestUpsertTokenUsage_615_CapsNegative — negative → 0 (no DB call)
- TestUpsertTokenUsage_615_NormalValuesUnchanged — passthrough for normal counts
- TestUpsertTokenUsage_615_ExactlyAtCap — at-cap value accepted unchanged
Closes #615
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>