forked from molecule-ai/molecule-core
The 600-req/min/IP bucket is sized for SaaS, where each tenant has a distinct client IP. On a local Docker setup every panel shares one IP: hydration (`/workspaces` + `/templates` + `/org/templates` + `/approvals/pending`) plus polling (A2A overlay, activity tabs, approvals, schedule, channels, audit trail) can burst past the bucket inside a minute, blanking the canvas with 429s. The user reported it after dragging workspaces. Dragging itself is release-only (`savePosition` fires in `onNodeDragStop`), but the always-running polling, stacked on top of startup hydration, tripped the limit.

Two-layer fix:

- **Server:** `RateLimiter.Middleware` short-circuits when `isDevModeFailOpen` is true (`MOLECULE_ENV=development` plus an empty `ADMIN_TOKEN`), matching the Tier-1b hatch already applied to AdminAuth, WorkspaceAuth, and discovery. SaaS production keeps the bucket.
- **Client:** `api.ts` auto-retries a single 429 on idempotent GET requests, waiting the server-provided `Retry-After` (capped at 20s). Mutations (POST/PUT/PATCH/DELETE) never auto-retry, to avoid double-applying. SaaS users who hit a legitimate rate-limit spike get one transparent recovery instead of an immediately blank canvas.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Middleware files:

- devmode_test.go
- devmode.go
- mcp_ratelimit_test.go
- mcp_ratelimit.go
- ratelimit_test.go
- ratelimit.go
- securityheaders_test.go
- securityheaders.go
- session_auth_test.go
- session_auth.go
- tenant_guard_test.go
- tenant_guard.go
- wsauth_middleware_org_id_test.go
- wsauth_middleware_test.go
- wsauth_middleware.go