Pre-fix, _peer_metadata was an unbounded dict: a workspace receiving
metadata from N distinct peers across its lifetime accumulated entries
indefinitely (~100 bytes × N). Not crash-class at typical scale (10K
peers ≈ 1 MB), but the growth had no ceiling. The existing TTL-at-read
pattern bounded staleness but did nothing for memory.
Fix: a hand-rolled LRU on top of OrderedDict (sketched below). No new
dependency.
- _PEER_METADATA_MAXSIZE = 1024 (issue's recommended bound)
- _peer_metadata_get(canon) — read + LRU touch (move to MRU)
- _peer_metadata_set(canon, value) — write + evict-if-over-maxsize
- All production reads/writes route through the helpers
- _peer_metadata_lock guards the OrderedDict ops so concurrent
background-enrichment workers (#2484) don't race the LRU
invariant
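A minimal sketch of the helpers' shape, assuming a module-level
OrderedDict; the value types and the miss return value here are
illustrative, and the existing TTL check stays at the call sites:

    import threading
    from collections import OrderedDict

    _PEER_METADATA_MAXSIZE = 1024
    _peer_metadata = OrderedDict()   # canon peer_id -> metadata dict
    _peer_metadata_lock = threading.Lock()

    def _peer_metadata_get(canon):
        # Read + LRU touch: a hit moves to the MRU end so eviction
        # always removes the least recently read entry.
        with _peer_metadata_lock:
            if canon not in _peer_metadata:
                return None
            _peer_metadata.move_to_end(canon)
            return _peer_metadata[canon]

    def _peer_metadata_set(canon, value):
        # Write + evict-if-over-maxsize: the new/refreshed entry goes
        # to the MRU end; anything over the cap is popped from the
        # LRU end immediately, whether or not it is ever read.
        with _peer_metadata_lock:
            _peer_metadata[canon] = value
            _peer_metadata.move_to_end(canon)
            while len(_peer_metadata) > _PEER_METADATA_MAXSIZE:
                _peer_metadata.popitem(last=False)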
Why hand-rolled vs cachetools:
- No new dependency. workspace/ has no cache libraries today; adding
  one for ~30 lines is negative leverage.
- The TTL is enforced at the call site (existing pattern); only
the size cap + LRU is new. cachetools.TTLCache fuses the two,
which would force a refactor of every caller's TTL check.
- The size cap + lock are simple enough that a future swap to
  cachetools is mechanical if needs evolve.
Why maxsize matters more than TTL (issue's framing):
A runaway poller that touches a new peer_id on every push would still
grow the dict without bound inside a single TTL window, because TTL
eviction only fires at read time. The size cap fires immediately on
insert, regardless of read pattern.
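To make that concrete, a hypothetical worst-case loop against the
sketch above (no reads at all) still stays bounded:

    # Write-only traffic never hits the read-time TTL path, but the
    # insert-time cap keeps the dict at <= 1024 entries throughout.
    for i in range(10_000):
        _peer_metadata_set(f"peer-{i}", {"seen": i})
    assert len(_peer_metadata) == _PEER_METADATA_MAXSIZE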
Three new tests (the eviction case is sketched below):
- test_peer_metadata_set_evicts_lru_when_at_maxsize
- test_peer_metadata_get_promotes_to_lru_head
- test_peer_metadata_set_replaces_existing_entry_in_place
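Rough shape of the eviction test, ignoring fixture/reset details and
assuming the helper sketch above:

    def test_peer_metadata_set_evicts_lru_when_at_maxsize():
        # Fill to the cap, then insert one more: the oldest untouched
        # entry is evicted, the newest is present, size stays capped.
        for i in range(_PEER_METADATA_MAXSIZE):
            _peer_metadata_set(f"peer-{i}", {"n": i})
        _peer_metadata_set("peer-new", {"n": -1})
        assert _peer_metadata_get("peer-0") is None
        assert _peer_metadata_get("peer-new") == {"n": -1}
        assert len(_peer_metadata) == _PEER_METADATA_MAXSIZE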
1742 passed / 0 failed locally (78 new + 1664 existing).
Closes #2482.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>