Keywords: associative memory, agent memory, interference, pattern separation, Hopfield networks, retrieval-augmented generation, lifelong learning
TL;DR: We bridge associative memory theory and LLM agent memory, showing that interference under domain shift concentrates in a minority of items and can be reduced through training-free AM-inspired interventions.
Abstract: Memory-augmented LLM agents accumulate experience across lifelong deployment, yet under domain shift previously helpful memories can interfere with current reasoning. We hypothesize that such interference follows dynamics predicted by associative memory (AM) theory: concentration in a minority of items, cross-domain competition at retrieval boundaries, and reducibility through pattern separation. We introduce Associative Interference-aware Memory (AIM), a training-free mechanism combining sparse encoding, per-item interference tracking, and adaptive gating. Through controlled streaming experiments under domain shift, we find that interference concentrates as AM theory predicts, that sparse encoding substantially reduces cross-domain interference while preserving task accuracy, and that AIM achieves the highest accuracy under memory load among all tested systems. Follow-up experiments on a different model reveal that the interference ledger, rather than sparse encoding, is the primary operative component, and that the base model's tolerance for extraneous context is itself domain-dependent. These findings establish an empirical bridge between AM theory and practical agent memory, and suggest that principled interference management deserves the same attention as retrieval optimization in deployed agents.
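To make the three components named in the abstract concrete, here is a minimal sketch of what an interference-aware memory store could look like. This is not the paper's implementation; all class and method names (`AIMMemory`, `sparsify`, `record_interference`, `retrieve`) and the specific scoring rule are illustrative assumptions based only on the abstract's description of sparse encoding, a per-item interference ledger, and adaptive gating.

```python
import numpy as np

class AIMMemory:
    """Hypothetical sketch of an interference-aware memory store.

    Assumed design (not from the paper): sparse top-k keys for
    pattern separation, a per-item interference ledger, and a
    retrieval gate that discounts items by accumulated interference.
    """

    def __init__(self, dim: int, k: int = 32, gate_threshold: float = 0.5):
        self.dim = dim
        self.k = k                            # active dimensions per sparse code
        self.gate_threshold = gate_threshold  # minimum discounted score to return
        self.keys: list[np.ndarray] = []      # sparse memory keys
        self.values: list[str] = []           # stored experiences
        self.interference: list[float] = []   # per-item interference ledger

    def sparsify(self, emb: np.ndarray) -> np.ndarray:
        """Pattern separation: keep only the k largest-magnitude components."""
        sparse = np.zeros_like(emb)
        idx = np.argsort(np.abs(emb))[-self.k:]
        sparse[idx] = emb[idx]
        return sparse / (np.linalg.norm(sparse) + 1e-8)

    def write(self, emb: np.ndarray, value: str) -> None:
        """Store an experience under its sparse key with a zeroed ledger entry."""
        self.keys.append(self.sparsify(emb))
        self.values.append(value)
        self.interference.append(0.0)

    def record_interference(self, item: int, penalty: float = 1.0) -> None:
        """Ledger update: called when a retrieved item hurt downstream reasoning."""
        self.interference[item] += penalty

    def retrieve(self, query: np.ndarray, top_n: int = 3) -> list[str]:
        """Adaptive gating: cosine-like similarity discounted by the ledger."""
        q = self.sparsify(query)
        scores = [
            float(q @ key) / (1.0 + self.interference[i])
            for i, key in enumerate(self.keys)
        ]
        order = np.argsort(scores)[::-1][:top_n]
        # Gate out items whose discounted score falls below the threshold,
        # so high-interference memories stop surfacing after domain shift.
        return [self.values[i] for i in order if scores[i] >= self.gate_threshold]
```

Under this reading, the mechanism stays training-free: the ledger is updated from observed retrieval outcomes rather than gradient updates, and gating operates purely at query time.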
Submission Number: 55