MIRROR: Complementary Encoding and Reconstructive Consolidation for Persistent State in LLM Systems

Published: 03 Mar 2026, Last Modified: 23 Mar 2026 · ICLR 2026 Workshop MemAgents · CC BY 4.0
Keywords: Large Language Models, Memory Consolidation, Complementary Learning Systems, Cognitive Architecture, Working Memory, AI Safety, Multi-turn Dialogue, Adaptive Forgetting, LLM Agents
TL;DR: A neuroscience-inspired memory architecture that regenerates rather than accumulates internal state achieves a 21% relative improvement in cross-turn state persistence across seven LLMs.
Abstract: LLM-based systems face a fundamental memory consolidation challenge: existing strategies either discard reasoning traces after each turn or accumulate them unboundedly, trading context preservation against error propagation. Complementary Learning Systems theory suggests a third approach: fast encoding of experience paired with slow reconstructive consolidation that regenerates understanding rather than accumulating traces. MIRROR implements this principle. An Inner Monologue Manager maintains parallel working memory threads (Goals, Reasoning, Memory) that rapidly encode turn-specific experience, while a Cognitive Controller consolidates these into a bounded first-person narrative fully regenerated each turn: O(1) reconstructive consolidation rather than O(n) accumulation. Evaluated on CuRaTe, a benchmark testing state persistence under attentional interference, MIRROR achieves 21% relative improvement across seven architectures. Ablation reveals that consolidation alone improves all seven models (+5–20%), while the integrated system outperforms either component alone with synergistic gains of 1–8%—directly validating the CLS prediction that fast encoding and slow consolidation serve complementary functions. Comparison with extended reasoning (+9.3% vs. +2.4%) demonstrates that how experience is consolidated, not merely encoded, determines downstream performance.
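The encode/consolidate cycle the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the class and method names (`MirrorMemory`, `encode`, `consolidate`), the thread labels, the character budget, and the `toy_summarize` stand-in for the LLM call are all assumptions; only the overall pattern (fast per-thread encoding, then full regeneration of a bounded narrative each turn) comes from the paper.

```python
# Sketch of MIRROR-style reconstructive consolidation (names assumed,
# not from the paper). Turn-specific traces are encoded into three
# parallel working-memory threads; each turn, a consolidation step
# regenerates one bounded first-person narrative from scratch (O(1)
# state) instead of appending to an ever-growing transcript (O(n)).

from dataclasses import dataclass, field

THREADS = ("goals", "reasoning", "memory")
NARRATIVE_BUDGET = 400  # max characters of consolidated state (assumed)


@dataclass
class MirrorMemory:
    threads: dict = field(default_factory=lambda: {t: [] for t in THREADS})
    narrative: str = ""  # bounded, fully regenerated each turn

    def encode(self, thread: str, note: str) -> None:
        """Fast encoding: record a turn-specific trace in one thread."""
        self.threads[thread].append(note)

    def consolidate(self, summarize) -> str:
        """Slow consolidation: regenerate the whole narrative.

        `summarize` stands in for an LLM call that compresses the raw
        threads plus the previous narrative into a fresh bounded summary.
        """
        raw = {t: list(notes) for t, notes in self.threads.items()}
        self.narrative = summarize(self.narrative, raw)[:NARRATIVE_BUDGET]
        for t in THREADS:  # traces are consumed, not accumulated
            self.threads[t].clear()
        return self.narrative


def toy_summarize(prev: str, raw: dict) -> str:
    """Toy stand-in for the LLM summarizer: keep the newest facts."""
    latest = "; ".join(n for notes in raw.values() for n in notes)
    return f"So far: {latest}" if latest else prev


mem = MirrorMemory()
mem.encode("goals", "user wants a nut-free recipe")
mem.encode("memory", "user's partner is allergic to peanuts")
state = mem.consolidate(toy_summarize)
```

Regardless of how many turns elapse, the carried-forward state is only `narrative` (capped at `NARRATIVE_BUDGET`), which is the O(1) property the abstract contrasts with unbounded accumulation.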
Submission Number: 40