Epistemic Memory Failures in Long-Form Narrative Agents: A Deployment Study
Keywords: LLM agents, memory architecture, narrative generation, epistemic state, known-information forgetting, retrieval-augmented generation, long-form generation
TL;DR: We identify "known-information forgetting" as a distinct memory failure in narrative agents and propose Key Facts Injection, reducing incidents by 73%.
Abstract: We report findings from deploying an LLM-based narrative agent across 90 chapters of novel generation (180,000+ tokens over 3 months). We identify a previously under-discussed failure mode: known-information forgetting, where characters redundantly ask about or rediscover facts they already learned in earlier chapters. This failure is distinct from hallucination (the recalled facts are correct) and from world-state inconsistency (the world itself remains consistent); rather, it reflects a mismatch between world state and character epistemic state. We trace the root cause to naive recency-based context injection, which systematically excludes key facts established mid-chapter. We propose Key Facts Injection: extracting semantically important facts from episodic memory and injecting them into the generation context with explicit "already knows" markers. This simple intervention reduced known-information forgetting by 73% in our deployment. We share our memory architecture design (episodic/semantic/working memory) and lessons learned, hoping to inform future agent memory systems where tracking what characters know matters as much as tracking what happened.
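The "already knows" marker mechanism described in the abstract can be illustrated with a minimal sketch. All names here (`Fact`, `EpisodicMemory`, `build_context`) are hypothetical illustrations, not the authors' implementation, and the sketch omits the semantic-importance extraction step the paper describes, injecting all stored facts instead:

```python
from dataclasses import dataclass, field

@dataclass
class Fact:
    """A fact a character has learned, with the chapter it was learned in."""
    character: str
    statement: str
    chapter: int

@dataclass
class EpisodicMemory:
    """Toy per-character fact store standing in for the paper's episodic memory."""
    facts: list = field(default_factory=list)

    def add(self, fact: Fact) -> None:
        self.facts.append(fact)

    def key_facts_for(self, character: str) -> list:
        return [f for f in self.facts if f.character == character]

def build_context(memory: EpisodicMemory, character: str, recent_text: str) -> str:
    """Prepend explicit 'already knows' markers ahead of the recency window,
    so mid-chapter facts survive even when the raw text that established
    them falls outside the context."""
    markers = [
        f"[{character} already knows: {f.statement} (ch. {f.chapter})]"
        for f in memory.key_facts_for(character)
    ]
    return "\n".join(markers + [recent_text])

# Example: a fact from chapter 12 is carried into the chapter 30 context.
mem = EpisodicMemory()
mem.add(Fact("Mara", "the letter was forged", 12))
ctx = build_context(mem, "Mara", "Chapter 30 draft text...")
```

The point of the sketch is the contract, not the storage: facts are injected as explicit epistemic-state annotations rather than relying on the raw chapter text landing inside a recency-based window.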
Submission Number: 33