Latent learning: episodic memory complements parametric learning by enabling flexible reuse of experiences

TMLR Paper7344 Authors

04 Feb 2026 (modified: 11 Feb 2026) · Under review for TMLR · CC BY 4.0
Abstract: When do machine learning systems fail to generalize, and what mechanisms could improve their generalization? Here, we draw inspiration from cognitive science to argue that one weakness of parametric machine learning systems is their failure to exhibit \emph{latent learning}---learning information that is not relevant to the task at hand, but that might be useful in a future task. Using controlled, synthetic benchmarks, we show how this perspective links failures ranging from the reversal curse in language modeling to new findings on agent-based navigation. We then highlight how cognitive science points to episodic memory as a potential part of the solution to these issues. Correspondingly, we show that a system with an oracle retrieval mechanism can use learning experiences more flexibly to generalize better across many of these challenges---thus motivating episodic memory as an important direction for research in AI. We also identify some of the essential components of effective retrieval use, including the importance of \emph{within-experience} in-context learning for acquiring the ability to use information \emph{across} retrieved experiences. In summary, our results illustrate one possible contributor to the relative data inefficiency of current machine learning systems compared to natural intelligence, and help to explain how retrieval methods might complement parametric learning to improve generalization. We close by discussing links between our work and findings in cognitive science and neuroscience---including a possible perspective on hippocampal contributions to generalization---and the broader implications.
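To make the abstract's central contrast concrete, the sketch below illustrates the general idea of complementing a parametric model with an episodic store and an oracle retrieval step, using a reversal-curse-style example. It is a minimal illustration under stated assumptions, not the authors' implementation: the names `EpisodicMemory`, `oracle_retrieve`, and `answer` are hypothetical, and the word-overlap "oracle" simply stands in for an idealized retriever that always surfaces the relevant past experience.

```python
# Illustrative sketch (hypothetical, not the paper's code): store experiences
# verbatim in an episodic memory, then reuse them in context at query time.

from dataclasses import dataclass, field


@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def store(self, experience: str) -> None:
        # Keep the full experience, including details that are irrelevant to
        # the current task (the "latent" information a parametric learner
        # would tend not to distill into its weights).
        self.episodes.append(experience)

    def oracle_retrieve(self, query: str) -> str:
        # Oracle retrieval: assume we can always recover the episode that
        # contains the needed information. A real system would use learned
        # or similarity-based retrieval instead of word overlap.
        for episode in self.episodes:
            if any(tok in episode for tok in query.split()):
                return episode
        return ""


def answer(query: str, memory: EpisodicMemory) -> str:
    # Retrieval-augmented inference: the retrieved episode is prepended to the
    # query so a (frozen) model could reuse it flexibly in context, rather
    # than relying only on what parametric training happened to encode.
    context = memory.oracle_retrieve(query)
    return f"{context}\n\nQuestion: {query}"  # pass this prompt to a model


if __name__ == "__main__":
    memory = EpisodicMemory()
    memory.store("Tom Cruise's mother is Mary Lee Pfeiffer.")  # forward fact
    print(answer("Who is Mary Lee Pfeiffer's son?", memory))   # reversed query
```

The design point the sketch is meant to convey: the forward-direction fact remains available verbatim at retrieval time, so the reversed query can be answered in context even if purely parametric training would not generalize to it.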
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Martha_White1
Submission Number: 7344