Re:Frame - Retrieving Experience From Associative Memory

Published: 05 Mar 2025, Last Modified: 20 Apr 2025 · NFAM 2025 Poster · CC BY 4.0
Track: long paper (up to 5 pages)
Keywords: Associative memory, Reinforcement Learning, Memory, POMDP, Transformer
TL;DR: Re:Frame - a framework that augments RL agents with an Associative Memory Buffer that stores experiences required to support decision-making in memory-intensive tasks.
Abstract: Transformers have demonstrated strong performance in offline reinforcement learning (RL) for Markovian tasks, due to their ability to process historical information efficiently. However, in partially observable environments, where agents must rely on past experiences to make decisions in the present, transformers are limited by their fixed context window and struggle to capture long-term dependencies. Extending this window indefinitely is infeasible due to the quadratic complexity of the attention mechanism, which led us to explore alternative memory-handling approaches. In neurobiology, associative memory allows the brain to link different stimuli by activating neurons simultaneously, creating associations between experiences that occurred around the same time. Motivated by this concept, we introduce **Re:Frame** (**R**etrieving **E**xperience **Fr**om **A**ssociative **Me**mory), a novel RL algorithm that enables agents to better utilize their past experiences. Re:Frame incorporates a long-term memory mechanism that enhances decision-making in complex tasks by integrating past and present information.
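The abstract describes augmenting an agent with an associative memory buffer that stores past experiences and retrieves the ones relevant to the current observation. A minimal sketch of that idea is shown below; the class name, FIFO eviction, and cosine-similarity retrieval are illustrative assumptions, not the Re:Frame implementation.

```python
import numpy as np

class AssociativeMemoryBuffer:
    """Illustrative associative memory buffer (not the paper's code):
    stores (key, value) embedding pairs and retrieves the values whose
    keys are most similar to a query embedding."""

    def __init__(self, capacity: int, dim: int):
        self.capacity = capacity
        self.keys = np.empty((0, dim))
        self.values = np.empty((0, dim))

    def write(self, key: np.ndarray, value: np.ndarray) -> None:
        # Append the new pair; evict the oldest entries beyond capacity
        # (FIFO eviction is an assumption for this sketch).
        self.keys = np.vstack([self.keys, key])[-self.capacity:]
        self.values = np.vstack([self.values, value])[-self.capacity:]

    def retrieve(self, query: np.ndarray, k: int = 4) -> np.ndarray:
        # Cosine similarity between the query and all stored keys.
        sims = (self.keys @ query) / (
            np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-8
        )
        top = np.argsort(-sims)[:k]
        # Retrieved memories would then be fused with the agent's
        # current context (e.g. via cross-attention) before acting.
        return self.values[top]
```

In use, the agent would write an embedding of each transition into the buffer and, at decision time, retrieve the top-k associated memories to condition its policy alongside the limited transformer context.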
Submission Number: 33
