Re:Frame - Retrieving Experience From Associative Memory
Track: long paper (up to 5 pages)
Keywords: Associative Memory, Reinforcement Learning, Memory, POMDP, Transformer
TL;DR: Re:Frame is a framework that augments RL agents with an Associative Memory Buffer storing the past experiences needed for decision-making in memory-intensive tasks.
Abstract: Transformers have demonstrated strong performance in offline reinforcement learning (RL) for Markovian tasks, owing to their ability to process historical information efficiently. However, in partially observable environments, where agents must rely on past experiences to make decisions in the present, transformers are limited by their fixed context window and struggle to capture long-term dependencies. Extending this window indefinitely is infeasible because of the quadratic complexity of the attention mechanism, which motivates alternative approaches to memory handling. In neurobiology, associative memory allows the brain to link different stimuli through simultaneous neuron activation, forming associations between experiences that occur close together in time. Motivated by this biological mechanism, we introduce Re:Frame (Retrieving Experience From Associative Memory), a novel RL algorithm that enables agents to make better use of their past experiences. Re:Frame incorporates a long-term memory mechanism that improves decision-making in complex tasks by integrating past and present information.
Submission Number: 33
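To make the idea in the abstract concrete, below is a minimal, illustrative sketch of an associative memory buffer with similarity-based retrieval. It is an assumption-laden reading of the abstract, not the authors' implementation: the class name `AssociativeMemoryBuffer`, the parameters `capacity`, `embed_dim`, and `top_k`, and the use of cosine similarity as the association measure are all hypothetical.

```python
# Hypothetical sketch of an associative memory buffer for an RL agent.
# Past experiences are stored under embedding keys; at decision time the
# buffer returns the experiences whose keys are most similar to the
# current observation embedding, so they can augment the agent's context.

import numpy as np


class AssociativeMemoryBuffer:
    def __init__(self, capacity: int, embed_dim: int, top_k: int = 4):
        self.capacity = capacity
        self.top_k = top_k
        self.keys = np.empty((0, embed_dim))  # embeddings of stored experiences
        self.values = []                      # the stored experiences themselves

    def write(self, key: np.ndarray, experience) -> None:
        """Store an experience under its embedding key, evicting the
        oldest entry once capacity is exceeded (FIFO eviction)."""
        self.keys = np.vstack([self.keys, key[None, :]])
        self.values.append(experience)
        if len(self.values) > self.capacity:
            self.keys = self.keys[1:]
            self.values.pop(0)

    def retrieve(self, query: np.ndarray) -> list:
        """Return the top-k stored experiences ranked by cosine
        similarity between the query embedding and the stored keys."""
        if not self.values:
            return []
        norms = np.linalg.norm(self.keys, axis=1) * np.linalg.norm(query) + 1e-8
        sims = (self.keys @ query) / norms
        top = np.argsort(sims)[::-1][: self.top_k]
        return [self.values[i] for i in top]


# Usage: write observation embeddings as the episode unfolds, then
# retrieve the most associated past experiences at decision time.
buffer = AssociativeMemoryBuffer(capacity=128, embed_dim=8)
rng = np.random.default_rng(0)
for t in range(10):
    buffer.write(rng.normal(size=8), {"step": t})
print(buffer.retrieve(rng.normal(size=8)))
```

Cosine similarity stands in here for whatever association mechanism Re:Frame actually uses; the sketch only illustrates the general pattern the abstract describes, in which writes store (embedding, experience) pairs and reads surface associated past experiences to integrate with the agent's present information.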