Two-Memory Reinforcement Learning

Blind Submission

Published: 20 Jul 2023, Last Modified: 29 Aug 2023 · EWRL16 · Readers: Everyone
Abstract: While deep reinforcement learning has shown important empirical success, it tends to learn relatively slowly due to slow propagation of reward information and slow updates of parametric neural networks. Non-parametric episodic memory, on the other hand, provides a faster-learning alternative that does not require representation learning and uses the maximum episodic return as the state-action value for action selection. Episodic memory and reinforcement learning each have their own strengths and weaknesses. Notably, humans can leverage multiple memory systems concurrently during learning and benefit from all of them. In this work, we propose a method called the Two-Memory reinforcement learning agent (2M) that combines episodic memory and reinforcement learning to distill the strengths of both. The 2M agent exploits the speed of the episodic memory component and the optimality and generalization capacity of the reinforcement learning component so that the two complement each other. Our experiments demonstrate that the 2M agent is more data efficient and outperforms both pure episodic memory and pure reinforcement learning, as well as a state-of-the-art memory-augmented RL agent. Moreover, the proposed approach provides a general framework that can be used to combine any episodic memory agent with other off-policy reinforcement learning algorithms.
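
The abstract specifies the episodic-memory component (the maximum episodic return stored per state-action pair and used as its value) and states that any off-policy RL algorithm can fill the other role, but it does not give the exact combination rule. The following is a minimal Python sketch under those stated properties only: the tabular Q-learning component and the probabilistic switch `p_em` between the two policies are illustrative assumptions for this sketch, not the paper's actual 2M mechanism.

```python
import random
from collections import defaultdict


class EpisodicMemory:
    """Non-parametric memory: stores the maximum episodic return observed
    so far for each (state, action) pair and uses it as that pair's value
    estimate, as described in the abstract."""

    def __init__(self):
        self.values = defaultdict(float)

    def update(self, trajectory, gamma=0.99):
        # Backward pass over one episode: accumulate the discounted return
        # from each step and keep the maximum return seen so far.
        g = 0.0
        for state, action, reward in reversed(trajectory):
            g = reward + gamma * g
            key = (state, action)
            self.values[key] = max(self.values[key], g)

    def best_action(self, state, actions):
        return max(actions, key=lambda a: self.values[(state, a)])


class TwoMemoryAgentSketch:
    """Illustrative two-memory agent: on each step it acts either from the
    episodic-memory values or from an off-policy RL learner (tabular
    Q-learning here). The mixing probability `p_em` and the Q-learning
    choice are assumptions of this sketch."""

    def __init__(self, actions, p_em=0.5, alpha=0.1, gamma=0.99, eps=0.1):
        self.actions = actions
        self.p_em = p_em
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.em = EpisodicMemory()
        self.q = defaultdict(float)  # RL state-action value estimates

    def act(self, state):
        if random.random() < self.eps:
            return random.choice(self.actions)  # epsilon-greedy exploration
        if random.random() < self.p_em:
            return self.em.best_action(state, self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn_step(self, s, a, r, s_next):
        # Standard off-policy Q-learning update; any off-policy RL
        # algorithm could be substituted here.
        target = r + self.gamma * max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

    def learn_episode(self, trajectory):
        # After each episode, refresh the episodic memory with the
        # collected (state, action, reward) tuples.
        self.em.update(trajectory, self.gamma)
```
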
Already Accepted Paper At Another Venue: already accepted somewhere else


Paper Decision

Decision by Program Chairs

EWRL 2023 Workshop Program Chairs
19 Jul 2023, 13:02 · EWRL 2023 Workshop Paper33 Decision · Readers: Everyone
Decision: Accept