Knowledge Retention in Continual Model-Based Reinforcement Learning

Published: 09 Oct 2024, Last Modified: 02 Dec 2024 · NeurIPS 2024 Workshop IMOL Poster · CC BY 4.0
Track: Full track
Keywords: Continual learning, model-based reinforcement learning, intrinsic motivation, catastrophic forgetting
TL;DR: DRAGO is an algorithm for continual model-based reinforcement learning that addresses catastrophic forgetting and improves the incremental development of world models across a sequence of tasks.
Abstract: We propose DRAGO, a novel approach for continual model-based reinforcement learning aimed at improving the incremental development of world models across a sequence of tasks that differ in their reward functions but not in their state space or dynamics. DRAGO comprises two key components: $\textit{Synthetic Experience Rehearsal}$, which leverages generative models to create synthetic experiences from past tasks, allowing the agent to reinforce previously learned dynamics without storing data, and $\textit{Regaining Memories Through Exploration}$, which introduces an intrinsic reward mechanism that guides the agent toward revisiting relevant states from prior tasks. Together, these components enable the agent to maintain a comprehensive and continually developing world model, facilitating more effective learning and adaptation across diverse environments. Empirical evaluations demonstrate that DRAGO preserves knowledge across tasks and achieves superior performance in various continual learning scenarios.
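The full paper is not reproduced here, so the following is only a minimal Python sketch of how the two components named in the abstract might fit together. The names (`world_model`, `generative_model`, `world_model_prev`) and the exact loss and reward formulations are assumptions made for illustration, not the paper's actual implementation.

```python
# Illustrative sketch of the two DRAGO components described in the abstract.
# All names and formulations here are assumptions, not the paper's API.

import torch
import torch.nn.functional as F


def rehearsal_world_model_loss(world_model, generative_model, real_batch,
                               n_synthetic=64):
    """Synthetic Experience Rehearsal (sketch): train the world model on real
    transitions from the current task plus transitions sampled from a
    generative model of past tasks, so previously learned dynamics are
    rehearsed without storing old data."""
    # Prediction loss on real transitions from the current task.
    s, a, s_next = real_batch
    loss_current = F.mse_loss(world_model(s, a), s_next)

    # Prediction loss on synthetic transitions replayed for past tasks.
    s_syn, a_syn, s_next_syn = generative_model.sample(n_synthetic)
    loss_rehearsal = F.mse_loss(world_model(s_syn, a_syn), s_next_syn)

    return loss_current + loss_rehearsal


def regaining_memories_reward(world_model_prev, state, action, next_state):
    """Regaining Memories Through Exploration (sketch): an intrinsic reward
    that is high where the previous task's world model already predicts well,
    nudging the agent to revisit states that were relevant in prior tasks."""
    with torch.no_grad():
        prediction_error = F.mse_loss(world_model_prev(state, action), next_state)
    # Low error under the old model -> familiar region -> higher intrinsic reward.
    return torch.exp(-prediction_error)
```

Under these assumptions, the rehearsal loss keeps old dynamics alive inside the world model, while the intrinsic reward steers the agent to collect fresh real data from regions it understood before, so the two mechanisms complement each other.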
Submission Number: 55