Keywords: continual reinforcement learning, meta learning, multi-task learning
TL;DR: We develop a novel dual learning algorithm comprising a fast learner and a meta learner to address continual reinforcement learning problems.
Abstract: Inspired by the human learning and memory system, particularly the interplay between the hippocampus and cerebral cortex, this study proposes a dual-learning framework comprising a fast learner and a meta learner to address continual Reinforcement Learning~(RL) problems. These two learners are coupled to perform distinct but complementary roles: the fast learner focuses on knowledge transfer, while the meta learner ensures knowledge integration. Unlike traditional multi-task RL approaches that share knowledge via average return maximization, our meta learner incrementally integrates new experiences by explicitly minimizing catastrophic forgetting, and then transfers accumulated knowledge to a single fast learner. To support rapid adaptation to new environments, we introduce an adaptive meta warm-up mechanism that selectively leverages past knowledge. We perform experiments on pixel-based benchmarks and continuous control problems, demonstrating the strong continual learning performance of our proposed dual learning approach relative to baseline methods.
Supplementary Material: zip
Primary Area: Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
Submission Number: 18845