On The Role of Forgetting in Fine-Tuning Reinforcement Learning Models

Published: 03 Mar 2023, Last Modified: 20 Apr 2023
RRL 2023 Poster
Keywords: reinforcement learning, continual learning, catastrophic forgetting, fine-tuning
TL;DR: We show that catastrophic forgetting can occur during the fine-tuning of reinforcement learning models, and we use tools from continual learning to mitigate it.
Abstract: Recently, foundation models have achieved remarkable results in fields such as computer vision and language processing. Although there has been a significant push to introduce similar approaches in reinforcement learning, these have not yet succeeded on a comparable scale. In this paper, we take a step towards understanding and closing this gap by highlighting one of the problems specific to foundation RL models, namely the data shift occurring during fine-tuning. We show that fine-tuning on compositional tasks, where parts of the environment might only be available after a long training period, is inherently prone to catastrophic forgetting. In such a scenario, a pre-trained model might forget useful knowledge before even seeing the parts of the state space it can solve. We provide examples of catastrophic forgetting in both a grid-world and realistic robotic scenarios. Finally, we show how this problem can be mitigated by using tools from continual learning. We discuss the potential impact of this finding and propose further research directions.
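
The abstract only says that "tools from continual learning" are used to mitigate forgetting, without naming a specific method. As a purely illustrative sketch, the snippet below shows one common such tool, an elastic-weight-consolidation-style (EWC) penalty added to the fine-tuning loss in PyTorch; the `Policy` network, the uniform Fisher placeholder, and the stand-in RL objective are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical policy network standing in for a pre-trained RL model.
class Policy(nn.Module):
    def __init__(self, obs_dim=8, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim)
        )

    def forward(self, obs):
        return self.net(obs)

def ewc_penalty(model, ref_params, fisher, weight=1.0):
    """Quadratic penalty keeping parameters close to their pre-trained values,
    weighted by a (diagonal) Fisher information estimate."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - ref_params[name]) ** 2).sum()
    return weight * loss

# Usage: snapshot the pre-trained weights and a Fisher estimate once, then add
# the penalty to the usual RL objective at every fine-tuning update.
policy = Policy()
ref_params = {n: p.detach().clone() for n, p in policy.named_parameters()}
fisher = {n: torch.ones_like(p) for n, p in policy.named_parameters()}  # placeholder estimate

obs = torch.randn(32, 8)
rl_loss = policy(obs).pow(2).mean()  # stand-in for the actual RL fine-tuning loss
total_loss = rl_loss + ewc_penalty(policy, ref_params, fisher, weight=10.0)
total_loss.backward()
```

The penalty anchors the fine-tuned policy to its pre-trained weights, which is one way such pre-trained knowledge could be preserved while the later, compositional parts of the task are still unreachable.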
Track: Technical Paper
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.