Continual Reinforcement Learning using Feature Map Attention Loss

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: continual learning, reinforcement learning
Abstract: Despite the growing interest in continual learning (CL) in recent years, continual reinforcement learning (CRL) remains a challenging task because deep neural networks must infer appropriate actions from never-seen observations of new tasks while sustaining performance on old tasks. To address this problem, some CRL algorithms combine the regularization-based methods that constrain the weights with the replay-based methods used in conventional CL. However, these approaches are costly to train: replay requires a considerable amount of memory, and the regularization terms are complex. In this paper, we propose a simple framework for preserving knowledge among related sequential tasks, namely Feature Map Attention Loss (FMAL). Our method leverages the general CNN features of a model that can perform all of the sequential tasks well, and an attention mechanism is used to extract the essential features to transfer. Like existing CRL methods, FMAL uses both regularization and replay; however, it requires much less memory for learning, and its regularization term is comparatively simple. We evaluate FMAL against state-of-the-art algorithms. The experimental results show that our method outperforms these baselines with higher rewards.
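The abstract does not spell out the form of the loss. As a minimal sketch only, assuming FMAL resembles standard spatial attention transfer between a frozen copy of the model (old tasks) and the current model (new task), a regularizer over CNN feature maps might look like the following; the function names `attention_map` and `feature_map_attention_loss` are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F


def attention_map(fmap: torch.Tensor) -> torch.Tensor:
    """Collapse a CNN feature map (B, C, H, W) into a per-sample
    spatial attention map (B, H*W): sum squared activations over
    channels, then L2-normalize each flattened map."""
    att = fmap.pow(2).sum(dim=1).flatten(start_dim=1)  # (B, H*W)
    return F.normalize(att, p=2, dim=1)


def feature_map_attention_loss(fmaps_new, fmaps_old):
    """Hypothetical FMAL-style regularizer: mean squared distance
    between attention maps of the current model and a frozen copy
    trained on earlier tasks, summed over selected CNN layers."""
    return sum(
        (attention_map(fn) - attention_map(fo)).pow(2).mean()
        for fn, fo in zip(fmaps_new, fmaps_old)
    )
```

Under this reading, the attention maps act as a compact summary of "where" each convolutional layer attends, so matching them preserves old-task behavior at far lower memory cost than storing full replay trajectories.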
One-sentence Summary: Our method leverages the general CNN features of a model that can perform all of the sequential tasks well, using an attention mechanism to extract the essential features to transfer.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=csiNAk7CvlW