Continual Learning via Adaptive Neuron Selection

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: continual learning, knowledge transfer, neural network, neuron selection, deep learning
TL;DR: This paper presents a novel continual learning solution with adaptive neuron selection.
Abstract: Continual learning (CL) aims to learn a sequence of tasks without losing previously acquired knowledge. Early efforts achieved promising results in overcoming the catastrophic forgetting (CF) problem. Consequently, contemporary studies have turned to investigating whether learning a sequence of tasks can be facilitated from the perspective of knowledge consolidation. However, existing solutions either still suffer from severe forgetting or share only narrow knowledge between new and previous tasks. This paper presents a novel Continual Learning solution with Adaptive Neuron Selection (CLANS), which treats the neurons used by earlier tasks as a knowledge pool and grows it by a small margin via reinforcement learning. The adaptive neuron selection then enables knowledge consolidation for both old and new tasks in addition to overcoming the CF problem. Experimental results on four datasets widely used in CL evaluations demonstrate that CLANS outperforms state-of-the-art baselines.
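The abstract does not specify the mechanism, but the minimal PyTorch sketch below illustrates the general idea of gating a layer's neurons with per-task masks drawn from a shared pool. All names here (MaskedLinear, select_neurons) and the top-k scoring rule are hypothetical illustrations, not the paper's method: CLANS reportedly learns the selection via reinforcement learning, for which a greedy top-k stand-in is substituted.

import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Linear layer whose output neurons are gated by a per-task binary mask."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.task_masks = {}  # task_id -> binary mask over output neurons

    def set_task_mask(self, task_id, mask):
        self.task_masks[task_id] = mask

    def forward(self, x, task_id):
        out = self.linear(x)
        mask = self.task_masks.get(task_id)
        if mask is not None:
            out = out * mask  # zero out neurons not selected for this task
        return out

def select_neurons(scores, k):
    """Pick the k highest-scoring neurons from the pool. This greedy rule is a
    stand-in for the learned selection policy (the paper uses reinforcement
    learning instead)."""
    mask = torch.zeros_like(scores)
    mask[scores.topk(k).indices] = 1.0
    return mask

# Usage: select a subset of pooled neurons for a new task, then run a forward pass.
layer = MaskedLinear(16, 32)
scores = torch.rand(32)  # placeholder per-neuron "usefulness" scores
layer.set_task_mask(task_id=1, mask=select_neurons(scores, k=8))
y = layer(torch.randn(4, 16), task_id=1)

Keeping one mask per task, as above, lets earlier tasks keep their original neuron subsets untouched, which is one plausible way such a scheme could avoid catastrophic forgetting while still sharing neurons across tasks.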
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Neuroscience and Cognitive Science (e.g., neural coding, brain-computer interfaces)
Supplementary Material: zip
11 Replies