Continual Learning with Group-wise Neuron Normalization

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: continual learning, group-wise neuron normalization, experience replay, subset of network weights competition
Abstract: Continual learning studies methods that accommodate changes in the data distribution, allowing a model to adapt and evolve as it receives data continuously. Importance- and regularization-based weight-update methods that rely on heuristics may be ineffective, while recent enhanced experience replay methods show promising results but can add computational cost. In this paper, we propose a simple, parameter-free normalization over groups of distinct neurons at the penultimate layer of the network, together with a straightforward experience replay algorithm. We argue that this normalization lets the network balance its capacity across tasks, reducing harmful interference between tasks and mitigating forgetting. Our evaluation shows that normalization over groups of neurons has a drastic impact on performance: we demonstrate improved retained accuracy and backward transfer with respect to related state-of-the-art methods while remaining computationally efficient.
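
The abstract does not include implementation details, but a minimal sketch of what parameter-free normalization over groups of penultimate-layer neurons could look like is given below. The class name, the group layout, and the exact operation (per-group standardization without learnable affine parameters, in the spirit of GroupNorm) are assumptions for illustration, not the authors' method.

```python
import torch
import torch.nn as nn


class GroupwiseNeuronNorm(nn.Module):
    """Hypothetical sketch: normalize disjoint groups of penultimate-layer
    neurons independently (zero mean, unit variance per group), with no
    learnable parameters, so each group's activations stay on a comparable
    scale across tasks."""

    def __init__(self, num_features: int, num_groups: int, eps: float = 1e-5):
        super().__init__()
        assert num_features % num_groups == 0
        self.num_groups = num_groups
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) penultimate-layer activations
        b, f = x.shape
        x = x.view(b, self.num_groups, f // self.num_groups)
        mean = x.mean(dim=-1, keepdim=True)
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        x = (x - mean) / torch.sqrt(var + self.eps)
        return x.view(b, f)


# Usage: standardize 512 penultimate-layer features in 8 groups of 64 neurons.
feats = torch.randn(32, 512)
norm = GroupwiseNeuronNorm(num_features=512, num_groups=8)
out = norm(feats)
```
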
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip