Topological Continual Learning with Wasserstein Distance and Barycenter

Published: 21 Oct 2022, Last Modified: 26 Mar 2024
NeurIPS 2022 Workshop MetaLearn Poster
Keywords: Topological data analysis, continual learning
TL;DR: This paper proposes a novel brain-inspired method for continual learning from a topological perspective.
Abstract: Continual learning in neural networks suffers from a phenomenon called catastrophic forgetting, in which a network quickly forgets what was learned in a previous task. The human brain, however, is able to continually learn new tasks and accumulate knowledge throughout life. Neuroscience findings suggest that continual learning success in the human brain is potentially associated with its modular structure and memory consolidation mechanisms. In this paper, we propose a novel topological regularization that penalizes cycle structure in a neural network during training using principled theory from persistent homology and optimal transport. The penalty encourages the network to learn modular structure during training. The penalization is based on the closed-form expressions of the Wasserstein distance and barycenter for the topological features of a 1-skeleton representation of the network. Our topological continual learning method combines the proposed regularization with a tiny episodic memory to mitigate forgetting. We demonstrate that our method is effective in both shallow and deep network architectures for multiple image classification datasets.
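The closed-form quantities mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the authors' released implementation (see the Community Implementations link for that). It assumes that the 1-skeleton is a layer's weight matrix viewed as an undirected weighted graph, that cycle births are the weights of edges left out of the maximum spanning tree, that the 1-D Wasserstein distance between equal-size birth sets matches sorted values, and that the barycenter is the element-wise mean of sorted birth sets. The names `cycle_births`, `wasserstein_penalty`, and `wasserstein_barycenter` are hypothetical.

```python
# Minimal illustrative sketch (NOT the authors' released code).  Assumptions,
# introduced here for illustration only:
#   * the network's 1-skeleton is a layer's weight matrix viewed as an
#     undirected weighted graph on the neurons;
#   * cycle (1-dimensional) births are the weights of edges left out of the
#     maximum spanning tree of that graph;
#   * the closed-form 1-D Wasserstein distance between two equal-size birth
#     sets matches sorted values, and the barycenter averages sorted values.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree


def cycle_births(adj: np.ndarray) -> np.ndarray:
    """Sorted weights of edges outside the maximum spanning tree of `adj`."""
    nonzero = adj != 0
    # Transform weights so the MINIMUM spanning tree of `trans` coincides
    # with the MAXIMUM spanning tree of `adj` (zeros stay absent edges).
    trans = np.zeros_like(adj, dtype=float)
    trans[nonzero] = adj[nonzero].max() + 1.0 - adj[nonzero]
    tree = minimum_spanning_tree(trans).toarray() != 0
    in_tree = tree | tree.T
    iu = np.triu_indices_from(adj, k=1)
    mask = nonzero[iu] & ~in_tree[iu]
    return np.sort(adj[iu][mask])


def wasserstein_penalty(births_a: np.ndarray, births_b: np.ndarray) -> float:
    """Squared 2-Wasserstein distance between equal-size 1-D birth sets:
    sum of squared differences of the sorted values."""
    return float(np.sum((np.sort(births_a) - np.sort(births_b)) ** 2))


def wasserstein_barycenter(birth_sets: list) -> np.ndarray:
    """Closed-form barycenter of equal-size 1-D birth sets: element-wise mean
    of the sorted values."""
    return np.mean(np.stack([np.sort(b) for b in birth_sets]), axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy "layer" interpreted as a dense weighted graph on 6 neurons.
    w = np.abs(rng.normal(size=(6, 6)))
    w = np.triu(w, 1) + np.triu(w, 1).T
    births_now = cycle_births(w)
    # Hypothetical target births, e.g. a barycenter from earlier tasks.
    births_ref = wasserstein_barycenter(
        [rng.uniform(size=births_now.shape) for _ in range(3)])
    print("topological penalty:", wasserstein_penalty(births_now, births_ref))
```

In this reading, the penalty would be added to the task loss so that the network's cycle births are pulled toward a reference set (for example, a barycenter summarizing earlier tasks); how the reference is maintained across tasks is not specified by the abstract.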
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2210.02661/code)