Differentiable Hebbian Plasticity for Continual Learning

Anonymous

16 May 2019 (modified: 05 May 2023) · AMTL 2019
Keywords: continual learning, catastrophic forgetting, Hebbian learning, synaptic plasticity, neural networks
TL;DR: Hebbian plastic weights can behave as a compressed episodic memory in neural networks, improving their ability to alleviate catastrophic forgetting in continual learning.
Abstract: Catastrophic forgetting poses a grand challenge for continual learning systems: it prevents neural networks from retaining old knowledge while learning new tasks sequentially. We propose a Differentiable Hebbian Plasticity (DHP) Softmax layer, which adds a fast-learning plastic component to the slow weights of the softmax output layer. The DHP Softmax behaves as a compressed episodic memory that reactivates existing memory traces while creating new ones. We demonstrate the flexibility of our model by combining it with existing well-known consolidation methods to prevent catastrophic forgetting. We evaluate our approach on the Permuted MNIST and Split MNIST benchmarks, and introduce Imbalanced Permuted MNIST, a dataset that combines the challenges of class imbalance and concept drift. Our model requires no additional hyperparameters and outperforms comparable baselines by reducing forgetting.
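As a rough illustration of the mechanism described in the abstract, below is a minimal PyTorch-style sketch of a softmax output layer whose effective weights combine slow weights with a fast Hebbian plastic component. It follows the general differentiable-plasticity formulation; the module name, the parameters (`alpha`, `eta`), and the exact update rule are illustrative assumptions, not taken from the paper's code.

```python
# Minimal sketch (assumed formulation): effective weights = slow weights W
# plus a per-connection plasticity coefficient alpha times a Hebbian trace.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlasticSoftmax(nn.Module):
    def __init__(self, in_features, num_classes):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(in_features, num_classes))      # slow weights
        self.alpha = nn.Parameter(0.01 * torch.randn(in_features, num_classes))  # plasticity coefficients
        self.eta = nn.Parameter(torch.tensor(0.01))                              # learned trace update rate

    def initial_hebb(self, device=None):
        # The Hebbian trace is state rather than a parameter; reset it per task/episode.
        return torch.zeros_like(self.w, device=device)

    def forward(self, x, hebb):
        # Logits from slow weights plus the fast (plastic) component.
        logits = x @ (self.w + self.alpha * hebb)
        y = F.softmax(logits, dim=-1)
        # Hebbian update: batch-averaged outer product of pre- and post-synaptic
        # activity, blended with the previous trace.
        outer = torch.einsum('bi,bj->ij', x, y) / x.shape[0]
        new_hebb = self.eta * outer + (1.0 - self.eta) * hebb
        return logits, new_hebb
```

In this sketch the slow weights and plasticity coefficients are trained by backpropagation as usual, while the Hebbian trace is updated within each forward pass, acting as the fast, memory-like component.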