Storing Encoded Episodes as Concepts for Continual Learning

Jun 12, 2020 (edited Jul 13, 2020) · ICML 2020 Workshop LifelongML Blind Submission · Readers: Everyone
  • Student First Author: Yes
  • Previously Published: Not published before. An extended version is under review at NeurIPS 2020.
  • TL;DR: We train autoencoders with Neural Style Transfer to replay data from old tasks for continual learning. The encoded features are converted into centroids and covariances to keep the memory footprint from growing while keeping classifier performance stable.
  • Keywords: Continual learning, Cognitively-inspired learning, Class-incremental learning, Catastrophic forgetting
  • Abstract: The two main challenges faced by continual learning approaches are catastrophic forgetting and memory limitations on the storage of data. To cope with these challenges, we propose a novel, cognitively-inspired approach that trains autoencoders with Neural Style Transfer to encode and store images. Reconstructed images from encoded episodes are replayed when training the classifier model on a new task to avoid catastrophic forgetting. The loss function for the reconstructed images is weighted to reduce its effect during classifier training and thereby cope with image degradation. When the system runs out of memory, the encoded episodes are converted into centroids and covariance matrices, which are used to generate pseudo-images during classifier training, keeping classifier performance stable with less memory. Our approach increases classification accuracy by 13-17% over state-of-the-art methods on benchmark datasets, while requiring 78% less storage space.
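The abstract's centroid/covariance memory compression can be illustrated with a minimal sketch: per class, the stored encodings are collapsed into a Gaussian (mean and covariance), from which pseudo-features can later be sampled for replay. This is not the authors' code; it assumes flat feature vectors and a single Gaussian per class, and the function names (`compress_class_memory`, `sample_pseudo_features`) are hypothetical.

```python
import numpy as np

def compress_class_memory(features):
    """Collapse a class's stored encoded episodes (an N x D array)
    into a centroid and a covariance matrix. Hypothetical helper,
    not from the paper."""
    centroid = features.mean(axis=0)
    covariance = np.cov(features, rowvar=False)  # columns = feature dims
    return centroid, covariance

def sample_pseudo_features(centroid, covariance, n_samples):
    """Draw pseudo-feature vectors from the fitted Gaussian; in the
    paper's setting these would be decoded into pseudo-images for
    replay during classifier training."""
    return np.random.multivariate_normal(centroid, covariance, size=n_samples)

# Example: 200 stored 64-dim encodings (shapes assumed) compressed to
# one Gaussian, then 32 pseudo-features sampled for replay.
rng = np.random.default_rng(0)
stored = rng.normal(size=(200, 64))
mu, sigma = compress_class_memory(stored)
replay = sample_pseudo_features(mu, sigma, 32)
print(replay.shape)  # (32, 64)
```

Storing only `mu` (D values) and `sigma` (D x D values) instead of N encoded episodes is what keeps the memory footprint fixed once the episode budget is exhausted.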