Transfer Incremental Learning using Data Augmentation

09 Feb 2018 (modified: 05 May 2023), ICLR 2018 Workshop Submission
Abstract: Due to catastrophic forgetting, deep learning remains poorly suited to incremental learning of new classes and examples over time. In this contribution, we introduce Transfer Incremental Learning using Data Augmentation (TILDA). TILDA combines transfer learning from a pre-trained Deep Neural Network (DNN) used as a feature extractor, a Nearest Class Mean (NCM) inspired classifier, and a majority vote over data-augmented training and test vectors. The resulting method learns new examples or classes on the fly with a very limited computational and memory footprint. We perform experiments on challenging vision datasets and obtain performance significantly better than existing incremental methods.
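The abstract outlines a pipeline of frozen feature extraction, incremental class-mean updates, and a majority vote over augmented views. The following minimal sketch illustrates that pipeline under stated assumptions; it is not the authors' reference implementation, and both `extract_features` and `augment` are hypothetical placeholders standing in for a pre-trained CNN and a real augmentation policy.

```python
# Minimal sketch of the incremental pipeline described in the abstract
# (an assumption-laden illustration, not the authors' code).

import numpy as np

rng = np.random.default_rng(0)

def extract_features(image):
    """Placeholder for a frozen, pre-trained DNN feature extractor."""
    return rng.standard_normal(512)  # hypothetical 512-d feature vector

def augment(image, n_aug=5):
    """Placeholder for data augmentation (crops, flips, ...)."""
    return [image for _ in range(n_aug)]

class NCMClassifier:
    """Nearest Class Mean classifier, updated one example at a time."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def add_example(self, image, label):
        # Update the running class mean with augmented feature vectors.
        for view in augment(image):
            f = extract_features(view)
            f = f / (np.linalg.norm(f) + 1e-12)  # L2-normalise the feature
            self.sums[label] = self.sums.get(label, 0.0) + f
            self.counts[label] = self.counts.get(label, 0) + 1

    def predict(self, image):
        labels = list(self.sums)
        means = np.stack([self.sums[l] / self.counts[l] for l in labels])
        votes = []
        # Majority vote over augmented versions of the test image.
        for view in augment(image):
            f = extract_features(view)
            f = f / (np.linalg.norm(f) + 1e-12)
            votes.append(labels[np.argmin(np.linalg.norm(means - f, axis=1))])
        return max(set(votes), key=votes.count)
```

Because only per-class running sums and counts are stored, adding a new class or example never requires revisiting past data, which is what keeps the memory and compute footprint small.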
TL;DR: Combining NCM-inspired classifiers with quantized outputs of a deep CNN enables lightweight one-shot incremental learning.
Keywords: Computer vision, Deep learning, Supervised Learning, Transfer Learning