Unsupervised Class-Incremental Learning through Confusion

28 Sept 2020 (modified: 22 Oct 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: Incremental Learning, Unsupervised Learning, Continual Learning, Novelty Detection, Out-of-Distribution Detection
Abstract: While many works on Continual Learning have shown promising results for mitigating catastrophic forgetting, they have relied on supervised training. To learn successfully in a label-agnostic incremental setting, a model must distinguish between learned and novel classes in order to decide which incoming samples to train on. We introduce a novelty detection method that leverages the network confusion caused by training on incoming data as a new class. We find that incorporating a class imbalance into this detection procedure substantially enhances performance. The effectiveness of our approach is demonstrated across a set of common image classification benchmarks: MNIST, SVHN, CIFAR-10, and CIFAR-100.
One-sentence Summary: This paper introduces a novel OOD detection method that leverages network confusion to learn in an unsupervised incremental setting.
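To make the described mechanism concrete, the sketch below illustrates one possible form of confusion-based novelty detection in PyTorch. It is a minimal illustration under stated assumptions, not the paper's implementation: the function name `detect_novel_class`, the fixed optimizer settings, the confusion `threshold`, and the assumption that the classifier head already reserves a logit for a candidate new class are all illustrative. The idea, following the abstract, is to fine-tune a copy of the model with the unlabeled incoming batch treated as a new class and then measure how strongly predictions on previously learned data shift: heavy shifts (confusion) suggest the incoming data overlaps an existing class, while stable predictions suggest a genuinely novel class.

```python
# Minimal sketch of confusion-based novelty detection (assumed PyTorch setup).
# `model` is any classifier whose head reserves one extra logit for a candidate
# new class; `old_loader` must iterate previously learned data in a fixed order
# (shuffle=False). Names, thresholds, and optimizer settings are illustrative.
import copy
import torch
import torch.nn.functional as F


def detect_novel_class(model, old_loader, incoming_x, new_label,
                       steps=50, lr=0.01, threshold=0.3, device="cpu"):
    """Return True if the incoming batch looks like a novel class."""
    model = model.to(device)

    # Cache the current predictions on previously learned data.
    model.eval()
    with torch.no_grad():
        old_preds = [model(x.to(device)).argmax(dim=1) for x, _ in old_loader]

    # Fine-tune a throwaway copy of the model, labeling every incoming sample
    # as the candidate new class. (The abstract notes that introducing a class
    # imbalance during this step substantially improves detection; how that
    # imbalance is constructed is not shown in this sketch.)
    probe = copy.deepcopy(model)
    opt = torch.optim.SGD(probe.parameters(), lr=lr)
    y_new = torch.full((incoming_x.size(0),), new_label,
                       dtype=torch.long, device=device)
    probe.train()
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(probe(incoming_x.to(device)), y_new)
        loss.backward()
        opt.step()

    # Confusion = fraction of old samples whose predicted label changed.
    probe.eval()
    changed, total = 0, 0
    with torch.no_grad():
        for (x, _), before in zip(old_loader, old_preds):
            after = probe(x.to(device)).argmax(dim=1)
            changed += (after != before).sum().item()
            total += before.numel()
    confusion = changed / max(total, 1)

    # If the incoming data duplicates a learned class, retraining it under a new
    # label scrambles the old predictions (high confusion); if it is truly novel,
    # the old classes remain largely intact (low confusion).
    return confusion < threshold
```

In a label-agnostic incremental pipeline of the kind the abstract describes, a caller would presumably run such a check on each incoming batch, assigning a new label to batches flagged as novel and otherwise merging the samples into the matching learned class before training.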
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2104.04450/code)
Reviewed Version (pdf): https://openreview.net/references/pdf?id=ym46ql01PO
9 Replies
