Unsupervised Progressive Learning and the STAM Architecture

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission
Keywords: continual learning, unsupervised learning, representation learning, online learning
Abstract: We first pose the Unsupervised Progressive Learning (UPL) problem: an online representation learning problem in which the learner observes a non-stationary and unlabeled data stream, and identifies a growing number of features that persist over time even though the data is not stored or replayed. To solve the UPL problem we propose the Self-Taught Associative Memory (STAM) architecture. Layered hierarchies of STAM modules learn through a combination of online clustering, novelty detection, forgetting of outliers, and storing only prototypical features rather than specific examples. We evaluate STAM representations using classification and clustering tasks. While no existing learning scenario is directly comparable to UPL, we compare the STAM architecture with two recent continual learning methods: Memory Aware Synapses (MAS) and Gradient Episodic Memory (GEM), both modified to be suitable for the UPL setting.
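To make the abstract's core mechanism concrete, here is a minimal illustrative sketch of online clustering with novelty detection: each input is matched to its nearest stored prototype, a sufficiently distant input spawns a new prototype, and matched prototypes are updated incrementally so that only prototypical features (not raw examples) are retained. The class name, distance threshold, and running-average update rule are assumptions for illustration, not the authors' actual STAM implementation.

```python
import numpy as np

class OnlineClusterer:
    """Hypothetical sketch of online clustering with novelty detection.

    Stores only centroids (prototypes), never raw examples, echoing the
    abstract's "storing only prototypical features" idea. Parameter names
    and the update rule are illustrative assumptions.
    """

    def __init__(self, novelty_threshold=1.0, lr=0.05):
        self.threshold = novelty_threshold  # distance beyond which an input is "novel"
        self.lr = lr                        # step size for moving a matched centroid
        self.centroids = []                 # learned prototypes

    def observe(self, x):
        """Process one input vector online; return the index of its centroid."""
        x = np.asarray(x, dtype=float)
        if not self.centroids:
            self.centroids.append(x.copy())
            return 0
        dists = [np.linalg.norm(x - c) for c in self.centroids]
        i = int(np.argmin(dists))
        if dists[i] > self.threshold:
            # Novelty detected: create a new prototype instead of forcing a match.
            self.centroids.append(x.copy())
            return len(self.centroids) - 1
        # Familiar input: nudge the nearest prototype toward it (running average).
        self.centroids[i] += self.lr * (x - self.centroids[i])
        return i
```

Fed a stream whose distribution shifts over time, such a learner grows its set of prototypes as new modes appear, without replaying past data; STAM layers this kind of module hierarchically and additionally forgets outlier prototypes.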
One-sentence Summary: We pose and solve a new online representation learning problem in which the learner observes a non-stationary and unlabeled data stream, and identifies a growing number of features that persist over time without data storage or replay
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=JbrtuxbstdZ
