Shared Knowledge Lifelong Learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Abstract: In Lifelong Learning (LL), agents continually learn as they encounter new conditions and tasks. Most current LL is limited to a single agent that learns tasks sequentially, with dedicated LL machinery deployed to mitigate the forgetting of old tasks as new ones are learned. This is inherently slow. We propose a new Shared Knowledge Lifelong Learning (SKILL) paradigm, which deploys a population of LL agents that each learn different tasks independently and in parallel. After learning their respective tasks, agents share and consolidate their knowledge over a communication network, so that, in the end, all agents can master all tasks. Our approach relies on a frozen backbone embedded in all agents at manufacturing time, so that only the last-layer head, plus small task-specific adjustments to the backbone (beneficial biases), is learned for each task. To eliminate the need for a task oracle, agents also learn and share summary statistics about their training datasets (Gaussian mixture clusters), or share a few training images, to help other agents assign test samples to the correct head using a Mahalanobis task mapper. On a new, very challenging dataset with 102 image classification tasks, we achieve a significant speedup over 18 LL baselines (e.g., >9,000x over single-agent EWC) while also achieving higher, state-of-the-art accuracy.
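To make the task-mapping idea in the abstract concrete, here is a minimal sketch of a Mahalanobis task mapper: each agent shares the Gaussian-mixture cluster means of its training features, a single shared covariance is estimated from pooled backbone features, and a test sample is routed to the head of the task whose nearest cluster is closest. This is only an illustration under those assumptions; the class and method names (MahalanobisTaskMapper, add_task, assign_task) are hypothetical and not the authors' implementation.

```python
import numpy as np

class MahalanobisTaskMapper:
    """Route a backbone feature vector to the most likely task.

    Illustrative sketch: stores per-task GMM cluster means shared by
    other agents, plus one shared precision (inverse covariance) matrix.
    """

    def __init__(self):
        self.task_ids = []
        self.cluster_means = []   # one (k, d) array of cluster means per task
        self.precision = None     # (d, d) inverse of a shared covariance

    def add_task(self, task_id, means):
        # 'means' are the GMM cluster centers an agent shares for one task.
        self.task_ids.append(task_id)
        self.cluster_means.append(np.asarray(means, dtype=np.float64))

    def fit_precision(self, pooled_features):
        # Estimate one shared covariance from pooled backbone features
        # (assumption for this sketch; a per-task covariance would also work).
        cov = np.cov(np.asarray(pooled_features, dtype=np.float64), rowvar=False)
        self.precision = np.linalg.pinv(cov)

    def assign_task(self, feature):
        # Return the task whose nearest cluster has the smallest squared
        # Mahalanobis distance d^2 = (x - mu)^T P (x - mu) to 'feature'.
        best_task, best_d2 = None, np.inf
        for task_id, means in zip(self.task_ids, self.cluster_means):
            diff = means - feature                                 # (k, d)
            d2 = np.einsum('kd,de,ke->k', diff, self.precision, diff)
            if d2.min() < best_d2:
                best_task, best_d2 = task_id, d2.min()
        return best_task

# Hypothetical usage: features come from the shared frozen backbone, and
# 'heads' maps task ids to the per-task last-layer classifiers.
#   mapper.add_task("flowers102", flower_cluster_means)
#   mapper.fit_precision(pooled_training_features)
#   head = heads[mapper.assign_task(backbone_feature)]
```

At test time the mapper first picks a task for the frozen-backbone feature, and the selected task head then produces the class prediction, which is how the approach avoids needing a task oracle.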