Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Continual Learning, Incremental Learning, Semi-supervised Learning, Open-world
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: OpenACL exploits open-world unlabeled data to address distribution shift across tasks and catastrophic forgetting in continual learning.
Abstract: Continual learning (CL), which involves learning from sequential tasks without forgetting, has mainly been explored in supervised settings where all data are labeled. However, high-quality labeled data may not be available at a large scale due to high labeling costs, making the application of existing CL methods in real-world scenarios challenging. In this paper, we delve into a more practical facet of CL: open-world continual learning, where the training data come from an open-world dataset and are partially labeled and non-i.i.d. Building on the insight that task shifts in continual learning can be viewed as transitions from in-distribution (ID) data to out-of-distribution (OOD) data, we propose OpenACL, a method that explicitly leverages unlabeled OOD data to enhance continual learning. Specifically, OpenACL treats novel classes within OOD data as potential classes for upcoming tasks and mines the underlying patterns in unlabeled open-world data to improve the model's adaptability to those tasks. Furthermore, learning from extensive unlabeled data also helps mitigate catastrophic forgetting. Extensive experiments validate the effectiveness of OpenACL and show the benefit of learning from open-world data.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2066