Diversity-aware Continual Learning with Latent Knowledge Hypergraph

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: continual learning, hypernetwork, hypergraph, diversity awareness
Abstract: Continual learning (CL) refers to the ability of models to learn from a non-stationary data distribution while transferring and protecting past knowledge. The existing CL literature has focused mainly on overcoming catastrophic forgetting, but it often overlooks a critical trade-off between parameter efficiency and capacity saturation. Nearly all existing approaches, whether architecture-stable or architecture-growing, struggle to balance these two objectives, which makes them vulnerable in long-term task-incremental CL under storage constraints. In this paper, we propose a novel CL approach that addresses this trade-off by dynamically expanding the model's weight space in proportion to the actual capacity increase needed by each new task. Specifically, our approach introduces a knowledge hypergraph structure that captures the latent knowledge shared across tasks and leverages it to measure task diversity and estimate the capacity increase required for each new task. Moreover, we introduce new constraints to ensure parameter efficiency during inference, as well as a fine-grained parameter generator that creates task-specific sub-networks, keeping the number of trainable parameters constant over time while accommodating the evolving complexity of tasks. Extensive experimental results show that the proposed approach achieves state-of-the-art results on several benchmark CL datasets while maintaining low parameter counts.
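
To make the diversity-driven capacity allocation concrete, below is a minimal, hypothetical sketch. The hypergraph representation, the novelty-based diversity score, and the proportional-growth rule are all illustrative assumptions inferred from the abstract; the paper's actual method, names, and scoring are not specified here.

```python
# Hypothetical sketch of the idea described in the abstract: a knowledge
# hypergraph links tasks (hyperedges) to shared latent knowledge
# components (nodes). A new task's diversity is estimated as the share
# of its components not yet covered by earlier tasks, and extra model
# capacity is allocated in proportion to that diversity. All names and
# the scoring rule are assumptions, not the paper's method.

class KnowledgeHypergraph:
    def __init__(self):
        self.edges = {}       # task_id -> set of latent component ids
        self.covered = set()  # union of components seen across all tasks

    def diversity(self, components):
        """Fraction of a task's components that no earlier task covers."""
        components = set(components)
        if not components:
            return 0.0
        novel = components - self.covered
        return len(novel) / len(components)

    def add_task(self, task_id, components):
        self.edges[task_id] = set(components)
        self.covered |= set(components)


def capacity_increase(base_units, diversity, max_growth=1.0):
    """Expand the weight space proportionally to measured task diversity."""
    return int(base_units * max_growth * diversity)


if __name__ == "__main__":
    hg = KnowledgeHypergraph()
    hg.add_task("t1", {"edges", "textures"})
    hg.add_task("t2", {"textures", "shapes"})
    d = hg.diversity({"shapes", "depth", "motion"})  # 2 of 3 are novel
    print(f"diversity={d:.2f}, extra units={capacity_increase(128, d)}")
```

Under this toy rule, a task whose latent components are fully covered by prior tasks triggers no growth, while a fully novel task triggers the maximum expansion, which matches the abstract's goal of growing only by the capacity each new task actually needs.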
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6070