Do We Really Need Parameter-Isolation to Protect Task Knowledge?

26 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: continual learning
Abstract: Because tasks arrive dynamically, how a deep network can move from a static structure trained on previous tasks to a dynamic structure that adapts to continuously changing inputs has attracted significant attention: the network must learn new task knowledge while avoiding catastrophic forgetting of previously acquired knowledge. Continual learning addresses catastrophic forgetting primarily by constraining or isolating parameter changes to protect the knowledge of prior tasks. However, while existing methods protect old-task knowledge well, they often diminish the ability to learn new tasks. Exploiting the sparsity of activation channels in deep networks, we introduce a novel misaligned fusion method for continual learning that adaptively allocates available pathways to protect crucial knowledge from previous tasks, replacing traditional isolation techniques. Moreover, when a new task arrives, the network can be trained with all of its parameters, enabling a more thorough learning of the new task. We compare our method against existing approaches on deep network architectures of various sizes and on popular benchmark datasets; the results demonstrate its effectiveness and superiority.
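
The abstract does not include an implementation, but the general recipe it describes, measuring which activation channels old tasks actually use, training all parameters on the new task, then fusing new-task weights only into the under-used channels rather than freezing parameters outright, can be sketched as follows. This is a minimal illustration under assumed details: the function names (`channel_importance`, `fuse_by_importance`), the mean-absolute-activation proxy for channel importance, and the `keep_ratio` threshold are all hypothetical choices, not the paper's method.

```python
# Hypothetical sketch of channel-sparsity-based knowledge protection,
# loosely following the abstract's "misaligned fusion" idea. Not the
# authors' code; names and procedure are illustrative assumptions.
import torch
import torch.nn as nn


@torch.no_grad()
def channel_importance(layer: nn.Conv2d, model: nn.Module,
                       old_task_loader, device="cpu") -> torch.Tensor:
    """Estimate per-output-channel importance of `layer` as the mean
    absolute activation over a sample of old-task data (an assumed
    proxy for the activation-channel sparsity the abstract mentions)."""
    per_batch = []
    hook = layer.register_forward_hook(
        # average |activation| over batch and spatial dims -> [out_channels]
        lambda mod, inp, out: per_batch.append(out.abs().mean(dim=(0, 2, 3)))
    )
    for x, _ in old_task_loader:
        model(x.to(device))
    hook.remove()
    return torch.stack(per_batch).mean(dim=0)  # shape: [out_channels]


@torch.no_grad()
def fuse_by_importance(old_weight: torch.Tensor, new_weight: torch.Tensor,
                       importance: torch.Tensor, keep_ratio: float = 0.5):
    """After full-parameter training on the new task, restore old-task
    weights only on the most important channels; the remaining sparse
    channels ("available pathways") keep the newly learned weights."""
    k = int(keep_ratio * importance.numel())
    protected = importance.topk(k).indices      # channels crucial to old tasks
    fused = new_weight.clone()
    fused[protected] = old_weight[protected]    # protect old-task channels
    return fused
```

In use, one would snapshot `old_weight = layer.weight.clone()` before training on the new task, train the whole network unconstrained, then write back `layer.weight.copy_(fuse_by_importance(old_weight, layer.weight, imp))` per layer. The design point the abstract emphasizes is that protection happens at fusion time, after unrestricted training, rather than by isolating parameters during training.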
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7383