RECALL: REpresentation-aligned Catastrophic-forgetting ALLeviation via Hierarchical Model Merging

ACL ARR 2025 May Submission 7639 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: We show that internal representations in large language models (LLMs) serve as reliable proxies for learned knowledge, and propose **RECALL**, a novel representation-aware model merging framework for continual learning without access to historical data. RECALL computes inter-model similarity from layer-wise hidden representations over clustered typical samples, and performs adaptive, hierarchical parameter fusion to align knowledge across models. This design preserves domain-general features in shallow layers while allowing task-specific adaptation in deeper layers. Unlike prior methods that require task labels or incur performance trade-offs, RECALL achieves seamless multi-domain integration and strong resistance to catastrophic forgetting. Extensive experiments across five NLP tasks and multiple continual learning scenarios show that RECALL outperforms baselines in both knowledge retention and generalization, providing a scalable and data-free solution for evolving LLMs.
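
Below is a minimal sketch of the representation-aligned merging idea described in the abstract, assuming two models expose per-layer hidden states and parameters as NumPy arrays. All function names, shapes, and the similarity-to-weight mapping are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: representation-similarity-weighted, layer-wise model merging.
# Assumes hidden states and parameters are already extracted as NumPy arrays;
# `typical_samples`, `layer_similarity`, and `merge_models` are hypothetical helpers.
import numpy as np
from sklearn.cluster import KMeans  # used to select "typical" samples per cluster


def typical_samples(embeddings: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    """Cluster candidate sample embeddings and keep the one closest to each centroid."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(embeddings)
    idx = [int(np.argmin(np.linalg.norm(embeddings - c, axis=1)))
           for c in km.cluster_centers_]
    return embeddings[idx]


def layer_similarity(h_old: np.ndarray, h_new: np.ndarray) -> float:
    """Mean cosine similarity between two models' hidden states at one layer."""
    num = np.sum(h_old * h_new, axis=-1)
    den = np.linalg.norm(h_old, axis=-1) * np.linalg.norm(h_new, axis=-1) + 1e-8
    return float(np.mean(num / den))


def merge_models(params_old: dict, params_new: dict,
                 hidden_old: dict, hidden_new: dict) -> dict:
    """Fuse parameters layer by layer, weighting each layer by representation similarity.

    High similarity (typically shallow, domain-general layers) pulls the fusion
    toward balanced averaging; low similarity (deeper, task-specific layers)
    keeps more of the newly adapted model.
    """
    merged = {}
    for layer in params_old:
        s = layer_similarity(hidden_old[layer], hidden_new[layer])  # in [-1, 1]
        alpha = 0.25 + 0.25 * (1.0 + s)  # map similarity to a weight in [0.25, 0.75]
        merged[layer] = alpha * params_old[layer] + (1.0 - alpha) * params_new[layer]
    return merged
```

The per-layer weighting is the key design point this sketch tries to convey: because similarity is computed from hidden representations on clustered typical samples rather than from task labels, the fusion can be adapted layer by layer without any access to historical training data.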
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Continual Learning, Catastrophic Forgetting, Transfer Learning, Model Merging, Knowledge Fusion
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 7639