Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates

17 Sept 2025 (modified: 04 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Cross-lingual Transfer, Language Adaptation, Continual Pre-training, Multilinguality, Catastrophic Forgetting, Instruct LLM, LLM
TL;DR: SSU prevents catastrophic forgetting when adapting instruct LLMs to a target language by proactively freezing core parameters before training on unlabeled data, while achieving strong target performance.
Abstract: Expanding the linguistic diversity of instruct large language models (LLMs) is crucial for global accessibility but is often hindered by the reliance on costly, specialized labeled data in the target language and by catastrophic forgetting during adaptation. We tackle this challenge under a realistic, low-resource constraint: adapting instruct LLMs using only unlabeled target language data. We introduce **S**ource-**S**hielded **U**pdates (**SSU**), a selective parameter update strategy that proactively preserves source knowledge. Using a small set of source data and a parameter importance scoring method, SSU identifies parameters critical to maintaining source abilities. It then applies a column-wise freezing strategy to protect these parameters before adaptation. Experiments across five typologically diverse languages and 7B and 13B models demonstrate that SSU successfully mitigates catastrophic forgetting. It reduces average performance degradation on monolingual source tasks to just 3.4% (7B) and 2.8% (13B), in stark contrast to the 20.2% and 22.3% drops from full fine-tuning. SSU also achieves target-language performance highly competitive with full fine-tuning, outperforming it on all benchmarks for 7B models and on the majority for 13B models.
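
The abstract describes the SSU pipeline at a high level (score parameter importance on a small source set, then freeze the most important columns before target-language training) but gives no implementation details. Below is a minimal PyTorch-style sketch of how such a column-wise shielding step could look; the function names, the squared-gradient importance proxy, and the `freeze_fraction` hyperparameter are all illustrative assumptions, not the paper's actual method.

```python
import torch
import torch.nn as nn

def compute_column_importance(model, source_batches, loss_fn):
    """Accumulate a per-column importance score for every linear layer,
    using squared gradients on a few source-language batches.
    (Illustrative proxy; the paper's scoring method may differ.)"""
    importance = {}
    model.train()
    for inputs, targets in source_batches:
        model.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        for name, module in model.named_modules():
            if isinstance(module, nn.Linear) and module.weight.grad is not None:
                # Sum squared gradients over rows -> one score per weight column.
                score = module.weight.grad.pow(2).sum(dim=0)
                importance[name] = importance.get(name, 0) + score
    model.zero_grad()
    return importance

def install_column_freezing(model, importance, freeze_fraction=0.5):
    """Mask gradients of the highest-importance columns in each linear layer,
    shielding source-critical parameters during target-language adaptation."""
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and name in importance:
            k = int(freeze_fraction * importance[name].numel())
            frozen_cols = torch.topk(importance[name], k).indices
            mask = torch.ones_like(module.weight)
            mask[:, frozen_cols] = 0.0

            def hook(grad, mask=mask):
                # Frozen columns receive zero gradient, so they never update.
                return grad * mask

            module.weight.register_hook(hook)
```

In use, one would call `compute_column_importance` on a small held-out source set before continual pre-training on unlabeled target-language data, then `install_column_freezing` once; the registered hooks keep the shielded columns fixed for the rest of adaptation.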
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 9371