Measuring Catastrophic Forgetting in Cross-Lingual Classification: Transfer Paradigms and Tuning Strategies
Abstract: Cross-lingual transfer leverages knowledge from a resource-rich source language, commonly English, to enhance performance in less-resourced target languages. Two widely used strategies are Cross-Lingual Validation (CLV), which involves training on the source language and validating on the target language, and Intermediate Training (IT), where models are first fine-tuned on the source language and then further trained on the target language. While both strategies have been studied, their effects on encoder-based models for classification tasks remain underexplored. In this paper, we systematically compare these strategies across six multilingual classification tasks, evaluating downstream performance and catastrophic forgetting in both zero-shot and full-shot scenarios. Additionally, we contrast parameter-efficient adapter methods with full-parameter fine-tuning. Our results show that IT generally performs better in the target language, whereas CLV more effectively preserves source-language knowledge across multiple cross-lingual transfers. These findings underscore the trade-offs between optimizing target performance and mitigating catastrophic forgetting.
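To make the distinction between the two strategies concrete, the sketch below outlines CLV and IT with HuggingFace Transformers. It is a minimal illustration, not the paper's exact setup: the dataset variables (en_train, en_val, tgt_train, tgt_val), the xlm-roberta-base backbone, the label count, and all hyper-parameters are assumptions chosen for brevity.

```python
# Minimal sketch of the two cross-lingual transfer strategies on a
# multilingual encoder. Assumptions: en_train, en_val, tgt_train, tgt_val
# are pre-tokenised HuggingFace Datasets for the same classification task.
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

def new_model():
    # XLM-R base as a representative multilingual encoder;
    # num_labels=3 is an illustrative placeholder.
    return AutoModelForSequenceClassification.from_pretrained(
        "xlm-roberta-base", num_labels=3)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=32)

# Cross-Lingual Validation (CLV): train on the source language only;
# checkpoint and hyper-parameter selection use the target-language dev set.
clv_model = new_model()
clv_trainer = Trainer(model=clv_model, args=args,
                      train_dataset=en_train, eval_dataset=tgt_val)
clv_trainer.train()
print("CLV target-dev metrics:", clv_trainer.evaluate())

# Intermediate Training (IT): fine-tune on the source language first,
# then continue fine-tuning the same weights on target-language data.
it_model = new_model()
Trainer(model=it_model, args=args,
        train_dataset=en_train, eval_dataset=en_val).train()    # stage 1: source
it_trainer = Trainer(model=it_model, args=args,
                     train_dataset=tgt_train, eval_dataset=tgt_val)
it_trainer.train()                                              # stage 2: target
print("IT target-dev metrics:", it_trainer.evaluate())
```

The structural difference is where the target-language data enters: CLV sees it only for validation and model selection, whereas IT updates the model's parameters on it in a second stage, which is what exposes IT to catastrophic forgetting of source-language knowledge.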