Rethinking Alignment in Cross-Lingual Knowledge Transfer

ICLR 2026 Conference Submission 20960 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Cross-Lingual Transfer; Cross-lingual Alignment; Gradient Alignment; In-Context Learning
Abstract: Despite LLMs' strong performance on multilingual tasks, they usually exhibit a performance gap on the same task between dominant languages (e.g., English) and non-dominant languages (e.g., Turkish). In this paper, we analyze LLMs' capacity to transfer task knowledge learned in the dominant language to a non-dominant language. We first formulate the cross-lingual transfer problem as a gradient alignment problem and then connect it to the representation alignment problem. We show that pre-trained LLMs with decent representation alignment ability can easily transfer knowledge from the dominant language through simple fine-tuning, whereas others require carefully designed training strategies. For the latter, we propose a cross-lingual in-context prompt tuning (CL-ICP) model to enhance gradient alignment, which utilizes in-context attentions to generalize to unseen data. In addition, we apply a representation shift to enhance representation alignment between demonstration and target samples. Experiments show that CL-ICP improves cross-lingual transfer in both high-resource and low-resource scenarios.
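The abstract frames cross-lingual transfer as a gradient alignment problem but does not spell out the formulation here. As a minimal sketch of one common way to quantify gradient alignment (an assumption on our part, not the authors' method), the snippet below measures the cosine similarity between task-loss gradients computed on a dominant-language batch and a non-dominant-language batch; `model`, `loss_fn`, `batch_src`, and `batch_tgt` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def gradient_alignment(model, loss_fn, batch_src, batch_tgt):
    """Cosine similarity between gradients of the same task loss on a
    dominant-language batch (src) and a non-dominant-language batch (tgt).
    Values close to 1 suggest that fine-tuning on the dominant language
    also reduces the loss on the non-dominant language."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Gradients of the task loss with respect to the trainable parameters,
    # computed separately for each language.
    grads_src = torch.autograd.grad(loss_fn(model, batch_src), params)
    grads_tgt = torch.autograd.grad(loss_fn(model, batch_tgt), params)

    # Flatten and concatenate into single vectors before comparing.
    g_src = torch.cat([g.flatten() for g in grads_src])
    g_tgt = torch.cat([g.flatten() for g in grads_tgt])
    return F.cosine_similarity(g_src, g_tgt, dim=0)
```

Under this reading, high gradient alignment would explain why simple fine-tuning on the dominant language suffices for some models, while low alignment motivates strategies such as the proposed CL-ICP.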
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 20960