Keywords: multilingual, reasoning, large language model, post-training
TL;DR: We identify Cross-lingual Collapse, a systematic drift in which the chain-of-thought (CoT) of a multilingual language model reverts to its dominant pre-training language even when the prompt is expressed in a different language.
Abstract: Reinforcement learning with verifiable rewards (RLVR) has been instrumental in eliciting strong reasoning capabilities from large language models (LLMs) via long chains of thought (CoT). During RLVR training, we identify an empirical phenomenon—a systematic drift whereby a multilingual model’s CoT reverts to its dominant pre-training language (e.g., English) even when prompted in another language—which we term Cross-lingual Collapse. Because the long-CoT regime magnifies exposure to linguistic priors, the underlying trade-off between maximizing reasoning depth and preserving target-language fidelity has remained under-characterized. To examine this trade-off, we train LLMs with Group Relative Policy Optimization (GRPO) on translated versions of math datasets widely used to elicit long-CoT reasoning. Throughout training, we track both task accuracy and the language consistency of reasoning chains. Our experiments yield three findings: (i) under RLVR, CoT in LLMs systematically drifts toward the dominant pre-training language as reasoning performance rises; (ii) English-centric priors, long-CoT GRPO optimization, task difficulty, and high-entropy decoding jointly amplify this drift, and the pattern persists beyond mathematics; and (iii) interventions that favor target-language traces—via a language-consistency reward, decoding-time controls, or more balanced backbones—mitigate collapse but reveal a persistent performance–fidelity trade-off.
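To make the language-consistency intervention concrete, below is a minimal sketch of a reward that adds a target-language bonus on top of a verifiable correctness reward, of the kind a GRPO setup could use. The script-ratio heuristic, the helper names (target_script_ratio, reward), the Hangul example, and the weighting lam are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch (assumed, not the authors' code) of a language-consistency
# reward combined with a verifiable correctness reward for RLVR/GRPO.
import unicodedata


def target_script_ratio(text: str, script_prefix: str = "HANGUL") -> float:
    """Fraction of alphabetic characters whose Unicode name starts with the
    target script prefix (e.g., 'HANGUL' for Korean); a crude language-ID proxy."""
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return 0.0
    in_script = sum(
        1 for ch in letters
        if unicodedata.name(ch, "").startswith(script_prefix)
    )
    return in_script / len(letters)


def reward(cot: str, answer_correct: bool,
           lam: float = 0.2, script_prefix: str = "HANGUL") -> float:
    """Verifiable correctness reward plus a small bonus (hypothetical weight
    `lam`) for keeping the chain of thought in the target language."""
    consistency = target_script_ratio(cot, script_prefix)
    return float(answer_correct) + lam * consistency
```

A more faithful implementation would likely use a proper language-identification model over the reasoning trace rather than a script heuristic; the sketch only illustrates how fidelity can be folded into the scalar reward that GRPO optimizes.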
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 17094