Keywords: Multilingualism, Cultural Adaptability, Reasoning, Large Language Model
Abstract: Languages encode distinct abstractions and inductive priors, yet most large language models (LLMs) overlook this diversity by reasoning in a single dominant language. In this work, we introduce x1, a family of reasoning models that can adaptively reason in an advantageous language on a per-instance basis. To isolate the effect of reasoning-language choice, x1 is constructed without expanding the model's knowledge boundaries and is trained by contrasting linguistically distinct reasoning trajectories for the same input. Our extensive experiments demonstrate the benefits of adaptive multilingual reasoning on both multilingual mathematical reasoning and culturally grounded tasks. Moreover, our results challenge a simplistic view of scaling laws: while scaling reduces cross-lingual disparities in procedural domains such as mathematical reasoning, it does not eliminate the advantages of culture-associated languages on culturally grounded tasks; indeed, we show empirically that reasoning in such languages enables more efficient and accurate recall of cultural knowledge. Overall, our findings establish language choice as a functional component of reasoning, with implications for building more generalist and globally competent reasoning models.
Paper Type: Long
Research Area: Multilinguality and Language Diversity
Research Area Keywords: Multilingualism and Cross-Lingual NLP
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: Arabic, Bengali, Chinese, Danish, Dutch, English, Finnish, French, German, Greek, Hindi, Indonesian, Irish, Italian, Japanese, Korean, Malay, Maori, Norwegian, Polish, Portuguese, Russian, Scottish Gaelic, Spanish, Swahili, Swedish, Tagalog, Thai
Submission Number: 2671