Abstract: The advent of large reasoning models, such as OpenAI o1 and DeepSeek R1, has significantly advanced complex reasoning tasks. However, their capabilities in multilingual complex reasoning remain underexplored, with existing efforts largely focused on simpler tasks like MGSM. To address this gap, we introduce \textbf{\mmath}, a benchmark for multilingual complex reasoning spanning 374 high-quality math problems across 10 typologically diverse languages. Using \mmath, we observe that even advanced models like DeepSeek R1 exhibit substantial performance disparities across languages and suffer from a critical \textit{off-target} issue: generating responses in unintended languages. To mitigate this issue, we explore prompting- and training-based strategies, demonstrating that reasoning in English while answering in the target language can simultaneously enhance performance and preserve target-language consistency. Our findings offer new insights and practical strategies for advancing the multilingual reasoning capabilities of large language models. Our code and data are available at \url{https://anonymous.4open.science/r/MMATH}.
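For illustration, a minimal sketch of the "reason in English, answer in the target language" prompting strategy mentioned in the abstract; the exact prompt wording and the `query_model` helper are hypothetical placeholders, not the paper's actual implementation:

```python
# Hypothetical sketch of the prompting strategy from the abstract:
# instruct the model to reason in English but answer in the target language.
# The prompt wording and query_model() are illustrative assumptions only.

def build_prompt(problem: str, target_language: str) -> str:
    """Compose a prompt that separates the reasoning and answer languages."""
    return (
        "Solve the following math problem. "
        "Think step by step in English, then state your final answer "
        f"in {target_language}.\n\n"
        f"Problem: {problem}"
    )


def query_model(prompt: str) -> str:
    """Placeholder for a call to a reasoning model (e.g., DeepSeek R1)."""
    raise NotImplementedError


if __name__ == "__main__":
    # Example usage with a Thai-language problem and a Thai target answer.
    prompt = build_prompt("2 + 3 x 4 เท่ากับเท่าไร", target_language="Thai")
    print(prompt)
```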
Paper Type: Long
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: multilingual benchmarks, mixed language
Contribution Types: Data resources
Languages Studied: English, Chinese, Arabic, Spanish, French, Japanese, Korean, Portuguese, Thai, Vietnamese
Submission Number: 5849