Abstract: This paper explores the challenges of test-time scaling of large language models (LLMs) with respect to both data and inference efficiency. Motivated by pilot studies highlighting the diversity of multilingual reasoning, we introduce a novel approach, $L^2$ (multilingual unification learning), together with a decoding intervention strategy for further investigation.
The basic idea behind $L^2$ is that the reasoning process varies across languages, and these variations may be mutually beneficial, enhancing both model performance and efficiency.
Specifically, we consider two types of multilingual data: entire long chain-of-thought annotations written in different languages, and step-wise mixtures of languages within a single chain.
By further tuning on such data, we show that even small amounts of data can significantly improve reasoning capabilities. Our findings suggest that multilingual learning reduces both the required training data and the number of inference tokens while maintaining comparable performance. Furthermore, $L^2$ is orthogonal to other data-efficient methods, and we therefore also emphasize the importance of diverse data selection. The $L^2$ method offers a promising solution to the challenges of data collection and test-time compute efficiency in LLMs.
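As a concrete illustration of the two multilingual data types described above, the sketch below builds both formats from a single chain-of-thought example. The function names, data layout, and the `translate` callable are hypothetical assumptions for illustration, not the paper's actual data pipeline.

```python
# Minimal sketch of the two multilingual data formats, under assumed names.
import random
from typing import Callable

def build_full_language_cot(question: str, cot_steps: list[str], answer: str,
                            translate: Callable[[str, str], str], lang: str) -> dict:
    """Type 1: the entire long chain-of-thought rendered in one target language."""
    translated_steps = [translate(step, lang) for step in cot_steps]
    return {"question": question,
            "reasoning": "\n".join(translated_steps),
            "answer": answer,
            "language": lang}

def build_stepwise_mixture_cot(question: str, cot_steps: list[str], answer: str,
                               translate: Callable[[str, str], str],
                               languages: list[str], seed: int = 0) -> dict:
    """Type 2: each reasoning step is written in a (possibly different) language."""
    rng = random.Random(seed)
    mixed_steps, step_langs = [], []
    for step in cot_steps:
        lang = rng.choice(languages)          # pick a language per step
        mixed_steps.append(translate(step, lang))
        step_langs.append(lang)
    return {"question": question,
            "reasoning": "\n".join(mixed_steps),
            "answer": answer,
            "languages": step_langs}
```

A tuning corpus would then mix examples of both formats; how the paper balances the two types and selects languages per step is not specified in the abstract.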
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: Language Modeling; Multilingualism and Cross-Lingual NLP
Contribution Types: Approaches to low-resource settings, Approaches low compute settings-efficiency, Data resources
Languages Studied: Chinese; Japanese; Korean; English; French; German; Russian; Arabic; Hebrew
Submission Number: 7478