Keywords: Machine Learning, Math evaluation, multilingual
TL;DR: This paper debunks a perceived multilingual math performance gap in LLMs, showing it stems from benchmark translation errors and flawed answer extraction rather than model limitations.
Abstract: Most current large language models (LLMs) support a wide variety of languages in addition to English, including high-resource languages (e.g., German, Chinese, and French) as well as low-resource ones (e.g., Swahili and Telugu).
In addition, they have shown impressive capabilities in a range of domains, such as coding, science, and math.
In this short paper, taking math as an example domain, we study the performance of different LLMs across languages.
Experimental results show a non-negligible and consistent gap in model performance across languages.
Interestingly, and somewhat against expectations, the gap exists for both high- and low-resource languages.
We hope that these results influence further research into cross-lingual capability generalization for next-generation LLMs.
If it weren't for the fact that they are false!
By analyzing one of the standard multilingual math benchmarks (MGSM), we determine that several translation errors are present in the data.
Moreover, the lack of standardized answer extraction from LLM outputs also distorts the final results.
We propose a method for automatic quality assurance to address the first issue at scale, and give recommendations to address the second one.
Combining these two approaches, we show that the aforementioned language gap mostly disappears, leading to completely different conclusions from our research.
We additionally release the corrected dataset to the community.
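To illustrate the answer-extraction issue noted above, here is a minimal sketch of a locale-aware numeric answer extractor. This is a hypothetical example, not the procedure proposed in the paper; the handling of decimal-comma languages is an assumption about the kind of formatting mismatch that can corrupt multilingual scoring.

import re

def extract_final_answer(text, decimal_comma=False):
    """Return the last number in an LLM response as a normalized decimal string.
    Set decimal_comma=True for languages that write 1.250,5 instead of 1,250.5."""
    # Grab every number-like span, allowing thousands and decimal separators.
    matches = re.findall(r"-?\d[\d.,]*", text)
    if not matches:
        return None
    raw = matches[-1].rstrip(".,")  # drop sentence-final punctuation
    if decimal_comma:
        raw = raw.replace(".", "").replace(",", ".")  # 1.250,5 -> 1250.5
    else:
        raw = raw.replace(",", "")                    # 1,250.5 -> 1250.5
    return raw

# The same answer, formatted with English vs. German number conventions:
assert extract_final_answer("The answer is 1,250.5.") == "1250.5"
assert extract_final_answer("Die Antwort ist 1.250,5.", decimal_comma=True) == "1250.5"

Without some such locale-aware normalization, an otherwise correct answer in a decimal-comma language can be scored as wrong, inflating the apparent cross-lingual gap.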
Primary Area: datasets and benchmarks
Submission Number: 18764