Recovered in Translation: Efficient Pipeline for Automated Translation of Benchmarks and Datasets

ACL ARR 2026 January Submission10754 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: multilinguality, machine translation, multilingual benchmarks, large language models
Abstract: The reliability of multilingual Large Language Model (LLM) evaluation is currently compromised by the inconsistent quality of translated benchmarks. Existing resources often suffer from semantic drift and context loss, which can lead to misleading performance metrics. In this work, we present a fully automated framework designed to address these challenges by enabling scalable, high-quality translation of datasets and benchmarks. We demonstrate that adapting test-time compute scaling strategies, specifically Universal Self-Improvement (USI) and our proposed Translation Ranking (T-RANK), allows for significantly higher quality outputs compared to traditional pipelines. By effectively applying these methods, our framework ensures that benchmarks preserve their original task structure and linguistic nuances during localization. We apply this approach to translate popular benchmarks and datasets into eight Eastern and Southern European languages. Evaluations using both reference-based metrics and LLM-as-a-judge show that our translations surpass existing resources, resulting in more accurate downstream model assessment. We release both the framework and the improved benchmarks to facilitate robust and reproducible multilingual AI development.
Paper Type: Long
Research Area: Multilinguality and Language Diversity
Research Area Keywords: multilingualism, multilingual benchmarks, multilingual evaluation, less-resourced languages
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency, Data resources, Data analysis
Languages Studied: Ukrainian, Slovak, Romanian, Estonian, Lithuanian, Bulgarian, Turkish, Greek
Submission Number: 10754