Keywords: machine-translated evaluation benchmarks; translation quality estimation; multilingual evaluation; European languages; LLM performance
Abstract: Machine-translated benchmarks are widely used to assess the multilingual capabilities of large language models (LLMs), yet translation errors in these benchmarks remain underexplored, raising concerns about the reliability and comparability of multilingual evaluation. We address two practical gaps: (i) how well LLM-produced MQM-style error spans match expert human span annotations on real benchmark translations, and (ii) how strongly translation errors (as opposed to source-side issues in the English original) explain accuracy drops on translated benchmarks. We find that span agreement is non-trivial on naturally occurring benchmark translations, and that target-side translation errors are consistently associated with measurable percentage-point drops in accuracy on the translated benchmarks, even after controlling for English correctness and source-side anomalies.
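To make gap (i) concrete, the minimal sketch below shows one common way to score agreement between LLM-predicted and expert MQM error spans: character-level overlap F1. The span representation, the overlap-based definition, and all names here are illustrative assumptions, not the paper's actual metric or data.

```python
# Hypothetical sketch: character-level span-overlap F1 between
# LLM-predicted and expert human MQM error spans. The metric definition
# and the example spans are assumptions for illustration only.
from typing import List, Set, Tuple

Span = Tuple[int, int]  # (start, end) character offsets, end-exclusive

def to_charset(spans: List[Span]) -> Set[int]:
    """Expand spans into the set of character positions they cover."""
    chars: Set[int] = set()
    for start, end in spans:
        chars.update(range(start, end))
    return chars

def span_f1(predicted: List[Span], gold: List[Span]) -> float:
    """Overlap F1 between predicted and gold error spans."""
    pred_chars, gold_chars = to_charset(predicted), to_charset(gold)
    if not pred_chars and not gold_chars:
        return 1.0  # neither annotator flags an error: perfect agreement
    overlap = len(pred_chars & gold_chars)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_chars)
    recall = overlap / len(gold_chars)
    return 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # The LLM flags characters 10-25 as an error; the expert marked 12-30.
    print(f"span F1 = {span_f1([(10, 25)], [(12, 30)]):.3f}")
```

Exact-match or partial-credit span matching would be alternative choices; character-level overlap is used here only because it is tolerant of small boundary disagreements between annotators.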
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: automatic evaluation, human evaluation, multilingual MT, benchmarking, automatic creation and evaluation of language resources, NLP datasets, automatic evaluation of datasets, evaluation methodologies, evaluation
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: Bulgarian, Czech, Danish, Dutch, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Latvian, Lithuanian, Polish, Portuguese, Romanian, Slovak, Slovenian, Spanish, Swedish, English
Submission Number: 564