Has Machine Translation Evaluation Achieved Human Parity? The Human Reference and the Limits of Progress

ACL ARR 2025 February Submission 4459 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · Readers: Everyone · License: CC BY 4.0
Abstract: In Machine Translation (MT) Evaluation, metric performance is assessed based on agreement with human judgments. In recent years, automatic metrics have demonstrated increasingly high levels of agreement with humans. To gain a clearer understanding of metric performance and establish an upper bound, we incorporate human baselines into MT Meta-Evaluation, that is, the assessment of MT metrics' capabilities. Our results show that human annotators are not consistently superior to automatic metrics, with state-of-the-art metrics often ranking on par with or higher than human baselines. Although these findings suggest human parity, we discuss several reasons for caution. Finally, we explore the broader implications of our results for the research field, asking: Can we still reliably measure improvements in MT Evaluation? With this work, we aim to shed light on the limits of our ability to measure progress in the field, fostering discussion on an issue that we believe is crucial to the entire MT Evaluation community.
Paper Type: Short
Research Area: Machine Translation
Research Area Keywords: Machine Translation, Resources and Evaluation
Contribution Types: Position papers
Languages Studied: English, German, Chinese, Spanish
Submission Number: 4459