Evaluating Automatic Metrics with Incremental Machine Translation Systems

ACL ARR 2024 June Submission 1355 Authors

14 Jun 2024 (modified: 08 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: We introduce a dataset of commercial machine translations, collected weekly over six years across 12 translation directions. Because commercial providers routinely run human A/B tests, we assume their systems improve over time; this assumption lets us evaluate machine translation (MT) metrics by how often they prefer more recent translations. Our study confirms several previous findings in MT metrics research and demonstrates the dataset's value as a testbed for metric evaluation.
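
If commercial systems do improve over time, a metric's agreement with that trend can be estimated as the fraction of sentence pairs where it prefers the newer translation. The sketch below illustrates one plausible form of this computation; it is not the authors' code, and `metric_score` is a hypothetical placeholder for any segment-level MT metric.

```python
# Minimal sketch (assumptions labeled): estimate how often a metric prefers
# the newer of two commercial translations of the same source sentence.
from typing import Callable, Sequence


def metric_score(source: str, translation: str) -> float:
    """Hypothetical placeholder: plug in any segment-level MT metric here."""
    raise NotImplementedError


def temporal_preference(
    sources: Sequence[str],
    older: Sequence[str],
    newer: Sequence[str],
    score: Callable[[str, str], float] = metric_score,
) -> float:
    """Fraction of changed sentence pairs where the metric scores the
    newer translation higher than the older one."""
    wins, total = 0, 0
    for src, old_hyp, new_hyp in zip(sources, older, newer):
        if old_hyp == new_hyp:
            continue  # unchanged outputs carry no temporal signal
        total += 1
        if score(src, new_hyp) > score(src, old_hyp):
            wins += 1
    return wins / total if total else float("nan")
```

Under the paper's assumption, a higher temporal-preference rate would indicate a metric better aligned with real system improvements.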
Paper Type: Short
Research Area: Machine Translation
Research Area Keywords: automatic evaluation
Languages Studied: English, Italian, Spanish, German, Mandarin Chinese
Submission Number: 1355