Remedy-R: Generative Reasoning for Machine Translation Evaluation without Error Annotations

ACL ARR 2026 January Submission 995 Authors

26 Dec 2025 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Machine Translation, Machine Translation Evaluation, Automatic Evaluation, Quality Estimation
Abstract: Over the years, scalar MT metrics have advanced rapidly on benchmarks. Yet they remain black boxes, offering little insight into their decisions and sometimes degrading under out-of-distribution inputs. We introduce Remedy-R, a reasoning-driven generative MT metric trained with reinforcement learning from pairwise translation preferences, without requiring error-span annotations or distillation from closed LLMs. Unlike scalar MT metrics that output only translation quality scores, Remedy-R produces step-by-step analyses of accuracy, fluency, and completeness, enabling more interpretable assessments. With only 60K pairwise training samples across two language pairs, Remedy-R remains competitive with top scalar metrics and GPT-4-based judges on the WMT22–24 metrics benchmarks, generalizes to other languages, and shows strong robustness on OOD stress tests. Moreover, Remedy-R generates self-reflective feedback that can be reused for translation refinement. We validate the faithfulness of this feedback with GPT-4 and show that a simple evaluate–revise pipeline leveraging Remedy-R’s analyses consistently improves translation quality across diverse models without any task-specific tuning.
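As a rough illustration of the evaluate–revise pipeline the abstract describes, the sketch below wires a critique step into a revision step. This is a minimal sketch under stated assumptions: the function names, prompt wording, and the `evaluate`/`translate` callables are illustrative placeholders, not the paper's actual prompts or interfaces.

```python
# Minimal sketch of an evaluate-revise round as described in the abstract.
# `evaluate` and `translate` stand in for any LLM call (e.g., a local
# Remedy-R checkpoint and an MT system); both names and the prompt text
# below are assumptions for illustration only.
from typing import Callable


def evaluate_revise(source: str,
                    hypothesis: str,
                    evaluate: Callable[[str], str],
                    translate: Callable[[str], str]) -> str:
    """One round: critique the draft translation, then revise it using the critique."""
    # Step 1: ask the evaluator for a step-by-step analysis of
    # accuracy, fluency, and completeness.
    critique = evaluate(
        f"Assess this translation for accuracy, fluency, and completeness.\n"
        f"Source: {source}\n"
        f"Translation: {hypothesis}\n"
        f"Give a step-by-step analysis of any errors."
    )
    # Step 2: feed the critique back to the translator as revision guidance.
    revised = translate(
        f"Source: {source}\n"
        f"Draft translation: {hypothesis}\n"
        f"Reviewer feedback: {critique}\n"
        f"Produce an improved translation that addresses the feedback."
    )
    return revised
```

In this framing the evaluator's generated analysis is reused directly as the revision prompt, which matches the abstract's claim that no task-specific tuning is needed; the loop could also be iterated until the critique reports no remaining errors.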
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: machine translation, automatic evaluation
Contribution Types: NLP engineering experiment
Languages Studied: English, German, Chinese, Russian, Japanese, Spanish, Czech, Ukrainian, Hindi
Submission Number: 995