Align-then-Slide: A complete evaluation framework for Ultra-Long Document-Level Machine Translation

ACL ARR 2026 January Submission7890 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Document-Level Metric, Document-Level Machine Translation, Align-then-Slide
Abstract: Large language models (LLMs) have ushered in a new era for document-level machine translation ($doc$-mt), yet their whole-document outputs challenge existing evaluation methods that assume sentence-by-sentence alignment. We introduce \textit{\textbf{Align-then-Slide}}, a complete evaluation framework for ultra-long $doc$-mt. In the Align stage, we automatically infer sentence-level source–target correspondences and rebuild the target so its sentence count matches the source's, resolving omissions and many-to-one/one-to-many mappings. In the $n$-Chunk Sliding Evaluate stage, we compute averaged metric scores under 1-, 2-, 3-, and 4-chunk settings for multi-granularity assessment. On WMT benchmarks, our rankings achieve a Pearson correlation of 0.929 with expert MQM scores; on a newly curated real-world test set, they again align closely with human judgments. Notably, our method attained SOTA results in all 16 language directions of the segment-level quality-prediction track at WMT2025. When used directly as a reward model for GRPO, it yields translations preferred over a vanilla SFT baseline. These results validate Align-then-Slide as an accurate, robust, and actionable evaluation tool for $doc$-mt systems.
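The $n$-Chunk Sliding Evaluate stage described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the exact chunking scheme, windowing, and underlying metric in the paper may differ, and `metric` here is a placeholder for any sentence-level quality estimator (e.g. a COMET-style scorer).

```python
# Hedged sketch: sliding-window evaluation over 1- to 4-sentence chunks.
# Assumes the Align stage has already produced 1:1 aligned sentence lists.

def sliding_chunk_score(src_sents, tgt_sents, metric, chunk_sizes=(1, 2, 3, 4)):
    """Average a pairwise metric over sliding windows of n sentences,
    then average across the chunk granularities in `chunk_sizes`.

    `metric` is any callable scoring a (source_text, target_text) pair;
    its choice is an assumption of this sketch, not fixed by the paper.
    """
    assert len(src_sents) == len(tgt_sents), "Align stage must equalize counts"
    per_size_means = []
    for n in chunk_sizes:
        scores = []
        # Slide a window of n consecutive aligned sentences over the document.
        for i in range(len(src_sents) - n + 1):
            src_chunk = " ".join(src_sents[i:i + n])
            tgt_chunk = " ".join(tgt_sents[i:i + n])
            scores.append(metric(src_chunk, tgt_chunk))
        if scores:
            per_size_means.append(sum(scores) / len(scores))
    # Final document score: mean over the per-granularity averages.
    return sum(per_size_means) / len(per_size_means)
```

With a constant metric the function trivially returns that constant, which makes the averaging structure easy to check; in practice the multi-granularity average rewards translations that read well both sentence-by-sentence and across local context.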
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: Document-Level Metric, Document-Level Machine Translation, Align-then-Slide
Contribution Types: NLP engineering experiment
Languages Studied: English, French, Japanese, Chinese, Czech, Arabic, Estonian, Italian, Russian, Ukrainian
Submission Number: 7890