MT-RewardTree: A Comprehensive Framework for Advancing LLM-Based Machine Translation via Reward Modeling

ACL ARR 2025 February Submission 6471 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Process reward models (PRMs) have shown success in complex reasoning tasks for large language models (LLMs). However, their application to machine translation (MT) remains underexplored due to the lack of systematic methodologies and evaluation benchmarks. To address this gap, we introduce MT-RewardTree, a comprehensive framework for constructing, evaluating, and deploying process reward models in MT. We propose a novel method for automatically generating high-quality token-level preference pairs using approximate Monte Carlo Tree Search (MCTS), mitigating the prohibitive cost of human annotation. Our framework establishes the first MT-specific reward model benchmark and provides a systematic comparison of different reward modeling architectures, revealing that token-level supervision effectively captures fine-grained preferences. Experimental results demonstrate that our MT-PRM-Qwen-2.5-3B achieves state-of-the-art performance in both token-level and sequence-level evaluation given the same input prefix. Furthermore, we showcase practical applications where PRMs enable test-time alignment for LLMs without additional training and significantly improve performance in hypothesis ensembling. Our work provides valuable insights into the role of reward models in MT research. Our code and data will be publicly available.
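To make the abstract's core idea concrete, the sketch below illustrates one way token-level preference pairs could be constructed from rollouts: given a partial translation prefix, two candidate next tokens are each expanded into full translations, their average estimated quality serves as a value estimate, and the higher-valued token is labeled "chosen". This is a simplified stand-in for the paper's approximate MCTS, not the authors' implementation; the model name and the `quality_score` helper (a reference-free QE metric such as a COMET-QE-style model) are illustrative assumptions.

```python
# Hypothetical sketch: rollout-based token-level preference pair construction.
# Simplified stand-in for approximate MCTS; names below are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-3B-Instruct"  # illustrative policy model, not necessarily the paper's
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)


def quality_score(source: str, translation: str) -> float:
    """Placeholder for a reference-free MT quality estimator (e.g., COMET-QE-style)."""
    raise NotImplementedError


def estimate_token_value(prompt: str, prefix: str, candidate_token: str,
                         source: str, n_rollouts: int = 4) -> float:
    """Estimate the value of appending `candidate_token` to the prefix by
    sampling full continuations and averaging their estimated quality."""
    text = prompt + prefix + candidate_token
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.8,
        max_new_tokens=128,
        num_return_sequences=n_rollouts,
        pad_token_id=tokenizer.eos_token_id,
    )
    prompt_len = inputs["input_ids"].shape[1]
    scores = []
    for seq in outputs:
        completion = tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)
        scores.append(quality_score(source, prefix + candidate_token + completion))
    return sum(scores) / len(scores)


def make_preference_pair(prompt: str, prefix: str, token_a: str, token_b: str, source: str):
    """Label the candidate token with the higher rollout value as 'chosen'."""
    value_a = estimate_token_value(prompt, prefix, token_a, source)
    value_b = estimate_token_value(prompt, prefix, token_b, source)
    chosen, rejected = (token_a, token_b) if value_a >= value_b else (token_b, token_a)
    return {"prefix": prefix, "chosen": chosen, "rejected": rejected}
```

In this reading, the resulting (prefix, chosen, rejected) triples would supply the token-level supervision on which a process reward model is trained; the full framework's tree search, pruning, and scoring details are described in the paper itself.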
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: automatic evaluation, efficient inference for MT, modeling
Languages Studied: English, German, Chinese, Russian
Submission Number: 6471