Keywords: Machine Translation, Reward Modeling, Reasoning, Large Language Models
Abstract: While Group Relative Policy Optimization (GRPO) offers a powerful framework for LLM post-training, its effectiveness in open-ended domains such as Machine Translation hinges on accurate intra-group ranking. We identify that standard Scalar Quality Metrics (SQM) fall short in this context: by evaluating candidates in isolation, they lack the comparative context necessary to distinguish fine-grained linguistic nuances. To address this, we introduce the Group Quality Metric (GQM) paradigm and instantiate it via the Group Relative Reward Model (GRRM). Unlike traditional independent scorers, GRRM processes the entire candidate group jointly, leveraging comparative analysis to rigorously resolve relative quality with adaptive granularity. Empirical evaluations confirm that GRRM achieves ranking accuracy competitive with all baselines. Building on this foundation, we integrate GRRM into the GRPO training loop to optimize the translation policy. Experimental results demonstrate that our framework not only improves general translation quality but also unlocks reasoning capabilities comparable to those of state-of-the-art reasoning models.
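The SQM/GQM distinction and GRPO's group-relative normalization described in the abstract can be illustrated with a minimal sketch. All function names here are illustrative stand-ins, not the paper's actual API: `sqm_scores` scores candidates independently, `gqm_ranks` uses a toy pairwise comparison in place of the learned GRRM, and `grpo_advantages` applies the standard within-group reward normalization used by GRPO.

```python
from statistics import mean, pstdev

def sqm_scores(candidates, score_fn):
    # SQM: each candidate is scored in isolation, with no
    # comparative context against its group peers.
    return [score_fn(c) for c in candidates]

def gqm_ranks(candidates, compare_fn):
    # GQM (toy stand-in for GRRM): rank the whole group jointly by
    # counting pairwise wins, so every score reflects the group context.
    return [sum(compare_fn(a, b) for j, b in enumerate(candidates) if j != i)
            for i, a in enumerate(candidates)]

def grpo_advantages(rewards):
    # GRPO's group-relative advantage: normalize each reward by the
    # group mean and standard deviation (guarding against zero spread).
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma or 1.0) for r in rewards]
```

For example, with group rewards `[1.0, 2.0, 3.0]`, `grpo_advantages` centers the group at zero, so above-average candidates get positive advantages and below-average ones negative, which is what makes accurate intra-group ranking critical.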
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: Machine Translation, Multilingualism and Cross-Lingual NLP
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English, Chinese, Dutch, French, German, Italian, Japanese, Portuguese, Russian, Spanish
Submission Number: 9734