Abstract: Automatic Post-Editing (APE) systems are prone to over-correcting Machine Translation (MT) outputs. While a Word-level Quality Estimation (QE) system can provide a way to curtail this over-correction, existing APE and QE combination strategies have not yielded a significant performance gain thus far. This paper proposes joint training of a model on QE (sentence- and word-level) and APE tasks to improve APE performance. Our proposed approach uses a multi-task learning (MTL) methodology that treats the tasks as a 'bargaining game' during training and yields significant improvements. Moreover, we investigate various existing combination strategies and show that our approach achieves state-of-the-art performance for a 'distant' language pair, viz., English-Marathi. We observe an improvement of 1.09 TER and 1.37 BLEU points over a baseline QE-unassisted APE system for English-Marathi, and an improvement of 0.46 TER and 0.62 BLEU points for English-German. Further, we discuss the results qualitatively and show how our approach helps reduce over-correction, thereby improving APE performance. We also observe that the degree of integration between QE and APE directly correlates with the APE performance gain. We release our code publicly.
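To make the joint-training idea concrete, below is a minimal sketch (not the authors' released code) of a model with a shared encoder and three task-specific heads for APE, word-level QE, and sentence-level QE, trained with a weighted sum of the three losses. All module names, dimensions, and the fixed loss weights are illustrative assumptions; in the paper the task trade-off is negotiated dynamically as a 'bargaining game' rather than fixed.

```python
# Minimal sketch (assumed, not the authors' implementation): joint multi-task
# training over APE, word-level QE, and sentence-level QE with a shared encoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointApeQeModel(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, num_qe_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.ape_head = nn.Linear(d_model, vocab_size)       # post-edited token prediction
        self.word_qe_head = nn.Linear(d_model, num_qe_tags)  # OK/BAD tag per MT token
        self.sent_qe_head = nn.Linear(d_model, 1)            # sentence-level quality score

    def forward(self, src_mt_ids):
        h = self.encoder(self.embed(src_mt_ids))              # shared representation
        return (
            self.ape_head(h),                                  # (B, T, vocab)
            self.word_qe_head(h),                              # (B, T, num_qe_tags)
            self.sent_qe_head(h.mean(dim=1)),                  # (B, 1)
        )

def joint_loss(outputs, targets, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three task losses. Here the weights are fixed
    constants for illustration; a bargaining-game MTL scheme would adjust
    them per step based on inter-task gradient negotiation."""
    ape_logits, word_qe_logits, sent_qe_pred = outputs
    ape_tgt, word_qe_tgt, sent_qe_tgt = targets
    l_ape = F.cross_entropy(ape_logits.transpose(1, 2), ape_tgt)
    l_word = F.cross_entropy(word_qe_logits.transpose(1, 2), word_qe_tgt)
    l_sent = F.mse_loss(sent_qe_pred.squeeze(-1), sent_qe_tgt)
    w_ape, w_word, w_sent = weights
    return w_ape * l_ape + w_word * l_word + w_sent * l_sent
```

In this sketch the word-level QE head can also gate the APE output at inference time (e.g., keeping MT tokens tagged OK), which is one simple way a QE signal curtails over-correction.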
External IDs: doi:10.18653/v1/2023.findings-emnlp.115