Reducing Translationese via Iterative Translation Refinement with Large Language Models

Anonymous

16 Oct 2023 · ACL ARR 2023 October Blind Submission · Readers: Everyone
Abstract: Translations produced by machines or humans can suffer from translationese: awkward or unnatural output that arises from the translation process. We argue that the advent of large language models offers a means to mitigate translationese via iterative refinement, which is infeasible for conventional encoder-decoder models. Our experiments show that refinement lowers string-based metric scores, while neural metrics indicate comparable or improved quality. Human evaluations demonstrate that refinement reduces translationese compared to initial translations and even human references, while preserving quality. Ablation studies underscore the importance of anchoring the refinement to the source text and a reasonable seed translation. We also discuss current challenges in measuring translationese.
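The iterative refinement the abstract describes can be pictured as a simple loop: given a source sentence and a seed translation, an LLM is repeatedly prompted to rewrite the translation more naturally, with the source kept in the prompt as an anchor. The following is a minimal Python sketch under stated assumptions; the `llm` callable, the prompt wording, and the fixed round count are illustrative placeholders, not the paper's exact protocol.

```python
# Minimal sketch of LLM-based iterative translation refinement.
# Assumption: `llm` is a hypothetical callable wrapping any
# instruction-following LLM (prompt in, completion out). The prompt
# wording and the fixed number of rounds are illustrative only.
from typing import Callable


def iterative_refine(
    source: str,
    seed_translation: str,
    llm: Callable[[str], str],
    target_lang: str = "German",
    rounds: int = 3,
) -> str:
    """Repeatedly ask the LLM to rewrite the translation more naturally.

    The source sentence stays in every prompt so the refinement remains
    anchored to the original meaning rather than drifting.
    """
    translation = seed_translation
    for _ in range(rounds):
        prompt = (
            f"Source (English): {source}\n"
            f"Current {target_lang} translation: {translation}\n"
            f"Rewrite the {target_lang} translation so it reads like text "
            f"originally written in {target_lang}, without changing its "
            f"meaning. Output only the rewritten translation."
        )
        translation = llm(prompt).strip()
    return translation
```

Anchoring on both the source and a reasonable seed translation, which the ablation studies highlight, corresponds here to keeping `source` in every prompt and initializing the loop from `seed_translation` rather than translating from scratch each round.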
Paper Type: short
Research Area: Machine Translation
Contribution Types: NLP engineering experiment, Position papers
Languages Studied: English, German, Chinese, French, Japanese, Ukrainian, Czech
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.