LLM-based Translation Inference with Iterative Bilingual Understanding

ACL ARR 2024 December Submission1043 Authors

15 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: The remarkable understanding and generation capabilities of large language models (LLMs) have greatly improved the performance of machine translation. However, poor understanding often leads to the misinterpretation of key information in an input sentence (e.g., concepts and terms), called understanding distortion, thereby degrading the quality of target-language translations generated by LLMs. To alleviate this issue, we propose a novel Iterative Bilingual Understanding Translation (IBUT) method to enhance the understanding of sentences. Specifically, IBUT explicitly generates contextual understandings of the source and target sentences, explaining key concepts, terms, examples, and so on. IBUT then exploits the dual nature of machine translation to generate effective cross-lingual feedback, iteratively refining the contextual understanding to improve the translation quality of LLMs. Experimental results show that the proposed IBUT significantly outperforms several strong comparison methods on benchmarks from multiple domains (e.g., news, commonsense, and cultural translation). Source code will be released.
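The iterative loop described in the abstract might be sketched as follows. This is only an illustrative reading of the method, not the authors' released code: the `llm()` stub stands in for a real LLM API call, and all prompt wording, function names, and the round count are assumptions for the sake of the example.

```python
# Hypothetical sketch of the IBUT loop: understand the source, translate,
# understand the draft translation, derive cross-lingual feedback from the
# two views, refine, and re-translate. llm() is a placeholder stub.

def llm(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real client (assumption)."""
    return f"[LLM response to: {prompt[:40]}...]"

def ibut_translate(source: str, rounds: int = 2) -> str:
    # Step 1: explain key concepts and terms in the source sentence.
    src_view = llm(f"Explain the key concepts and terms in: {source}")
    translation = llm(
        f"Translate the sentence using this understanding:\n{src_view}\n"
        f"Sentence: {source}"
    )
    for _ in range(rounds):
        # Step 2: generate a contextual understanding of the draft translation.
        tgt_view = llm(f"Explain the key concepts in the translation: {translation}")
        # Step 3: cross-lingual feedback from the dual source/target views.
        feedback = llm(
            "Compare the two understandings and point out distortions:\n"
            f"Source view: {src_view}\nTarget view: {tgt_view}"
        )
        # Step 4: refine the source understanding and re-translate.
        src_view = llm(f"Revise the understanding given this feedback: {feedback}")
        translation = llm(
            f"Re-translate using the revised understanding:\n{src_view}\n"
            f"Sentence: {source}"
        )
    return translation
```

With a real model behind `llm()`, each round would replace the draft translation with one informed by the cross-lingual feedback; with the stub, the function simply chains placeholder responses.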
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: Machine Translation
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 1043