Aligning Translation-Specific Understanding to General Understanding in Large Language Models

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Although large language models (LLMs) have shown surprising language understanding and generation capabilities, they have yet to achieve a revolutionary advance in the field of machine translation. One potential cause of the limited performance is the misalignment between translation-specific understanding and general understanding inside LLMs. To align translation-specific understanding with general understanding, we propose a novel translation process, \textsc{xIoD} (\textbf{Cross}-Lingual \textbf{I}nterpretation \textbf{o}f \textbf{D}ifficult words), which explicitly incorporates the general understanding of the content that incurs inconsistent understanding to guide the translation. Specifically, \textsc{xIoD} performs cross-lingual interpretation of the difficult-to-translate words and enhances the translation with the generated interpretations. Furthermore, we reframe external quality estimation (QE) tools to tackle the challenges of \textsc{xIoD} in detecting difficult words and generating helpful interpretations. We conduct experiments on our self-constructed benchmark Challenge-MT, which includes cases on which multiple SOTA translation systems consistently underperform. Experimental results show the effectiveness of \textsc{xIoD}, which yields improvements of up to +3.85 COMET. Human evaluation reveals that translations generated by \textsc{xIoD} accord better with sense-for-sense translation.
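Based only on the abstract, the pipeline can be pictured as three stages: detect difficult words, interpret them cross-lingually, and translate conditioned on the interpretations. The sketch below is a minimal illustration of that flow, not the paper's actual implementation; the function names (`xiod_translate`, `detect`-style QE scoring), the `llm` and `qe_score` callables, and the difficulty threshold are all hypothetical assumptions.

```python
from typing import Callable, Dict, List


def xiod_translate(
    source: str,
    src_lang: str,
    tgt_lang: str,
    llm: Callable[[str], str],               # assumed: prompt -> completion function
    qe_score: Callable[[str, str], float],   # assumed: word-level QE scorer reused as a difficulty detector
    threshold: float = 0.5,                  # hypothetical difficulty cutoff
) -> str:
    """Hedged sketch of the xIoD idea from the abstract:
    1) detect difficult-to-translate words, 2) generate cross-lingual
    interpretations for them, 3) translate guided by those interpretations."""
    # 1. Produce a draft translation and flag source words the QE scorer rates poorly.
    draft = llm(f"Translate from {src_lang} to {tgt_lang}: {source}")
    difficult: List[str] = [
        w for w in source.split() if qe_score(w, draft) < threshold
    ]

    # 2. Cross-lingual interpretation of each difficult word, grounded in the sentence.
    interpretations: Dict[str, str] = {
        w: llm(
            f"In the sentence '{source}', explain the meaning of '{w}' in {tgt_lang}."
        )
        for w in difficult
    }

    # 3. Re-translate, explicitly conditioning on the generated interpretations.
    hints = "\n".join(f"- {w}: {gloss}" for w, gloss in interpretations.items())
    return llm(
        f"Translate from {src_lang} to {tgt_lang}: {source}\n"
        f"Use these interpretations of difficult words:\n{hints}"
    )
```

How difficult words are actually detected and how the QE tools are reframed is specified in the paper itself; the stub above only conveys the overall structure implied by the abstract.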
Paper Type: long
Research Area: Machine Translation
Contribution Types: NLP engineering experiment
Languages Studied: English, Chinese, Estonian, Icelandic