DUAL-REFLECT: Enhancing Large Language Models for Reflective Translation through Dual Learning Feedback Mechanisms

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Recently, large language models (LLMs) enhanced by self-reflection have achieved promising performance on machine translation. The key idea is to guide LLMs to generate translations with human-like feedback. However, existing self-reflection methods lack effective feedback information, which limits translation performance. To address this, we introduce the DUAL-REFLECT framework, which leverages the dual learning inherent in translation tasks to provide effective feedback, thereby strengthening the model's self-reflective abilities and improving translation quality. Applying this method across various translation tasks has demonstrated its effectiveness in improving translation accuracy and resolving ambiguities, especially in translation tasks involving low-resource language pairs.
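The feedback loop the abstract alludes to, in which a back-translation is compared against the source and the discrepancy is fed back as a reflection signal, can be sketched roughly as follows. All function names here are hypothetical, and the LLM calls are replaced by toy dictionary stubs so the control flow is runnable; the paper's actual prompting details are not reproduced.

```python
# Hypothetical sketch of a dual-learning reflection loop (not the paper's exact method).
# `translate` and `back_translate` stand in for LLM calls; here they are toy
# lookup-table stubs so the overall control flow can be executed.

def translate(source: str, feedback: str = "") -> str:
    # Stub forward model: a real system would prompt an LLM with the
    # source sentence plus any accumulated reflection feedback.
    table = {"bonjour": "hello", "monde": "world"}
    return " ".join(table.get(w, w) for w in source.split())

def back_translate(translation: str) -> str:
    # Stub dual (reverse) model used to reconstruct the source.
    table = {"hello": "bonjour", "world": "monde"}
    return " ".join(table.get(w, w) for w in translation.split())

def dual_reflect(source: str, max_rounds: int = 3) -> str:
    """Refine a translation using back-translation discrepancies as feedback."""
    feedback = ""
    translation = translate(source)
    for _ in range(max_rounds):
        translation = translate(source, feedback)
        reconstruction = back_translate(translation)
        if reconstruction == source:
            # Dual consistency reached: reconstruction matches the source.
            return translation
        # Otherwise, turn the mismatch into feedback for the next round.
        feedback = f"reconstruction '{reconstruction}' differs from source '{source}'"
    return translation
```

The key design point is that the reverse (target-to-source) direction supplies a feedback signal without requiring human annotation: any divergence between the reconstruction and the original source indicates information lost or distorted in the forward translation.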
Paper Type: short
Research Area: Machine Translation
Contribution Types: NLP engineering experiment
Languages Studied: English
Preprint Status: We are considering releasing a non-anonymous preprint in the next two months (i.e., during the reviewing process).
A1: yes
A1 Elaboration For Yes Or No: 7
A2: no
A2 Elaboration For Yes Or No: This research focuses on issues within the machine translation domain, with experiments revealing no risk outputs. The choice of models is based on those trained via RLHF, aiming for theoretical alignment with human feedback.
A3: yes
A3 Elaboration For Yes Or No: 1
B: no
B1: yes
B1 Elaboration For Yes Or No: 3
B2: yes
B2 Elaboration For Yes Or No: 8
B3: n/a
B4: n/a
B5: n/a
B6: yes
B6 Elaboration For Yes Or No: 3
C: yes
C1: n/a
C2: yes
C2 Elaboration For Yes Or No: 3
C3: yes
C3 Elaboration For Yes Or No: 3
C4: yes
C4 Elaboration For Yes Or No: 3
D: yes
D1: n/a
D2: n/a
D3: n/a
D4: n/a
D5: n/a
E: no
E1: n/a