TasTe: Teaching Large Language Models to Translate through Self-Reflection

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: Teaching large language models to translate through a self-reflection process to enhance their translation performance.
Abstract: Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks. Techniques like instruction tuning have effectively enhanced the proficiency of LLMs in the downstream task of machine translation. However, existing approaches fail to yield translation outputs that match the quality of supervised neural machine translation (NMT) systems. One plausible explanation for this discrepancy is that the straightforward prompts employed in these methodologies cannot fully leverage the acquired instruction-following capabilities. To this end, we propose the $\textbf{TasTe}$ framework, which stands for Translating through Self-Reflection. The self-reflection process comprises two stages of inference. In the first stage, LLMs are instructed to generate preliminary translations and simultaneously conduct self-assessments of these translations. In the second stage, LLMs are tasked with refining these preliminary translations according to the assessment results. Evaluation results across four language directions on the WMT22 and FLORES-200 benchmarks demonstrate the effectiveness of our approach compared to existing methods. Our work presents a promising way to unleash the potential of LLMs and enhance their capabilities in machine translation.
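The abstract describes a two-stage inference procedure: draft-plus-self-assessment, then assessment-conditioned refinement. Below is a minimal sketch of that loop, assuming a hypothetical chat-completion function `llm_generate` as a stand-in for any instruction-tuned LLM backend; the concrete prompt wording, quality-label format, and language pairs used in the paper are not specified here and may differ.

```python
# Minimal sketch of the two-stage self-reflection inference described above.
# `llm_generate` is a hypothetical placeholder for a single LLM call.

def llm_generate(prompt: str) -> str:
    """Placeholder for one inference call to an instruction-tuned LLM."""
    raise NotImplementedError("Plug in your own LLM backend here.")

def taste_translate(source: str, src_lang: str = "German", tgt_lang: str = "English") -> str:
    # Stage 1: produce a preliminary translation together with a self-assessment.
    stage1_prompt = (
        f"Translate the following {src_lang} sentence into {tgt_lang}, "
        f"then rate your own translation quality (good / medium / bad).\n"
        f"Source: {source}\n"
        f"Answer with:\nTranslation: <draft>\nQuality: <label>"
    )
    stage1_output = llm_generate(stage1_prompt)

    # Stage 2: refine the preliminary translation according to the assessment.
    stage2_prompt = (
        f"Here is a draft {tgt_lang} translation of a {src_lang} sentence "
        f"together with a quality assessment:\n{stage1_output}\n"
        f"Source: {source}\n"
        f"Refine the draft according to the assessment and output only the "
        f"final translation."
    )
    return llm_generate(stage2_prompt)
```

In this sketch, the refinement step is applied unconditionally; whether the paper skips refinement for drafts already judged good is a detail not stated in the abstract.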
Paper Type: long
Research Area: Machine Translation
Contribution Types: NLP engineering experiment
Languages Studied: Chinese, English, German