Reflective Translation: Enhancing Low-Resource Machine Translation through Self-Reflection

Published: 14 Dec 2025, Last Modified: 11 Jan 2026 | LM4UC@AAAI2026 | CC BY 4.0
Keywords: Machine Translation, Self-Reflection, Low-Resource Languages, Prompt Engineering, Large Language Models
TL;DR: Reflective Translation uses LLMs to self-review and correct translations, improving BLEU and COMET scores for low-resource languages like isiZulu and isiXhosa.
Abstract: Low-resource languages such as isiZulu and isiXhosa face persistent challenges in machine translation (MT) due to limited parallel corpora and scarce linguistic resources. Recent work on large language models (LLMs) has shown that self-reflection, the ability of a model to critique and revise its own outputs, can enhance reasoning and factual consistency. Building on this idea, we present a framework for Reflective Translation, in which an LLM uses multi-round prompting to internally evaluate and correct its own translations, improving semantic fidelity. We apply our method with GPT-3.5 and Claude Haiku 3.5 on English–isiZulu and English–isiXhosa pairs from the OPUS-100 and NTREX-African datasets, and assess translation quality with BLEU and COMET. Reflective Translation yields consistent improvements from the first to the second pass for both isiZulu (+0.08 BLEU, +0.13 COMET) and isiXhosa (+0.07 BLEU, +0.09 COMET). We further introduce a first-of-its-kind reflection-augmented dataset built from model-generated self-critiques and corrected translations. Overall, this paper demonstrates that reflection-based prompting is a promising approach for enhancing data quality and improving MT in under-resourced languages, bridging the gap between LLM reasoning research and practical translation for global linguistic inclusion.
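
The sketch below illustrates the two-pass reflective loop the abstract describes: a draft translation, a model-generated self-critique, and a corrected second pass whose intermediate outputs can be kept to build a reflection-augmented dataset. It is a minimal illustration, not the authors' implementation: `call_llm` is a hypothetical stand-in for any chat-completion API (e.g. GPT-3.5 or Claude Haiku 3.5), and the prompt wording is assumed rather than taken from the paper.

```python
from typing import Callable

def reflective_translate(source: str,
                         call_llm: Callable[[str], str],
                         src_lang: str = "English",
                         tgt_lang: str = "isiZulu") -> dict:
    """Two-pass reflective translation sketch.

    `call_llm` is any function mapping a prompt string to a model response,
    e.g. a thin wrapper around a chat-completion API (assumption, not the
    paper's code). Prompts here are illustrative placeholders.
    """
    # Pass 1: draft translation.
    draft = call_llm(
        f"Translate the following {src_lang} sentence into {tgt_lang}.\n\n"
        f"Sentence: {source}\nTranslation:"
    )

    # Pass 2a: the model critiques its own draft.
    critique = call_llm(
        f"You translated the {src_lang} sentence \"{source}\" into {tgt_lang} as:\n"
        f"\"{draft}\"\n"
        "List any errors in meaning, grammar, or fluency."
    )

    # Pass 2b: the model revises the draft using its own critique.
    revised = call_llm(
        f"Original {src_lang} sentence: {source}\n"
        f"Draft {tgt_lang} translation: {draft}\n"
        f"Critique: {critique}\n"
        f"Produce only the corrected {tgt_lang} translation."
    )

    # Keeping all three fields allows building a reflection-augmented dataset
    # of drafts, self-critiques, and corrected translations.
    return {"draft": draft, "critique": critique, "revised": revised}
```

Both the first-pass draft and the second-pass revision can then be scored with BLEU and COMET against the reference translations to measure the per-pass gain reported in the abstract.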
Submission Number: 29