Reasoning for Translation: Comparative Analysis of Chain-of-Thought and Tree-of-Thought Prompting for LLM Translation

Published: 22 Jun 2025, Last Modified: 22 Jun 2025
Venue: ACL-SRW 2025 Oral
License: CC BY 4.0
Keywords: machine translation, large language models, in-context learning, chain-of-thought, tree-of-thought, prompting
Abstract: As Large Language Models (LLMs) continue to advance in capability, prompt engineering has emerged as a crucial method for optimizing their performance on specialized tasks. While prompting strategies such as zero-shot, few-shot, Chain-of-Thought, and Tree-of-Thought have demonstrated significant improvements on reasoning tasks, their application to machine translation has received comparatively little attention. This paper systematically evaluates these prompting techniques across diverse language pairs and domains, measuring their effect on translation quality. Our findings reveal substantial performance variation among prompting methods, with certain strategies offering consistent improvements for specific language directions and complexity levels. These results provide valuable insights for developing more effective LLM-based translation systems without model fine-tuning, and complement existing work in the field.
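
For orientation, the sketch below illustrates how the compared strategies differ in practice: a zero-shot prompt, a Chain-of-Thought prompt that elicits reasoning about translation pitfalls before answering, and a minimal Tree-of-Thought loop that branches into several candidate translations, scores them, and keeps the best. The prompt wording, the `call_llm` helper, and the scoring scheme are illustrative assumptions for exposition, not the paper's actual templates or evaluation setup.

```python
# Minimal sketch of the prompting strategies the paper compares, applied to
# translation. The prompt wording and the call_llm() helper are hypothetical
# stand-ins, NOT the paper's actual templates or evaluation protocol.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat/completion API; plug in a real client here."""
    raise NotImplementedError

def zero_shot(src: str, src_lang: str, tgt_lang: str) -> str:
    # Direct instruction: no examples, no explicit reasoning.
    return call_llm(f"Translate this {src_lang} sentence into {tgt_lang}:\n{src}")

def chain_of_thought(src: str, src_lang: str, tgt_lang: str) -> str:
    # Ask the model to reason about translation difficulties before answering.
    return call_llm(
        f"Translate this {src_lang} sentence into {tgt_lang}.\n"
        "First list any idioms, ambiguous terms, or grammatical pitfalls, "
        "reason step by step about how to render each, then output the final "
        "translation on its own line prefixed with 'Translation:'.\n"
        f"Sentence: {src}"
    )

def tree_of_thought(src: str, src_lang: str, tgt_lang: str, n: int = 3) -> str:
    # Branch: sample several candidate translations (assumes a sampling
    # temperature above zero so the branches actually differ).
    candidates = [chain_of_thought(src, src_lang, tgt_lang) for _ in range(n)]

    # Evaluate: ask the model to rate each candidate; keep the best branch.
    def score(cand: str) -> float:
        reply = call_llm(
            f"Rate this {tgt_lang} translation of the {src_lang} sentence "
            f"{src!r} from 1 (poor) to 10 (excellent). Reply with only the number.\n"
            f"Translation: {cand}"
        )
        try:
            return float(reply.strip())
        except ValueError:
            return 0.0  # unparseable rating counts as worst

    return max(candidates, key=score)
```

The essential distinction the sketch captures is search width: zero-shot and Chain-of-Thought each commit to a single generation path, while Tree-of-Thought trades extra LLM calls for the ability to explore and rank multiple candidate renderings.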
Archival Status: Archival
Paper Length: Long Paper (up to 8 pages of content)
Submission Number: 63