Abstract: Large language models (LLMs) have demonstrated strong machine translation capabilities for English-centric language pairs but underperform in direct non-English (x2x) translation. This work addresses that limitation through a synthetic data generation framework that leverages models' established English-to-x (en2x) capabilities. By extending English-centric parallel corpora into omnidirectional datasets and developing an English-referenced quality evaluation proxy, we enable the effective collection of high-quality x2x training data. Combined with preference-based optimization, our method achieves significant improvements across 72 x2x directions for widely used LLMs, while also generalizing to enhance en2x performance. These results demonstrate that strategic exploitation of English-centric strengths can bootstrap comprehensive multilingual translation capabilities in LLMs.
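As a rough illustration of the pipeline the abstract outlines (not the paper's actual code), the sketch below shows the two core ideas: extending an English-centric multi-parallel corpus into all non-English (x2x) directions, and ranking candidate outputs with an English-referenced quality proxy to form chosen/rejected pairs for preference-based optimization. The function names, the toy similarity scorer, and the example row are all assumptions for demonstration; a real system would generate candidates with an LLM and score them with a proper MT metric.

```python
# Hedged sketch: building omnidirectional (x2x) preference data from an
# English-centric multi-parallel corpus. Everything here is illustrative,
# including the toy scorer, which stands in for the paper's
# English-referenced quality evaluation proxy.
from itertools import permutations
from difflib import SequenceMatcher

# One multi-parallel row: the same sentence in several languages, keyed by
# language code, with English as the pivot/reference side.
row = {
    "en": "The cat sleeps on the sofa.",
    "de": "Die Katze schläft auf dem Sofa.",
    "fr": "Le chat dort sur le canapé.",
}

def english_referenced_score(candidate_en: str, reference_en: str) -> float:
    """Toy stand-in for an English-referenced proxy: surface similarity
    between an English rendering of a candidate and the English reference.
    A real pipeline would use a learned MT quality metric instead."""
    return SequenceMatcher(None, candidate_en, reference_en).ratio()

def build_x2x_pairs(row: dict) -> list:
    """Extend one English-centric row into every non-English direction."""
    langs = [lang for lang in row if lang != "en"]
    return [
        {"src_lang": s, "tgt_lang": t, "src": row[s], "tgt": row[t], "en_ref": row["en"]}
        for s, t in permutations(langs, 2)
    ]

def to_preference_example(candidates_en: list, reference_en: str) -> dict:
    """Rank candidates (via their English renderings) with the proxy and keep
    the best/worst as a chosen/rejected pair for preference tuning."""
    ranked = sorted(candidates_en, key=lambda c: english_referenced_score(c, reference_en))
    return {"rejected": ranked[0], "chosen": ranked[-1]}

for p in build_x2x_pairs(row):
    print(f'{p["src_lang"]}->{p["tgt_lang"]}: {p["src"]!r} => {p["tgt"]!r}')

# With hypothetical English back-renderings of two model candidates:
print(to_preference_example(
    ["The cat sleeps on the sofa.", "A cat rests near a chair."],
    row["en"],
))
```

With 9 non-English languages, as studied here, this extension yields the 9 x 8 = 72 x2x directions the abstract reports.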
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: multilingual MT, scaling, cross-lingual transfer
Contribution Types: Approaches to low-resource settings
Languages Studied: English, German, French, Dutch, Italian, Spanish, Portuguese, Korean, Russian, Chinese
Submission Number: 7368