Abstract: Large Language Models (LLMs) have demonstrated remarkable performance on Machine Translation (MT) across various natural languages. However, many LLMs are English-dominant and support only a handful of high-resource languages, so they fail on non-English-centric translation tasks. In this work, we propose a Multilingual Instruction Tuning (MIT) method to improve LLMs on non-English-centric translation. We design a multilingual instruction format that leverages an English sentence as a reference to help the LLM understand the source sentence. Because multilingual parallel corpora for low-resource languages are difficult to obtain, we train a to-English LLM to generate the English reference, so that our MIT method requires only bilingual data. We conduct extensive experiments on BLOOM and LLaMA2 foundation models and show that MIT outperforms the baselines as well as large-scale systems such as ChatGPT and Google Translate. We further demonstrate the importance of the English reference in both training and inference.
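The abstract describes pairing the source sentence with an English reference inside the instruction given to the LLM. As a rough illustration only, a prompt of this kind might be assembled as in the sketch below; the template wording and the function name are assumptions, not the paper's actual format.

```python
# Minimal sketch (not the authors' exact template) of assembling a
# multilingual instruction that includes an English reference sentence,
# e.g. one produced by a to-English LLM when no multilingual parallel
# corpus is available. All wording here is an assumption.

def build_mit_prompt(src_lang: str, tgt_lang: str, src_sent: str, en_ref: str) -> str:
    """Compose an instruction pairing the source sentence with an English reference."""
    return (
        f"Translate the following {src_lang} sentence into {tgt_lang}.\n"
        f"{src_lang}: {src_sent}\n"
        f"English reference: {en_ref}\n"
        f"{tgt_lang}:"
    )

# Example: French -> Chinese, with an English reference for the source sentence.
print(build_mit_prompt(
    "French", "Chinese",
    "Le chat dort sur le canapé.",
    "The cat is sleeping on the sofa.",
))
```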
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: Machine Translation, Large Language Model, Instruction Tuning
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English, Chinese, French, German, Spanish, Indonesian, Romanian, Russian, Japanese, Thai
Submission Number: 636