Beyond English-Centric Machine Translation by Multilingual Instruction Tuning Large Language Models

ACL ARR 2024 April Submission636 Authors

16 Apr 2024 (modified: 19 May 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) have demonstrated remarkable performance on Machine Translation (MT) across various natural languages. However, many LLMs are English-dominant and support only a few high-resource languages, so they fail on non-English-centric translation tasks. In this work, we propose a Multilingual Instruction Tuning (MIT) method to improve LLMs on non-English-centric translation. We design a multilingual instruction format that leverages an English sentence as a reference to help the LLM understand the source sentence. To address the difficulty of obtaining multilingual parallel corpora for low-resource languages, we train a to-English LLM to generate the English reference, so our MIT method requires only bilingual data. We conduct experiments on BLOOM and LLaMA2 foundation models, and extensive results show that MIT outperforms the baselines as well as large-scale systems such as ChatGPT and Google Translate. We further demonstrate the importance of the English reference in both the training and inference processes.
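The abstract describes a two-stage pipeline: a to-English LLM first produces an English reference, which is then inserted into the translation instruction. The sketch below illustrates that data flow only; the prompt wording, wrapper callables, and example strings are illustrative assumptions, not the authors' actual templates or implementation.

```python
from typing import Callable

# Minimal sketch, assuming each model is wrapped as a text-in/text-out callable
# (e.g., a thin wrapper around an LLM generation API). Prompt phrasing here is
# a hypothetical placeholder, not the paper's exact instruction format.

def translate_with_english_reference(
    src_sentence: str,
    src_lang: str,
    tgt_lang: str,
    to_en_model: Callable[[str], str],
    mit_model: Callable[[str], str],
) -> str:
    # Stage 1: a to-English LLM generates an English reference, so the
    # method needs only bilingual (X-to-English) data rather than
    # multilingual parallel corpora.
    en_reference = to_en_model(
        f"Translate the following {src_lang} sentence into English:\n"
        f"{src_lang}: {src_sentence}\nEnglish:"
    )

    # Stage 2: the English reference is included in the multilingual
    # instruction to help the model understand the source sentence.
    instruction = (
        f"Translate the following {src_lang} sentence into {tgt_lang}.\n"
        f"{src_lang}: {src_sentence}\n"
        f"English reference: {en_reference}\n"
        f"{tgt_lang}:"
    )
    return mit_model(instruction)


# Usage with dummy callables, only to show the data flow:
if __name__ == "__main__":
    dummy_to_en = lambda prompt: "The weather is nice today."
    dummy_mit = lambda prompt: "Das Wetter ist heute schön."
    print(translate_with_english_reference(
        "Il fait beau aujourd'hui.", "French", "German",
        dummy_to_en, dummy_mit))
```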
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: Machine Translation, Large Language Model, Instruction Tuning
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English, Chinese, French, German, Spanish, Indonesian, Romanian, Russian, Japanese, Thai
Submission Number: 636
