Simple and Effective Input Reformulations for Translation

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Efficient Methods for NLP
Submission Track 2: Machine Translation
Keywords: natural language processing, data efficiency, input reformulations, multilingual, translation, foundation language models, finetuning, machine learning
Abstract: Foundation language models learn from their finetuning input context in different ways. In this paper, we reformulate inputs during finetuning for challenging translation tasks, leveraging model strengths from pretraining in novel ways to improve downstream performance. These reformulations are simple, data-level modifications that require no additional collection of training data and no modification of data at inference time. They can be applied either to single language pair translation tasks or to massively multilingual translation tasks. Experiments with these techniques demonstrate significant performance improvements of up to 3.5 chrF++ on the Flores200 translation benchmark. We hope our research accessibly improves finetuning data efficiency, enabling more effective training to scalably improve state-of-the-art performance. Our code is released at https://github.com/bri25yu/LanguageModelExperimentation.
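
To make the idea of a data-level input reformulation concrete, below is a minimal sketch of what such a modification could look like for a translation finetuning pipeline. The abstract does not specify the exact reformulations used in the paper, so the prompt template, function name, and example data here are illustrative assumptions only; they show that the change touches only the training examples, not the model or the inference-time data.

```python
# Hypothetical sketch of a data-level input reformulation for translation
# finetuning. The tag-prefixing template below is an assumption for
# illustration; the paper's actual reformulations may differ.

def reformulate_example(source_text: str, target_text: str,
                        source_lang: str, target_lang: str) -> dict:
    """Rewrite a raw (source, target) pair into a finetuning example.

    This is purely a data-level change applied when building the finetuning
    set: it rewrites the input string handed to the model and requires no
    extra training data and no modification of data at inference time.
    """
    prompt = f"Translate {source_lang} to {target_lang}: {source_text}"
    return {"input": prompt, "target": target_text}


# Usage on a single (made-up) training pair.
example = reformulate_example(
    source_text="Guten Morgen!",
    target_text="Good morning!",
    source_lang="German",
    target_lang="English",
)
print(example["input"])   # Translate German to English: Guten Morgen!
print(example["target"])  # Good morning!
```

Because the reformulation is applied per example when constructing the finetuning set, the same mapping can be used unchanged for a single language pair or swept over all pairs in a massively multilingual corpus.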
Submission Number: 2114