Only for the Unseen Languages, Say the Llamas: On the Efficacy of Language Adapters for Cross-lingual Transfer in English-centric LLMs
Keywords: Language Adapters, Cross-lingual Transfer, PEFT, English-centric LLMs, Language Adaptation
TL;DR: We evaluate cross-lingual transfer in English-centric LLMs using language adapters, finding they help for unseen target languages but offer negligible benefit for seen ones.
Abstract: Most state-of-the-art large language models (LLMs) are trained mainly on English data, limiting their effectiveness on non-English, especially low-resource, languages. This study investigates whether language adapters can facilitate cross-lingual transfer in English-centric LLMs. We train language adapters for 13 languages using Llama 2 (7B) and Llama 3.1 (8B) as base models, and evaluate their effectiveness on two downstream tasks (MLQA and SIB-200) using either task adapters or in-context learning. Our results reveal that language adapters improve performance for languages not seen during pretraining, but provide negligible benefit for seen languages. These findings highlight the limitations of language adapters as a general solution for multilingual adaptation in English-centric LLMs.
Archival Status: Archival
Paper Length: Long Paper (up to 8 pages of content)
Submission Number: 218
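Illustrative sketch (not from the submission): the abstract describes training language adapters on top of English-centric Llama base models via parameter-efficient fine-tuning. As a rough, hedged illustration of that general recipe, the snippet below trains a small adapter on monolingual target-language text with the Hugging Face peft library. The adapter type (LoRA), the checkpoint name, the corpus file, and all hyperparameters are assumptions made for this example and are not the authors' actual configuration.

```python
# Hedged sketch: language adaptation of an English-centric LLM with a PEFT adapter.
# Assumptions (not from the paper): LoRA as the adapter type, meta-llama/Llama-2-7b-hf
# as the base checkpoint, and a plain causal-LM objective on monolingual target-language text.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"                # assumed base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the frozen base model with a small trainable adapter ("language adapter").
adapter_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, adapter_cfg)

# Monolingual corpus for the target language; the file name here is a placeholder.
raw = load_dataset("text", data_files={"train": "target_language_corpus.txt"})["train"]
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lang-adapter", per_device_train_batch_size=1,
                           gradient_accumulation_steps=16, num_train_epochs=1,
                           learning_rate=2e-4, logging_steps=50),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lang-adapter")            # only the adapter weights are stored
```

Downstream evaluation in the paper then combines such a language adapter with either a task adapter or in-context learning; that step is not sketched here.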