Abstract:
Large language models (LLMs) have demonstrated impressive multilingual understanding and reasoning capabilities, driven by extensive pre-training on multilingual corpora and fine-tuning on instruction data. However, a performance gap persists between high-resource and low-resource language tasks due to language imbalance in the pre-training corpus, even when more low-resource data is used during fine-tuning. To alleviate this issue, we propose LinguaLIFT, a two-stage instruction tuning framework for advancing low-resource language tasks. First, an additional language alignment layer is integrated into the LLM to adapt a pre-trained multilingual encoder, enhancing multilingual alignment through code-switched fine-tuning. Second, the LLM is fine-tuned on English-only instruction data while the language alignment layer is kept frozen, allowing the LLM to transfer task-specific capabilities from English to low-resource language tasks. Additionally, we introduce the Multilingual Math World Problem (MMWP) benchmark, which spans 21 low-resource, 17 medium-resource, and 10 high-resource languages, enabling comprehensive evaluation of multilingual reasoning. Experimental results show that LinguaLIFT outperforms several competitive baselines on MMWP and other widely used benchmarks.
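The two-stage schedule described in the abstract can be sketched as follows. This is a minimal illustrative sketch of which components are trainable in each stage; all class and method names are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of LinguaLIFT's two-stage tuning schedule.
# Component/stage names are illustrative, not from the paper's code.

class Component:
    def __init__(self, name):
        self.name = name
        self.trainable = True

class LinguaLIFTModel:
    """Toy container: a pre-trained multilingual encoder is adapted to
    the LLM through an added language alignment layer."""
    def __init__(self):
        self.multilingual_encoder = Component("encoder")  # pre-trained
        self.alignment_layer = Component("alignment")     # newly added layer
        self.llm = Component("llm")                       # backbone LLM

    def stage1_code_switched_tuning(self):
        # Stage 1: train only the language alignment layer on
        # code-switched data to strengthen multilingual alignment.
        self.multilingual_encoder.trainable = False
        self.alignment_layer.trainable = True
        self.llm.trainable = False

    def stage2_english_instruction_tuning(self):
        # Stage 2: freeze the alignment layer and fine-tune the LLM on
        # English-only instruction data for cross-lingual transfer.
        self.alignment_layer.trainable = False
        self.llm.trainable = True

def trainable_parts(model):
    """List the names of components that would receive gradient updates."""
    parts = (model.multilingual_encoder, model.alignment_layer, model.llm)
    return [c.name for c in parts if c.trainable]

model = LinguaLIFTModel()
model.stage1_code_switched_tuning()
print(trainable_parts(model))  # ['alignment']
model.stage2_english_instruction_tuning()
print(trainable_parts(model))  # ['llm']
```

In an actual implementation the `trainable` flags would correspond to setting `requires_grad` on parameter groups; the point of the sketch is only the freezing schedule, which lets stage 2's English-only instruction tuning reuse the multilingual alignment learned in stage 1.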
Paper Type: Long
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: code-switching, mixed language, multilingualism, cross-lingual transfer, multilingual benchmarks, multilingual evaluation, less-resourced languages
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Data resources
Languages Studied: Afrikaans (af), Arabic (ar), Belarusian (be), Bulgarian (bg), Bengali (bn), Catalan (ca), Czech (cs), Danish (da), German (de), English (en), Spanish (es), Basque (eu), Finnish (fi), French (fr), Gujarati (gu), Hausa (ha), Hindi (hi), Croatian (hr), Hungarian (hu), Armenian (hy), Indonesian (id), Icelandic (is), Italian (it), Japanese (ja), Kannada (kn), Korean (ko), Luxembourgish (lb), Macedonian (mk), Malayalam (ml), Marathi (mr), Norwegian Bokmål (nb), Nepali (ne), Dutch (nl), Polish (pl), Portuguese (pt), Romanian (ro), Russian (ru), Slovak (sk), Slovenian (sl), Serbian (sr), Swedish (sv), Swahili (sw), Tamil (ta), Telugu (te), Thai (th), Ukrainian (uk), Vietnamese (vi), and Chinese (zh).
Submission Number: 1259