Abstract: In this study, we reveal an in-context learning (ICL) capability of multilingual large language models (LLMs): by translating the input into several languages, we provide $\textbf{P}$arallel $\textbf{I}$nput in $\textbf{M}$ultiple Languages ($\textbf{PIM}$) to LLMs, which significantly enhances their comprehension abilities. To test this capability, we design extensive experiments covering 8 typical datasets, 7 languages, and 8 state-of-the-art multilingual LLMs. Experimental results show that (1) incorporating more languages helps $\textbf{PIM}$ surpass conventional ICL further; (2) even combining with translations whose individual performance is inferior to the baseline can still help. Moreover, by examining the activated neurons in LLMs, we discover a counterintuitive but interesting phenomenon. Contrary to the common expectation that $\textbf{PIM}$ would activate more neurons than monolingual input in order to leverage knowledge learned from diverse languages, $\textbf{PIM}$ actually inhibits neurons and promotes more precise neuron activation, especially as more languages are added. This phenomenon aligns with the neuroscience insight of synaptic pruning, which removes less-used neural connections, strengthens the remaining ones, and thereby enhances brain intelligence.
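To make the prompting idea concrete, below is a minimal sketch of how a PIM-style input could be assembled: the original question is concatenated with its translations into several languages and the combined text is sent to the model. The `translate` callable and the prompt template are illustrative assumptions, not the paper's exact implementation.

```python
from typing import Callable, List


def build_pim_prompt(
    question: str,
    languages: List[str],
    translate: Callable[[str, str], str],
) -> str:
    """Concatenate the original input with its translations as parallel context (PIM).

    `translate(text, lang)` is any machine-translation helper supplied by the caller;
    the numbering template here is a placeholder, not the paper's prompt format.
    """
    parallel = [question] + [translate(question, lang) for lang in languages]
    body = "\n".join(f"Input {i + 1}: {text}" for i, text in enumerate(parallel))
    return body + "\nAnswer:"


if __name__ == "__main__":
    # Placeholder translator for illustration only; a real MT system or LLM
    # translation step would be used in practice.
    fake_translate = lambda text, lang: f"[{lang}] {text}"
    prompt = build_pim_prompt(
        "What is the capital of Iceland?", ["de", "zh", "fr"], fake_translate
    )
    print(prompt)  # The resulting prompt would then be passed to a multilingual LLM.
```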
Paper Type: long
Research Area: Multilinguality and Language Diversity
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English, German, Chinese, French, Icelandic, Spanish, Russian