Revealing the Parallel Multilingual Learning within Large Language Models

ACL ARR 2024 June Submission 1966 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large language models (LLMs) can handle multilingual and cross-lingual text within a single input; however, previous work leveraging multilingualism in LLMs has primarily focused on using English as the pivot language to enhance language understanding and reasoning. Given that multiple languages can compensate for the limitations of any single language, a natural next step is to enrich the model’s learning context by integrating the original input with its translations into multiple languages. In this paper, we start by revealing that LLMs learn from $\textbf{P}$arallel $\textbf{M}$ultilingual $\textbf{I}$nput ($\textbf{PMI}$). Our comprehensive evaluation shows that PMI enhances the model's comprehension of the input, achieving performance superior to conventional in-context learning (ICL). Furthermore, to explore how multilingual processing affects prediction, we examine the activated neurons in LLMs. Surprisingly, involving more languages in the input activates fewer neurons, leading to more focused and effective neural activation patterns. Moreover, this neural reaction coincidentally mirrors insights from neuroscience about synaptic pruning, highlighting a similarity between artificial and biological `brains'.
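
The following is a minimal sketch of the two ideas summarized in the abstract, not the authors' actual implementation: assembling a PMI-style prompt by concatenating an input with its parallel translations, and quantifying "activated neurons" as units whose post-nonlinearity output is positive in a feed-forward layer. The prompt template, the example translations, and the positive-activation criterion are assumptions for illustration; the paper's exact setup may differ.

```python
import torch
import torch.nn as nn


def build_pmi_prompt(question: str, translations: dict[str, str]) -> str:
    """Concatenate the original input with its parallel translations (hypothetical template)."""
    lines = [f"{lang}: {text}" for lang, text in translations.items()]
    lines.append(f"English: {question}")
    return "\n".join(lines) + "\nAnswer:"


# Hypothetical parallel translations of one question.
translations = {
    "German": "Wie viele Kontinente gibt es?",
    "French": "Combien y a-t-il de continents ?",
}
print(build_pmi_prompt("How many continents are there?", translations))

# Toy stand-in for one FFN block of an LLM; a forward hook counts the fraction
# of hidden units with positive activation after the nonlinearity.
torch.manual_seed(0)
ffn = nn.Sequential(nn.Linear(16, 64), nn.GELU(), nn.Linear(64, 16))
stats = {}


def count_activated(module, inputs, output):
    # Fraction of neurons whose post-GELU activation is above zero for this input.
    stats["activated_ratio"] = (output > 0).float().mean().item()


ffn[1].register_forward_hook(count_activated)
_ = ffn(torch.randn(1, 16))
print(f"activated neuron ratio: {stats['activated_ratio']:.2f}")
```

In this reading of the abstract, one would compare the activated-neuron ratio between a monolingual prompt and its PMI counterpart; the reported finding is that adding languages yields fewer, more focused activations.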
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: counterfactual/contrastive explanations, knowledge tracing/discovering/inducing, multilingualism
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English, German, Chinese, French, Icelandic, Spanish, Russian
Submission Number: 1966