Keywords: multilingual, consistency, cross-lingual transfer, knowledge sharing
TL;DR: We show that larger LLMs process multilingual queries differently from smaller LLMs, and that a shared latent semantic space facilitates cross-lingual transfer and consistency.
Abstract: Large language models (LLMs) are demonstrably capable of cross-lingual transfer, but can produce inconsistent output when prompted with the same queries written in different languages. To understand how language models generalize knowledge from one language to others, we measure representation similarity across languages using centered kernel alignment (CKA) and cosine similarity. We also apply the logit lens to interpret the implicit steps taken by LLMs to solve multilingual multiple-choice reasoning questions. We find that LLMs predict inconsistently and are less accurate because they rely on representations of individual languages rather than working in a shared semantic space. While larger models are more multilingual, we show that their hidden states are more likely than those of smaller models to dissociate from the shared representation, yet they are more capable of retrieving knowledge embedded across different languages. Finally, we demonstrate that knowledge sharing in small models can be facilitated by steering their latent processing towards the shared semantic space. This improves the models’ multilingual reasoning performance as a result of greater knowledge transfer from, and better output consistency with, English.
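To make the cross-lingual similarity measurement concrete, below is a minimal sketch of linear CKA computed between hidden states of parallel prompts in two languages. The `linear_cka` helper, the random placeholder arrays, and their shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two representation matrices.

    X: (n_samples, d1) hidden states for queries in language A
    Y: (n_samples, d2) hidden states for the same queries in language B
    Returns a similarity score in [0, 1].
    """
    # Center each representation along the sample axis.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # HSIC-based formulation for linear kernels:
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Hypothetical usage: hidden states from one layer of an LLM for parallel
# English and German versions of the same prompts (shapes are placeholders).
rng = np.random.default_rng(0)
h_en = rng.normal(size=(128, 4096))
h_de = rng.normal(size=(128, 4096))
print(f"linear CKA: {linear_cka(h_en, h_de):.3f}")
```

In practice, such a score would be computed per layer to trace where representations of different languages converge toward, or diverge from, a shared semantic space.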
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17455