Track: Long Paper Track (up to 9 pages)
Title: Do Multilingual LLMs Think In English?
TL;DR: We show that LLMs may route through English for semantically loaded words.
Abstract: Large language models (LLMs) have multilingual capabilities and can solve tasks across various languages. However, we show that current LLMs make key decisions in a representation space closest to English, regardless of their input and output languages. Exploring internal representations with a logit lens for sentences in French, German, Dutch, and Mandarin, we show that the LLM first emits representations close to English for semantically loaded words before translating them into the target language. We further show that activation steering works better for these LLMs when the steering vectors are computed in English than when they are computed in the language of the inputs and outputs. This suggests that multilingual LLMs perform key reasoning steps in a representation that is heavily shaped by English, in a way that is not transparent to system users.
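The abstract relies on two standard interpretability probes: the logit lens and activation steering. Below is a minimal sketch of both, assuming a Hugging Face causal LM; the model name, prompts, layer index, and steering strength are illustrative placeholders, not the paper's actual setup.

```python
# Minimal logit-lens sketch (illustrative; model and prompt are assumptions,
# not the paper's experimental setup). Each intermediate layer's hidden state
# is projected through the model's own unembedding to see which token it is
# closest to -- the probe used to detect English-like latent representations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper studies multilingual LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

prompt = "Le chat est assis sur le"  # French input; output expected in French
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (num_layers + 1) tensors, each [batch, seq, hidden]
for layer_idx, h in enumerate(outputs.hidden_states):
    # Apply the final layer norm, then the unembedding matrix (logit lens)
    logits = model.lm_head(model.transformer.ln_f(h[:, -1, :]))
    top_token = tokenizer.decode(logits.argmax(dim=-1))
    print(f"layer {layer_idx:2d} -> {top_token!r}")
```

Activation steering can be sketched under the same assumptions: a difference-of-means vector built from an English contrast pair is added to a mid-layer residual stream while the model generates in another language.

```python
# Difference-of-means steering sketch (again illustrative). The contrast pair,
# layer index, and steering strength are assumed values for demonstration.
def capture(layer_idx, text):
    with torch.no_grad():
        out = model(**tokenizer(text, return_tensors="pt"))
    return out.hidden_states[layer_idx][0, -1, :]

layer = 6  # assumed mid-layer; in practice one would sweep over layers
steer = capture(layer, "I love this movie.") - capture(layer, "I hate this movie.")

def hook(module, inp, out):
    # GPT-2 blocks return a tuple; out[0] is the residual-stream activation
    return (out[0] + 4.0 * steer,) + out[1:]  # 4.0: assumed steering strength

handle = model.transformer.h[layer].register_forward_hook(hook)
gen = model.generate(**tokenizer("Le film était", return_tensors="pt"), max_new_tokens=10)
print(tokenizer.decode(gen[0]))
handle.remove()
```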
Submission Number: 71