How do Llamas process multilingual text? A latent exploration through activation patching

Published: 24 Jun 2024 · Last Modified: 15 Jul 2024 · ICML 2024 MI Workshop Spotlight · CC BY 4.0
Keywords: patchscope, patching, multilingual
TL;DR: We provide evidence for the existence of language-agnostic concept representations within LLMs.
Abstract: A central question in multilingual language modeling is whether large language models (LLMs) develop a universal concept representation, disentangled from specific languages. In this paper, we address this question by analyzing Llama-2's forward pass during a word translation task. We strategically extract latents from a source translation prompt and insert them into the forward pass on a target translation prompt. By doing so, we find that the output language is encoded in the latent at an earlier layer than the concept to be translated. Building on this insight, we conduct two key experiments. First, we demonstrate that we can change the concept without changing the language and vice versa through activation patching alone. Second, we show that patching with the mean over latents across different language pairs does not impair the model's performance in translating the concept. Our results provide evidence for the existence of language-agnostic concept representations within the model.
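The activation-patching setup described in the abstract (cache a latent from a source translation prompt, then overwrite the corresponding latent while running a target translation prompt) can be illustrated with a short sketch. This is not the paper's implementation: the checkpoint name, layer index, prompt wording, and the choice to patch the residual stream at the final token position are all assumptions, and it uses plain HuggingFace forward hooks rather than whatever tooling the authors used.

```python
# Minimal sketch of activation patching between two translation prompts.
# Assumed (not from the paper): model checkpoint, LAYER, prompts, and
# patching the last-token residual-stream latent.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
LAYER = 15                               # assumed layer to patch
device = "cuda" if torch.cuda.is_available() else "cpu"

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16
).to(device)
model.eval()

source_prompt = 'Français: "chat" - English: "'  # hypothetical source prompt
target_prompt = 'Deutsch: "Hund" - English: "'   # hypothetical target prompt

# 1) Run the source prompt and cache the hidden state of the chosen layer
#    at the last token position.
with torch.no_grad():
    src_ids = tok(source_prompt, return_tensors="pt").to(device)
    src_out = model(**src_ids, output_hidden_states=True)
# hidden_states[0] is the embedding output, so layer LAYER's output is at index LAYER + 1.
cached_latent = src_out.hidden_states[LAYER + 1][0, -1, :].clone()

# 2) Re-run on the target prompt, overwriting the same layer's last-token
#    latent with the cached one via a forward hook on that decoder block.
def patch_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden[0, -1, :] = cached_latent  # in-place overwrite of the latent
    return output

handle = model.model.layers[LAYER].register_forward_hook(patch_hook)
with torch.no_grad():
    tgt_ids = tok(target_prompt, return_tensors="pt").to(device)
    patched = model(**tgt_ids)
handle.remove()

# Inspect which token the patched forward pass now favors.
next_token = tok.decode(patched.logits[0, -1].argmax().item())
print("Patched continuation:", next_token)
```

Sweeping LAYER in such a sketch is what would reveal the ordering reported in the abstract: patching at earlier layers tends to carry over the output language, while patching at later layers carries over the concept being translated.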
Submission Number: 36