TL;DR: Explaining in-context learning by finding circuits in terms of sparse autoencoder latents, on much larger models than have been studied before.
Abstract: Sparse autoencoders (SAEs) are a popular tool for interpreting large language model activations, but their utility in addressing open questions in interpretability remains unclear. In this work, we demonstrate their effectiveness by using SAEs
to deepen our understanding of the mechanism behind in-context learning (ICL). We identify abstract SAE features that (i) encode the model's knowledge of which task to execute and (ii) have latent vectors that causally induce the task zero-shot.
This aligns with prior work showing that ICL is mediated by task vectors. We further demonstrate that these task vectors are well approximated by a sparse sum of SAE latents, including these task-execution features. To explore the ICL mechanism, we scale the sparse feature circuits methodology of Marks et al. (2024) to the larger Gemma 1 2B model and the more complex setting of ICL. Through circuit finding, we discover task-detecting features: SAE latents that activate earlier in the prompt and detect when the task has already been performed there. They are causally linked to the task-execution features through the attention and MLP sublayers.
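To make the "sparse sum of SAE latents" idea concrete, here is a minimal, self-contained sketch. It assumes a toy SAE with a linear encoder/decoder and approximates a task vector by keeping only its top-k most active latents; the SparseAutoencoder class, the sparse_task_vector helper, the dimensions, and the top-k selection rule are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    """Toy SAE: ReLU encoder gives non-negative latents, linear decoder maps back."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(x))

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.dec(z)


def sparse_task_vector(task_vector: torch.Tensor, sae: SparseAutoencoder, k: int = 10):
    """Approximate a task vector as a sparse sum of SAE latent directions.

    Encodes the vector, keeps only the k strongest latents, and reconstructs
    from them. Returns the sparse reconstruction and the retained latent indices.
    (The paper's Task Vector Cleaning algorithm selects latents differently;
    top-k is just a stand-in here.)
    """
    z = sae.encode(task_vector)              # latent activations
    top_vals, top_idx = torch.topk(z, k)     # keep the k strongest latents
    z_sparse = torch.zeros_like(z)
    z_sparse[top_idx] = top_vals
    return sae.decode(z_sparse), top_idx


if __name__ == "__main__":
    d_model, d_sae = 256, 4096               # toy sizes, not Gemma's
    sae = SparseAutoencoder(d_model, d_sae)
    task_vector = torch.randn(d_model)       # stand-in for a real ICL task vector
    approx, latents = sparse_task_vector(task_vector, sae, k=10)
    # To induce the task zero-shot, one would add `approx` (suitably scaled) to the
    # residual stream at the final token position of a zero-shot prompt.
    print(latents.tolist())
```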
Lay Summary: Large language models like ChatGPT can learn new tasks just from seeing a few examples in their input, without any additional training. For instance, if you show them "hot → cold, big → small" and then ask "fast →", they'll correctly respond "slow." This ability, called in-context learning, is remarkable but poorly understood. We don't know how these models recognize what task they're supposed to perform or how they execute it internally.
We used a recently developed tool called sparse autoencoders (SAEs) to peer inside AI models and map out exactly how in-context learning works. SAEs help scientists identify meaningful patterns in the complex neural activity of AI systems. Using SAEs on Google's Gemma model, we discovered two key types of neural patterns working together: some that detect what task is being demonstrated (like recognizing antonym examples) and others that execute the task (like generating opposite words). We also developed new methods, including an algorithm called Task Vector Cleaning, to isolate these important patterns and trace how information flows between them.
This work demonstrates that SAEs can successfully reveal the mechanisms behind complex AI behaviors, not just simple ones. Understanding how AI models process examples and learn tasks is crucial for making them safer and more reliable. Our investigation of in-context learning provides a foundation for better interpreting AI behavior, detecting potential failures, and designing more robust systems. As AI becomes more powerful and widespread, having these kinds of analysis tools becomes increasingly important for ensuring these systems work as intended.
Primary Area: Deep Learning->Large Language Models
Keywords: SAE, ICL, SFC, Interpretability, Gemma, LLM, Mechanistic Interpretability, Sparse Autoencoders, Circuits
Submission Number: 7788