Keywords: SAE, ICL, SFC, Interpretability, Gemma, LLM
TL;DR: Explaining ICL using SAE features through SFC.
Abstract: Sparse autoencoders (SAEs) are a popular tool for interpreting large language
model activations, but their utility in addressing open questions in interpretability
remains unclear. In this work, we demonstrate their effectiveness by using SAEs
to deepen our understanding of the mechanism behind in-context learning (ICL).
We identify abstract SAE features that encode the model’s knowledge of which
task to execute and whose latent vectors causally induce the task zero-shot. This
aligns with prior work showing that ICL is mediated by task vectors. We further
demonstrate that these task vectors are well approximated by a sparse sum of SAE
latents, including these task-execution features. To explore the ICL mechanism,
we adapt the sparse feature circuits methodology of Marks et al. (2024) to the much larger Gemma-1 2B model, which has 30 times as many parameters, and to the more complex task of ICL. Through circuit finding, we discover task-detecting features whose corresponding SAE latents activate earlier in the prompt and detect when a task has been performed. These features are causally linked to the task-executing features through attention layers and MLPs.
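To make the "sparse sum of SAE latents" claim concrete, the following is a minimal illustrative sketch, not the paper's code: it assumes a trained ReLU sparse autoencoder with encoder/decoder weights `W_enc`, `W_dec` and biases `b_enc`, `b_dec` (all names and sizes here are hypothetical, and random stand-ins are used so the snippet runs), and it approximates a task vector by reconstructing from only the top-k most active latents.

```python
# Illustrative sketch only: approximating a task vector as a sparse sum of
# SAE latents. All parameters below are random stand-ins, not trained weights.
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae, top_k = 512, 4096, 20  # hypothetical dimensions

# Stand-ins for a trained ReLU SAE's parameters.
W_enc = rng.normal(size=(d_model, d_sae)) / np.sqrt(d_model)
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) / np.sqrt(d_sae)
b_dec = np.zeros(d_model)

# Placeholder for a task vector extracted from the residual stream.
task_vector = rng.normal(size=d_model)

# Encode into sparse latent activations (standard ReLU SAE form).
acts = np.maximum((task_vector - b_dec) @ W_enc + b_enc, 0.0)

# Keep only the top-k most active latents: the "sparse sum" support set.
top = np.argsort(acts)[-top_k:]
sparse_acts = np.zeros_like(acts)
sparse_acts[top] = acts[top]

# Reconstruct: a weighted sum of the k selected decoder directions.
approx = sparse_acts @ W_dec + b_dec

# How well the sparse sum approximates the original task vector.
cos = approx @ task_vector / (np.linalg.norm(approx) * np.linalg.norm(task_vector) + 1e-8)
print(f"cosine similarity of sparse approximation ({top_k} latents): {cos:.3f}")
```

With a trained SAE and a real task vector in place of the random stand-ins, the cosine similarity would indicate how faithfully a small set of latents captures the task direction; with the random placeholders above, the printed value is meaningless and the snippet only demonstrates the computation.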
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11395