Finding Neurons in a Haystack: Case Studies with Sparse Probing

Published: 01 Nov 2023, Last Modified: 01 Nov 2023. Accepted by TMLR.
Abstract: Despite rapid adoption and deployment of large language models (LLMs), the internal computations of these models remain opaque and poorly understood. In this work, we seek to understand how high-level human-interpretable features are represented within the internal neuron activations of LLMs. We train $k$-sparse linear classifiers (probes) on these internal activations to predict the presence of features in the input; by varying the value of $k$ we study the sparsity of learned representations and how this varies with model scale. With $k=1$, we localize individual neurons that are highly relevant for a particular feature and perform a number of case studies to illustrate general properties of LLMs. In particular, we show that early layers make use of sparse combinations of neurons to represent many features in superposition, that middle layers have seemingly dedicated neurons to represent higher-level contextual features, and that increasing scale causes representational sparsity to increase on average, but there are multiple types of scaling dynamics. In all, we probe for over 100 unique features comprising 10 different categories in 7 different models spanning 70 million to 6.9 billion parameters.
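The core method in the abstract, fitting a $k$-sparse linear probe on internal activations to predict a feature, can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the paper's implementation: the activations are random stand-ins, the neuron ranking uses a simple class-conditional mean-difference heuristic, and the probe is a logistic regression fit by plain gradient descent; the paper compares several ranking and probing methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for LLM neuron activations: n tokens x d neurons,
# with a binary feature label per token (hypothetical data, for illustration).
n, d, k = 2000, 256, 4
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :3] += 1.5 * y[:, None]  # plant the feature in the first 3 neurons

# Rank neurons by absolute difference of class-conditional means
# (one simple ranking heuristic; other scores work too).
scores = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))
top_k = np.sort(np.argsort(scores)[-k:])

# k-sparse probe: a logistic classifier restricted to the top-k neurons,
# fit here with plain gradient descent on the logistic loss.
Xk = X[:, top_k]
w, b = np.zeros(k), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xk @ w + b)))  # predicted probabilities
    w -= 0.5 * (Xk.T @ (p - y) / n)          # gradient step on weights
    b -= 0.5 * (p - y).mean()                # gradient step on bias

acc = ((Xk @ w + b > 0).astype(int) == y).mean()
print("selected neurons:", top_k.tolist())
print(f"train accuracy: {acc:.2f}")
```

With $k=1$ the same procedure localizes a single neuron for the feature; sweeping $k$ traces out how distributed the representation is, which is how the paper studies sparsity across layers and model scales.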
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Thank you all for the helpful comments. We have made the following revisions in response to Reviewer XC3T:
* Clarified the Transformer equations: added equation numbers, broke out the overall layer update rule including MHA and layer norm, and made more explicit reference to the nonlinearities. This addresses point 7.
* In the related work section, we cite [1] and [2] and discuss the differences between vision models and LLMs, speaking to points 2 and 3 and summarizing our first rebuttal comment.
* In section 5.1, we further emphasize some of the appendix results which address point 3.
* In section 3.2, we add a pointer to our comparison of methods in B.5, addressing point 9.

We believe the remaining concerns were addressed in the rebuttal or by the sections and experiments referenced there. Additionally, we made some cosmetic edits for the camera-ready version.
Assigned Action Editor: ~Yingnian_Wu1
Submission Number: 1232