Submission Type: Short paper (4 pages)
Keywords: mechanistic interpretability, grandmother cells, foundation models
TL;DR: We propose measures of neuronal saliency and selectivity for concepts to identify interpretable neurons in foundation models. Applying them to TabPFN, a tabular foundation model, we provide the first evidence of interpretable neurons in such models.
Abstract: Foundation models are powerful yet often opaque in their decision-making. A topic of continued interest in both neuroscience and artificial intelligence is whether some neurons behave like "grandmother cells", i.e., neurons that are inherently interpretable because they respond exclusively to single concepts. In this work, we propose two information-theoretic measures that quantify a neuron's saliency and selectivity for single concepts. We apply these measures to the representations of TabPFN, a tabular foundation model, and perform a simple search across neuron-concept pairs to find the most salient and selective pair. Our analysis provides the first evidence that some neurons in such models show moderate, statistically significant saliency and selectivity for high-level concepts. These findings suggest that interpretable neurons can emerge naturally and that they can, in some cases, be identified without resorting to more complex interpretability techniques.
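The abstract does not define the two measures or the search procedure in detail; the sketch below illustrates one plausible instantiation, using mutual information between a neuron's activation and a concept label as a saliency proxy and the margin over all other concepts as a selectivity proxy, followed by a brute-force search over neuron-concept pairs. All data and variable names are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's exact measures): score every
# neuron-concept pair and return the most salient and selective one.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)

# Hypothetical data: hidden activations of shape (n_samples, n_neurons) extracted
# from a model, and binary indicator labels for several high-level concepts.
n_samples, n_neurons, n_concepts = 500, 64, 5
activations = rng.normal(size=(n_samples, n_neurons))
concept_labels = rng.integers(0, 2, size=(n_samples, n_concepts))

# Saliency proxy: estimated mutual information I(neuron; concept) for every pair.
mi = np.zeros((n_neurons, n_concepts))
for c in range(n_concepts):
    mi[:, c] = mutual_info_classif(activations, concept_labels[:, c], random_state=0)

# Selectivity proxy: how much a neuron's MI for one concept exceeds its best MI
# for any other concept (a neuron responding to many concepts scores low).
selectivity = np.zeros_like(mi)
for c in range(n_concepts):
    others = np.delete(mi, c, axis=1)
    selectivity[:, c] = mi[:, c] - others.max(axis=1)

# Simple search: rank neuron-concept pairs by combined saliency and selectivity.
score = mi + selectivity
neuron, concept = np.unravel_index(score.argmax(), score.shape)
print(f"best pair: neuron {neuron}, concept {concept}, "
      f"saliency={mi[neuron, concept]:.3f}, selectivity={selectivity[neuron, concept]:.3f}")
```

In practice, the activations would come from a layer of TabPFN evaluated on a labeled tabular dataset, and the statistical significance of the top pair would be assessed, e.g., with a permutation test on the concept labels.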
Submission Number: 5