Rethinking Crowd-Sourced Evaluation of Neuron Explanations

Published: 30 Sept 2025, Last Modified: 17 Nov 2025 · Mech Interp Workshop (NeurIPS 2025) Poster · CC BY 4.0
Keywords: Automated interpretability, Benchmarking interpretability, Vision transformers
TL;DR: We conduct a large-scale crowd-sourced comparison of automated interpretability methods for vision models and develop techniques that make such human studies far more efficient.
Abstract: Interpreting individual neurons or directions in activation space is important for mechanistic interpretability. Numerous automated interpretability methods have been proposed to generate such explanations, but it remains unclear how reliable these explanations are and which methods produce the most accurate descriptions. While crowd-sourced evaluations are commonly used, existing pipelines are noisy, costly, and typically assess only the highest-activating inputs, leading to unreliable results. In this paper, we introduce two techniques to enable cost-effective and accurate crowd-sourced evaluation of automated interpretability methods beyond top-activating inputs. First, we propose Model-Guided Importance Sampling (MG-IS) to select the most informative inputs to show human raters. In our experiments, we show this reduces the number of inputs needed to reach the same evaluation accuracy by $\sim13\times$. Second, we address label noise in crowd-sourced ratings through Bayesian Rating Aggregation (BRAgg), which reduces the number of ratings per input required to overcome noise by $\sim3\times$. Together, these techniques reduce the evaluation cost by $\sim40\times$, making large-scale evaluation feasible. Finally, we use our methods to conduct a large-scale crowd-sourced study comparing recent automated interpretability methods for vision networks.
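The abstract names MG-IS and BRAgg without describing their internals, so the following is only a generic illustration of the two underlying ideas: importance-weighted selection of which inputs to show raters, and Bayesian aggregation of noisy binary ratings. The informativeness scores, rating data, and Beta-Bernoulli prior below are all hypothetical assumptions for the sketch, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Importance sampling of inputs to rate (generic sketch; not the paper's MG-IS) ---
# Assume a model assigns each candidate input a score for how informative a human
# rating of it would be for judging a neuron explanation (hypothetical scores here).
informativeness = rng.random(10_000)             # hypothetical per-input scores
probs = informativeness / informativeness.sum()  # sampling distribution over inputs
sampled = rng.choice(len(probs), size=100, replace=False, p=probs)

# --- Bayesian aggregation of noisy binary ratings (generic Beta-Bernoulli sketch; not BRAgg) ---
# Each sampled input gets a few crowd ratings (1 = "explanation matches", 0 = "does not").
ratings = rng.integers(0, 2, size=(100, 3))      # hypothetical ratings, 3 raters per input
alpha0, beta0 = 1.0, 1.0                         # uniform Beta(1, 1) prior
alpha = alpha0 + ratings.sum(axis=1)             # prior + observed "match" votes
beta = beta0 + (1 - ratings).sum(axis=1)         # prior + observed "no match" votes
posterior_mean = alpha / (alpha + beta)          # per-input posterior probability of a match

print(posterior_mean[:5])
```

The reported savings compose multiplicatively: roughly $13\times$ fewer inputs and $3\times$ fewer ratings per input yield the stated $\sim40\times$ overall reduction in evaluation cost.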
Submission Number: 245