Release Opt Out: No, I don't wish to opt out of paper release. My paper should be released.
Keywords: interpretability, explainability, disentanglement, LLM, sparse autoencoders
Abstract: Sparse Autoencoders (SAEs) are an interpretability technique that aims to decompose neural network activations into interpretable units. However, a major bottleneck for SAE development has been the lack of high-quality performance metrics; prior work has largely relied on unsupervised proxies. In this work, we introduce a family of evaluations based on SHIFT, a downstream task from Marks et al. that measures an SAE's ability to disentangle and reduce spurious correlations. To create an automated evaluation, we extend SHIFT by replacing its human judgment step with LLMs. Additionally, we introduce the Targeted Probe Perturbation (TPP) metric, which quantifies an SAE's ability to disentangle similar concepts, effectively scaling SHIFT to a wider range of datasets. We apply both SHIFT and TPP to multiple open-source models and demonstrate that these metrics effectively differentiate between SAEs trained with different hyperparameters and architectures.
Submission Number: 48