TL;DR: Current sparse autoencoder architectures face a fundamental limitation in sparse inference that can be overcome with more expressive encoders
Abstract: A recent line of work has shown promise in using sparse autoencoders (SAEs) to uncover interpretable features in neural network representations. However, the simple linear-nonlinear encoding mechanism in SAEs limits their ability to perform accurate sparse inference. Using compressed sensing theory, we prove that an SAE encoder is inherently insufficient for accurate sparse inference, even in solvable cases. We then decouple the encoding and decoding processes to empirically explore conditions under which more sophisticated sparse inference methods outperform traditional SAE encoders. Our results reveal substantial gains in correctly inferring sparse codes for minimal increases in compute. We demonstrate that this generalises to SAEs applied to large language models, where more expressive encoders achieve greater interpretability. This work opens new avenues for understanding neural network representations and analysing large language model activations.
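For concreteness, the contrast the abstract draws can be written out as follows; the notation here is illustrative and not taken from the paper. A standard SAE computes its code with a single affine map and pointwise nonlinearity, whereas accurate sparse inference generally requires (approximately) solving an optimisation problem over the whole dictionary:

```latex
% Illustrative notation (not the paper's); requires amsmath.
% x: an activation vector, D: the learned dictionary (decoder),
% W_enc, b_enc: SAE encoder parameters, lambda: a sparsity penalty.
\[
  \hat{z}_{\mathrm{SAE}}(x) \;=\; \mathrm{ReLU}\!\left(W_{\mathrm{enc}}\, x + b_{\mathrm{enc}}\right),
  \qquad
  z^{\star}(x) \;\in\; \arg\min_{z \ge 0}\;
  \tfrac{1}{2}\,\lVert x - D z \rVert_2^2 \;+\; \lambda \lVert z \rVert_1 .
\]
```

The single linear-nonlinear step on the left cannot in general reproduce the minimiser on the right, whose value at each coordinate depends jointly on all dictionary atoms; this is the gap that more expressive encoders are meant to close.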
Lay Summary: Understanding how neural networks work internally is crucial as they're increasingly used to make important decisions. Scientists use tools called sparse autoencoders (SAEs) to extract interpretable features from these complex models, but there's a fundamental problem: SAEs use overly simple encoding methods that can't recover the best possible sparse representations.
This paper proves mathematically that SAEs have an inherent limitation: they cannot achieve optimal sparse inference even when it's theoretically possible. The authors show that more sophisticated encoding methods, such as multilayer perceptrons, significantly outperform traditional SAE encoders while using only slightly more computation (a rough code sketch of this comparison follows the summary).
When tested on large language models like GPT-2, these better encoding methods actually produced more interpretable features than simpler approaches, contradicting the common belief that simpler methods are necessary for interpretability. This work opens new possibilities for better understanding how neural networks represent information internally.
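A minimal sketch of the kind of comparison described above, assuming a PyTorch setting; class names, sizes, and the specific MLP shape are illustrative rather than the authors' implementation. Both encoders feed the same linear decoder (dictionary), so only the encoding step changes:

```python
import torch
import torch.nn as nn


class LinearReLUEncoder(nn.Module):
    """Standard SAE encoder: one affine map followed by a ReLU."""

    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_dict)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.enc(x))


class MLPEncoder(nn.Module):
    """More expressive encoder: a small MLP producing the same-sized sparse code."""

    def __init__(self, d_model: int, d_dict: int, d_hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, d_dict),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class DecoupledAutoencoder(nn.Module):
    """Encoder and decoder are decoupled: any encoder can drive the same linear dictionary."""

    def __init__(self, encoder: nn.Module, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = encoder
        self.decoder = nn.Linear(d_dict, d_model, bias=False)  # linear dictionary

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)           # sparse code
        return self.decoder(z), z     # reconstruction and code


# Example: swap encoders while keeping the decoder's form fixed.
d_model, d_dict = 768, 768 * 8        # e.g. a GPT-2-sized residual stream, 8x overcomplete
x = torch.randn(4, d_model)           # a batch of activations
baseline = DecoupledAutoencoder(LinearReLUEncoder(d_model, d_dict), d_model, d_dict)
expressive = DecoupledAutoencoder(MLPEncoder(d_model, d_dict), d_model, d_dict)
x_hat, z = expressive(x)
```

Because the decoder stays a plain linear dictionary in both cases, any gain from the MLP encoder reflects better inference of the sparse code rather than a more powerful reconstruction model.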
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: sparse autoencoders, large language models, dictionary learning
Submission Number: 5546