Sparse Autoencoders, Again?

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Is there really much more to say about sparse autoencoders (SAEs)? Autoencoders in general, and SAEs in particular, represent deep architectures that are capable of modeling low-dimensional latent structure in data. Such structure could reflect, among other things, correlation patterns in large language model activations, or complex natural image manifolds. And yet despite the wide-ranging applicability, there have been relatively few changes to SAEs beyond the original recipe from decades ago, namely, standard deep encoder/decoder layers trained with a classical/deterministic sparse regularizer applied within the latent space. One possible exception is the variational autoencoder (VAE), which adopts a stochastic encoder module capable of producing sparse representations when applied to manifold data. In this work we formalize underappreciated weaknesses with both canonical SAEs, as well as analogous VAEs applied to similar tasks, and propose a hybrid alternative model that circumvents these prior limitations. In terms of theoretical support, we prove that global minima of our proposed model recover certain forms of structured data spread across a union of manifolds. Meanwhile, empirical evaluations on synthetic and real-world datasets substantiate the efficacy of our approach in accurately estimating underlying manifold dimensions and producing sparser latent representations without compromising reconstruction error. In general, we are able to exceed the performance of equivalent-capacity SAEs and VAEs, as well as recent diffusion models where applicable, within domains such as images and language model activation patterns.
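The following is a minimal sketch, in PyTorch, of the canonical SAE recipe the abstract refers to: deterministic encoder/decoder layers trained with a classical sparsity regularizer (here an L1 penalty) on the latent code. The dimensions, the sparsity weight `lam`, and the single-layer architecture are illustrative assumptions, not details taken from the paper.

```python
# Minimal canonical sparse autoencoder: deterministic encoder/decoder with an
# L1 sparsity penalty on the latent representation. All sizes are placeholders.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in: int = 512, d_latent: int = 2048):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, d_latent), nn.ReLU())
        self.decoder = nn.Linear(d_latent, d_in)

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)       # latent code, encouraged to be sparse
        x_hat = self.decoder(z)   # reconstruction of the input
        return x_hat, z

def sae_loss(x, x_hat, z, lam: float = 1e-3):
    # Reconstruction error plus a deterministic sparsity regularizer on z.
    recon = ((x - x_hat) ** 2).sum(dim=-1).mean()
    sparsity = z.abs().sum(dim=-1).mean()
    return recon + lam * sparsity

# One optimization step on a placeholder batch (e.g. LLM activations).
model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(32, 512)
x_hat, z = model(x)
loss = sae_loss(x, x_hat, z)
loss.backward()
opt.step()
```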
Lay Summary: Sparse autoencoders (SAEs) are a common deep neural network architecture capable of modeling low-dimensional latent structure in data. Such structure could reflect, among other things, correlation patterns in large language model activations, or complex natural image manifolds. And yet despite wide-ranging applicability spanning decades, there have been relatively few changes to the original SAE design, which rests on three basic components: a deterministic encoder network that maps data samples to a latent representation, a deterministic decoder network that reconstructs the original data samples, and a training loss with sparsity-based regularization. The latter penalizes both the reconstruction error and the complexity of the latent representations, pushing many elements towards zero to achieve the eponymous sparsity. In this work we explore an alternative SAE design that introduces a stochastic encoder network with a novel gating mechanism, bringing notable benefits such as fewer hyperparameters and a smoother training landscape with fewer of the local minima that complicate optimizing an otherwise complex loss surface. Both theoretical insights and empirical testing on real-world image and language-model data support the efficacy of this approach.
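For contrast with the deterministic encoder above, the sketch below shows a generic VAE-style stochastic encoder with the standard reparameterization trick and KL regularizer, the kind of stochastic module the abstract says can yield sparse representations on manifold data. This is only a baseline illustration under assumed dimensions; the paper's specific gating mechanism is not reproduced here.

```python
# Generic stochastic (VAE-style) encoder: the encoder outputs a mean and
# log-variance, and latents are sampled via reparameterization. Sizes are
# placeholders, and this does NOT implement the paper's gating mechanism.
import torch
import torch.nn as nn

class StochasticEncoder(nn.Module):
    def __init__(self, d_in: int = 512, d_latent: int = 64):
        super().__init__()
        self.mu = nn.Linear(d_in, d_latent)
        self.logvar = nn.Linear(d_in, d_latent)

    def forward(self, x: torch.Tensor):
        mu, logvar = self.mu(x), self.logvar(x)
        eps = torch.randn_like(mu)
        z = mu + eps * torch.exp(0.5 * logvar)   # reparameterized latent sample
        return z, mu, logvar

def kl_to_standard_normal(mu, logvar):
    # KL divergence between the encoder distribution and a standard normal
    # prior; on manifold data this term can push unused latent dimensions
    # toward the prior, yielding effectively sparse (low-dimensional) codes.
    return 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1).mean()
```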
Link To Code: https://github.com/vegetablest-dog/VAEase
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: autoencoders, sparse representations, variational autoencoders, low-dimensional manifolds
Submission Number: 9903