Keywords: Sparse Autoencoders, Foundational work
TL;DR: We design equivariant sparse autoencoders that adapt to the equivariance that input transformations induce in model activations, and show that the features they learn outperform regular SAE features on probing tasks.
Abstract: Adapting sparse autoencoders (SAEs) to domains beyond language, such as scientific data with group symmetries, introduces challenges that can hinder their effectiveness. We show that incorporating such group symmetries into the SAEs yields features that are more useful in downstream tasks. More specifically, we train autoencoders on synthetic images and find that a single matrix can explain how their activations transform as the images are rotated. Building on this, we develop *adaptively equivariant SAEs* whose degree of equivariance adjusts to the base model's. These adaptive SAEs discover features that lead to superior probing performance compared to regular SAEs, demonstrating the value of incorporating symmetries into mechanistic interpretability tools.
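To make the "single matrix" claim concrete, here is a minimal sketch of how one might test it: fit one linear map from the activations of original images to the activations of their rotated counterparts, then measure how well that map explains the transformation. This is an illustrative assumption of the procedure, not the paper's actual code; the function names (`fit_transform_matrix`, `equivariance_error`) and the synthetic stand-in data are hypothetical.

```python
import numpy as np

def fit_transform_matrix(acts, acts_rot):
    """Fit a single matrix R such that acts_rot ~= acts @ R.T,
    i.e. one linear map explaining how activations transform
    when the input images are rotated.

    acts, acts_rot: (n_samples, d) arrays of base-model activations
    for the original and rotated images, respectively.
    """
    # Least-squares solution of acts @ X = acts_rot; X is R transposed.
    R_T, _residuals, _rank, _sv = np.linalg.lstsq(acts, acts_rot, rcond=None)
    return R_T.T  # (d, d) matrix acting on activation vectors

def equivariance_error(acts, acts_rot, R):
    """Relative error of the fitted linear map; values near zero suggest
    the activations transform (approximately) as a single matrix."""
    pred = acts @ R.T
    return np.linalg.norm(pred - acts_rot) / np.linalg.norm(acts_rot)

# Synthetic stand-ins for real model activations (hypothetical data):
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))
true_R = rng.normal(size=(64, 64)) / 8.0
acts_rot = acts @ true_R.T + 0.01 * rng.normal(size=(1000, 64))

R = fit_transform_matrix(acts, acts_rot)
print(f"relative fit error: {equivariance_error(acts, acts_rot, R):.4f}")
```

In this sketch, a low relative fit error on held-out activation pairs would indicate approximately linear equivariance of the kind the abstract describes; the adaptive SAE idea then amounts to letting the SAE's equivariance constraint track how well such a map fits.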
Submission Number: 58