Keywords: Sparse Autoencoders, Circuit analysis, Other
Other Keywords: Compact proofs; Crosscoders
TL;DR: We apply compact proofs to crosscoders and use them to derive a measure of interaction between crosscoder features.
Abstract: Dictionary learning methods like Sparse Autoencoders (SAEs) and crosscoders attempt to explain a model by decomposing its activations into independent features. Interactions between features hence induce errors in the reconstruction. We formalize this intuition via compact proofs and make four contributions. First, we show how, \textit{in principle}, a compact proof of model performance can be constructed using a crosscoder. Second, we show that an error term arising in this proof can naturally be interpreted as a measure of interaction between crosscoder features, and we provide an explicit expression for this interaction term in the Multi-Layer Perceptron (MLP) layers. We then provide two applications of this new interaction measure. Third, we show that the interaction term itself can be used as a differentiable loss penalty. Applying this penalty, we obtain ``computationally sparse'' crosscoders that retain $60\%$ of MLP performance when keeping only a single feature at each datapoint and neuron, compared to $10\%$ for standard crosscoders. Finally, we show that clustering according to our interaction measure yields semantically meaningful feature clusters. Code is available at the following repository: https://github.com/JasonGross/crosscoders-feature-interactions
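To make the loss-penalty idea concrete, here is a minimal, hypothetical PyTorch sketch of training a crosscoder-style decomposition with an added interaction penalty. It does not reproduce the paper's exact expression for the MLP interaction term; instead, `interaction_penalty` uses an illustrative proxy: the gap between applying a pointwise nonlinearity to the summed per-feature contributions versus summing the nonlinearity applied to each contribution separately. All names (`ToyCrosscoder`, `interaction_penalty`) and coefficients are assumptions for illustration, not the repository's API.

```python
# Hypothetical sketch (not the paper's exact formulation): a crosscoder-style
# sparse decomposition trained with an extra "interaction" penalty measuring
# how far a pointwise MLP nonlinearity is from acting additively across
# per-feature contributions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCrosscoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.enc = nn.Linear(d_model, n_features)
        self.dec = nn.Linear(n_features, d_model, bias=False)

    def forward(self, x):
        f = F.relu(self.enc(x))   # sparse feature activations
        x_hat = self.dec(f)       # reconstruction of the activation vector
        return f, x_hat

def interaction_penalty(model, f, sigma=F.gelu):
    # Per-feature contributions c_i = f_i * (i-th decoder column).
    # If features acted independently through the nonlinearity, we would have
    # sigma(sum_i c_i) == sum_i sigma(c_i); the squared gap is an interaction
    # proxy (an assumption here, standing in for the paper's derived term).
    contribs = f.unsqueeze(-1) * model.dec.weight.T.unsqueeze(0)  # (batch, n_feat, d_model)
    joint = sigma(contribs.sum(dim=1))       # nonlinearity of the summed contributions
    separate = sigma(contribs).sum(dim=1)    # sum of per-feature nonlinearities
    return (joint - separate).pow(2).mean()

# One training step: reconstruction + L1 sparsity + interaction penalty.
model = ToyCrosscoder(d_model=64, n_features=256)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)  # stand-in for cached model activations
f, x_hat = model(x)
loss = F.mse_loss(x_hat, x) + 1e-3 * f.abs().mean() + 1e-2 * interaction_penalty(model, f)
loss.backward()
opt.step()
```

Because the penalty is differentiable in both the encoder and decoder weights, it can be folded into the usual crosscoder training loop alongside the sparsity term, which is what allows "computationally sparse" features to emerge during training rather than via post-hoc pruning.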
Submission Number: 73