Keywords: Mechanistic Interpretability, Sparse Autoencoder, Circuit Discovery
TL;DR: We propose a circuit discovery pipeline built on Sparse Autoencoders and a variant named Transcoders, paired with an efficient attribution strategy named Hierarchical Attribution. We report new findings on the IOI task compared to prior work.
Abstract: Circuit analysis of a given model behavior is a central task in mechanistic interpretability. We introduce a circuit discovery pipeline built on sparse autoencoders (SAEs) and a variant called transcoders. With these two modules inserted into the model, the model's computation graph with respect to the OV and MLP circuits becomes strictly linear, so our method does not require linear approximation to compute the causal effect of each node. This fine-grained graph identifies end-to-end and local circuits accounting for either logits or intermediate features, and a technique called Hierarchical Attribution allows us to apply the pipeline scalably. We analyze three kinds of circuits in GPT2-Small, namely bracket, induction, and Indirect Object Identification (IOI) circuits. Our results reveal new findings underlying existing discoveries.
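To illustrate why an inserted transcoder linearizes the MLP computation, here is a minimal sketch; the Transcoder class, its dimensions, and the ReLU/linear structure are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (an assumption, not the paper's code): a transcoder replaces
# an MLP block with a sparse encoder-decoder trained to predict the MLP's
# output from its input. All names and dimensions are illustrative.
import torch
import torch.nn as nn

class Transcoder(nn.Module):
    def __init__(self, d_model: int = 768, d_features: int = 24576):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_features)  # MLP input -> sparse features
        self.decoder = nn.Linear(d_features, d_model)  # sparse features -> MLP output

    def forward(self, mlp_input: torch.Tensor) -> torch.Tensor:
        acts = torch.relu(self.encoder(mlp_input))  # sparse feature activations
        # Given fixed activations, the output is a sum of per-feature terms,
        # so each feature's causal contribution to downstream nodes can be
        # read off exactly, without any linear approximation.
        return self.decoder(acts)

# Usage: swap the transcoder in for an MLP block and attribute per feature.
tc = Transcoder()
x = torch.randn(1, 768)  # a residual-stream vector
out = tc(x)
per_feature = torch.relu(tc.encoder(x)).unsqueeze(-1) * tc.decoder.weight.T
assert torch.allclose(per_feature.sum(dim=1) + tc.decoder.bias, out, atol=1e-5)
```

The `per_feature` tensor decomposes the module's output into one additive term per sparse feature; this exact decomposition is what makes node-level attribution over the resulting graph tractable.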
Submission Number: 55