Abstract: Sparse autoencoders (SAEs) for transformer-based language models are typically defined independently per layer. In this work we analyze statistical relationships between features in adjacent layers to understand how features evolve through a forward pass. We provide a graph visualization interface for features and their most similar next-layer neighbors, and build communities of related features across layers. We find that a considerable number of features are passed through from the previous layer, that some features can be expressed as quasi-boolean combinations of features from earlier layers, and that some features become more specialized in later layers.
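As a minimal sketch of the kind of adjacent-layer matching the abstract describes (not the authors' code; the function name, the use of Pearson correlation over activations, and the matrix shapes are all assumptions), one could find each feature's most similar next-layer neighbors like this:

```python
import numpy as np

def next_layer_neighbors(acts_l, acts_next, top_k=3, eps=1e-8):
    """Hypothetical helper: correlate SAE feature activations between
    adjacent layers and return, for each layer-l feature, the indices
    and correlations of its top-k most similar layer-(l+1) features.

    acts_l:    (n_tokens, n_features_l)    feature activations at layer l
    acts_next: (n_tokens, n_features_l+1)  feature activations at layer l+1
    """
    # Center and normalize each feature's activation column so that
    # the matrix product below yields Pearson correlations.
    A = acts_l - acts_l.mean(axis=0)
    B = acts_next - acts_next.mean(axis=0)
    A /= np.linalg.norm(A, axis=0, keepdims=True) + eps
    B /= np.linalg.norm(B, axis=0, keepdims=True) + eps

    corr = A.T @ B  # (n_features_l, n_features_next) correlation matrix

    # Most correlated next-layer features per layer-l feature.
    idx = np.argsort(-corr, axis=1)[:, :top_k]
    sims = np.take_along_axis(corr, idx, axis=1)
    return idx, sims
```

The resulting index lists could serve as edges in the feature graph the abstract mentions, with community detection run over that graph; the specific similarity measure used in the paper is not stated here.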
Release Opt Out: No, I don't wish to opt out of paper release. My paper should be released.
Keywords: sae, sparse autoencoder, mechanistic interpretability, interpretability, ai safety
Submission Number: 32