Keywords: Large Language Model; Pre-Training; Mechanistic Interpretability; Training Dynamics; Crosscoder
TL;DR: We track linear interpretable feature evolution across pre-training snapshots using a sparse dictionary learning method called crosscoders.
Abstract: Language models obtain extensive capabilities through pre-training. However, the pre-training dynamics remain a black box. In this work, we track the evolution of linear interpretable features across pre-training snapshots using a sparse dictionary learning method called crosscoders. We find that most features begin to form around a specific point in training, while more complex patterns emerge in later stages. Feature attribution analyses reveal causal connections between feature evolution and downstream performance. Our feature-level observations are highly consistent with previous findings on the Transformer's two-stage learning process, whose stages we term a statistical learning phase and a feature learning phase. Our work opens up the possibility of tracking fine-grained representational progress throughout language model learning dynamics. Our code is available at https://github.com/OpenMOSS/Language-Model-SAEs.
Primary Area: interpretability and explainable AI
Submission Number: 10908