Keywords: Mechanistic Interpretability, Sparse Autoencoders, LLMs, Training Dynamics
TL;DR: Using our novel SAE-Track method, we conduct a detailed mechanistic study of LLM training dynamics by tracking the semantic and geometric evolution of features.
Abstract: Understanding training dynamics and feature evolution is crucial for the mechanistic interpretability of large language models (LLMs). Although sparse autoencoders (SAEs) have been used to identify features within LLMs, a clear picture of how these features evolve during training remains elusive. In this study, we (1) introduce SAE-Track, a novel method for efficiently obtaining a continual series of SAEs, providing the foundation for a mechanistic study that covers (2) the semantic evolution of features, (3) the underlying processes of feature formation, and (4) the directional drift of feature vectors. Our work provides new insights into the dynamics of features in LLMs, enhancing our understanding of training mechanisms and feature evolution.
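The abstract's central tool is the sparse autoencoder, which reconstructs model activations through a sparse, overcomplete bottleneck so individual latent units correspond to interpretable features. As a hedged illustration only (not the authors' SAE-Track implementation, and with hypothetical dimensions), a minimal SAE forward pass and training loss can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: model activation width and an overcomplete SAE dictionary
d_model, d_sae, batch = 16, 64, 8
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    # Encoder: ReLU yields sparse, non-negative feature activations
    f = np.maximum(x @ W_enc + b_enc, 0.0)
    # Decoder: reconstruct the original activation from the sparse code
    x_hat = f @ W_dec + b_dec
    return f, x_hat

# Stand-in for residual-stream activations captured from an LLM
x = rng.normal(size=(batch, d_model))
f, x_hat = sae_forward(x)

recon_loss = np.mean((x - x_hat) ** 2)   # reconstruction term
l1_penalty = np.abs(f).mean()            # sparsity term encouraging few active features
loss = recon_loss + 1e-3 * l1_penalty    # 1e-3 is an illustrative sparsity coefficient
```

Training a separate SAE of this form on activations from successive model checkpoints is the kind of setup SAE-Track builds on; the paper's contribution is obtaining a continual series of such SAEs efficiently so features can be tracked across checkpoints.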
Primary Area: interpretability and explainable AI
Submission Number: 13500