Track: long paper (up to 4 pages)
Keywords: Mechanistic Interpretability, Finetuning, Sparse Autoencoders, Steering
TL;DR: We find that low-rank adapting models after SAE training leads to Pareto improvements in model interpretability at a fraction of the cost of existing methods.
Abstract: Sparse autoencoders (SAEs) decompose language model representations into a sparse set of linear latent vectors. Recent work has improved SAEs using language model gradients, but these techniques are computationally expensive and still increase downstream loss when the SAE reconstructions are used. We address these limitations by taking a fundamentally different approach: we use low-rank adaptation (LoRA) to finetune the *language model itself* around a pretrained SAE. We analyze our method across SAE sparsity, SAE width, LLM size, LoRA rank, and model layer on the Gemma Scope family of SAEs. In these settings, our method reduces the cross-entropy loss gap by 30% to 55% when SAEs are inserted during the forward pass. Compared to end-to-end (e2e) SAEs, our approach achieves the same downstream cross-entropy loss 3$\times$ to 20$\times$ faster on Gemma-2-2B and 2$\times$ to 10$\times$ faster on Llama-3.2-1B. Furthermore, our technique improves downstream metrics and can adapt multiple SAEs at once. Our results demonstrate that improving model interpretability is not limited to post-hoc SAE training; Pareto improvements can also be achieved by directly optimizing the model itself.
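Below is a minimal sketch, not taken from the submission, of the idea described in the abstract: splice a frozen, pretrained SAE into a language model's forward pass and finetune only LoRA adapters so the model adapts around the SAE's reconstruction error. It assumes PyTorch with Hugging Face `transformers` and `peft`; the `SparseAutoencoder` class, model name, hook layer, and LoRA hyperparameters are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model


class SparseAutoencoder(nn.Module):
    """Stand-in for a pretrained SAE (e.g. a Gemma Scope SAE); kept frozen."""
    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_sae)
        self.dec = nn.Linear(d_sae, d_model)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))


model_name = "google/gemma-2-2b"   # assumed model; any causal LM with the same layout works
hook_layer = 12                    # assumed layer at which the SAE was trained

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

sae = SparseAutoencoder(model.config.hidden_size, 16 * model.config.hidden_size)
sae.requires_grad_(False)          # the SAE stays fixed; only the LM adapts

# Replace the chosen layer's residual-stream output with the SAE reconstruction,
# so the rest of the forward pass sees the reconstructed activations.
def splice_sae(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    recon = sae(hidden)
    return (recon,) + output[1:] if isinstance(output, tuple) else recon

model.model.layers[hook_layer].register_forward_hook(splice_sae)

# Add low-rank adapters; only these parameters are trained.
lora_cfg = LoraConfig(r=16, lora_alpha=32,
                      target_modules=["q_proj", "k_proj", "v_proj", "o_proj"])
model = get_peft_model(model, lora_cfg)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

def train_step(batch_ids: torch.Tensor) -> float:
    # Standard next-token cross-entropy, computed with the SAE spliced in,
    # so the adapters learn to reduce the downstream loss gap caused by the SAE.
    out = model(input_ids=batch_ids, labels=batch_ids)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()
```

Because the SAE is frozen and only low-rank adapter weights receive gradients, each step is far cheaper than retraining or e2e-finetuning the SAE with full language model gradients, which is the cost advantage the abstract highlights.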
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 52