TL;DR: We find that low-rank adapting a model after SAE training yields Pareto improvements in model interpretability at a fraction of the cost of existing methods.
Abstract: Sparse autoencoders (SAEs) aim to decompose language model representations into a sparse set of linear latent vectors. Recent works have improved SAEs using language model gradients, but these techniques require many expensive backward passes during training and still cause a significant increase in cross entropy loss when SAE reconstructions are inserted into the model. In this work, we address these limitations by taking a fundamentally different approach: we use low-rank adaptation (LoRA) to finetune the *language model itself* around a previously trained SAE. We analyze our method across SAE sparsity, SAE width, language model size, LoRA rank, and model layer on the Gemma Scope family of SAEs. In these settings, our method reduces the cross entropy loss gap by 30% - 55% when SAEs are inserted during the forward pass. We also find that compared to end-to-end (e2e) SAEs, our approach achieves the same downstream cross entropy loss 3$\times$ to 20$\times$ faster on Gemma-2-2B and 2$\times$ to 10$\times$ faster on Llama-3.2-1B. We further show that our technique improves downstream metrics and can adapt multiple SAEs at once. Our results demonstrate that improving model interpretability is not limited to post-hoc SAE training; Pareto improvements can also be achieved by directly optimizing the model itself.
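To make the recipe in the abstract concrete, below is a minimal sketch of the training setup, not the authors' released code (see the repository link below for that). It assumes a Hugging Face causal LM, the `peft` library for LoRA, a stand-in `SparseAutoencoder` module in place of a pre-trained Gemma Scope SAE, a toy batch in place of a real training corpus, and an arbitrary `layer_idx`; device placement is omitted for brevity.

```python
# Sketch: splice a frozen SAE's reconstruction into one layer of the residual stream,
# then train only LoRA adapters on the language model with the usual next-token loss.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "google/gemma-2-2b"   # any causal LM works for this sketch
layer_idx = 12                     # assumed: the layer the SAE was trained on

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)


class SparseAutoencoder(torch.nn.Module):
    """Stand-in SAE; in practice the weights come from a pre-trained SAE (e.g. Gemma Scope)."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.enc = torch.nn.Linear(d_model, d_sae)
        self.dec = torch.nn.Linear(d_sae, d_model)

    def forward(self, x):
        # Sparse encode then decode; only the reconstruction is passed downstream.
        return self.dec(torch.relu(self.enc(x)))


sae = SparseAutoencoder(model.config.hidden_size, 16_384).to(torch.bfloat16)
for p in sae.parameters():
    p.requires_grad_(False)        # the SAE stays frozen; only the LM is adapted


def splice_sae(module, inputs, output):
    """Forward hook: replace the layer's hidden states with the SAE reconstruction."""
    hidden = output[0] if isinstance(output, tuple) else output
    recon = sae(hidden).to(hidden.dtype)
    return (recon,) + output[1:] if isinstance(output, tuple) else recon


model.model.layers[layer_idx].register_forward_hook(splice_sae)

# Wrap the LM in low-rank adapters; only the LoRA parameters receive gradients.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# One illustrative step on a toy batch; a real run would iterate over a text corpus.
batch = tokenizer(
    ["The quick brown fox jumps over the lazy dog."], return_tensors="pt"
)
out = model(input_ids=batch["input_ids"], labels=batch["input_ids"])
out.loss.backward()                # cross entropy computed with the SAE spliced in
optimizer.step()
optimizer.zero_grad()
```

Because the SAE is frozen and only low-rank adapter weights are updated, each step costs far less than retraining the SAE end-to-end, which is the source of the speedups reported in the abstract.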
Lay Summary: Large language models (LLMs) have demonstrated profound capabilities. To ensure these models are not doing something we humans would disapprove of, researchers are interested in understanding the underlying mechanisms these models use to function. One tool researchers use has recently gained traction because it can translate a model's internal representations into human-interpretable concepts.
Unfortunately, the tool seems to fall short of fully interpreting the model's internal computation: when we restrict our lens to only the interpretable concepts the tool finds, the model performs significantly worse. In other words, we cannot be confident that our interpretation of the model's internal mechanism is faithful.
In this work, we explore how to cheaply train the model to use the interpretable concepts the tool does identify more faithfully, without sacrificing its capabilities. This lets us be more confident than before in our understanding of what the modified model is doing.
Link To Code: https://github.com/matchten/LoRA-Models-for-SAEs
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: Mechanistic Interpretability, Sparse Autoencoders, Finetuning, Steering
Submission Number: 4221