Accelerating Sparse Autoencoder Training via Layer-Wise Transfer Learning in Large Language Models

Published: 21 Sept 2024, Last Modified: 06 Oct 2024, BlackboxNLP 2024, License: CC BY 4.0
Track: Full paper
Keywords: Sparse Autoencoders (SAEs), Transfer Learning, Mechanistic Interpretability, Large Language Models (LLMs), Feature Transfer
TL;DR: Layer-wise transfer learning accelerates SAE training while improving convergence speed and performance, thereby enhancing LLM interpretability.
Abstract: Sparse AutoEncoders (SAEs) have gained popularity as a tool for enhancing the interpretability of Large Language Models (LLMs). However, training SAEs can be computationally intensive, especially as model complexity grows. In this study, we explore the potential of transfer learning to accelerate SAE training by capitalizing on the shared representations found across adjacent layers of LLMs. Our experimental results demonstrate that fine-tuning SAEs initialized from SAEs pre-trained on nearby layers not only maintains but often improves the quality of learned representations, while significantly accelerating convergence. These findings indicate that the strategic reuse of pre-trained SAEs is a promising approach, particularly in settings where computational resources are constrained.
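To make the layer-wise transfer idea concrete, the following is a minimal PyTorch sketch of warm-starting an SAE for one layer from an SAE already trained on an adjacent layer. The class and function names (`SparseAutoencoder`, `transfer_init`) and the dimensions are illustrative assumptions, not taken from the paper's code.

```python
# Minimal sketch: initialize the SAE for layer k+1 from the SAE trained on layer k,
# then fine-tune on layer-(k+1) activations instead of training from scratch.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Simple SAE: encode residual-stream activations into a wider sparse code, then decode."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        z = torch.relu(self.encoder(x))   # sparse code (an L1 penalty on z is applied during training)
        return self.decoder(z), z

def transfer_init(target: SparseAutoencoder, source: SparseAutoencoder) -> SparseAutoencoder:
    """Copy all weights from an SAE trained on a nearby layer as the starting point."""
    target.load_state_dict(source.state_dict())
    return target

# Usage sketch: sae_k is assumed to be trained on layer-k activations;
# sae_k1 is warm-started from it and then fine-tuned on layer-(k+1) activations.
d_model, d_hidden = 768, 768 * 8
sae_k = SparseAutoencoder(d_model, d_hidden)
sae_k1 = transfer_init(SparseAutoencoder(d_model, d_hidden), sae_k)
```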
Submission Number: 88