Investigating Sensitive Directions in GPT-2: An Improved Baseline and Comparative Analysis of SAEs

Published: 10 Oct 2024 · Last Modified: 09 Nov 2024 · SciForDL Poster · CC BY 4.0
TL;DR: We evaluate sparse autoencoders by measuring how much next-token prediction probabilities change when activations are perturbed along specific directions.
Abstract:

Sensitive-directions experiments attempt to understand the internal computation of Language Models (LMs) by measuring how much next-token prediction probabilities change when activations are perturbed along specific directions. We extend this line of work by introducing an improved baseline for perturbation directions. Against this baseline, the KL divergence induced by Sparse Autoencoder (SAE) reconstruction errors is no longer pathologically high. We also show that the feature directions uncovered by SAEs affect model outputs to varying degrees depending on the SAE's sparsity, with feature directions from lower-L0 (sparser) SAEs exerting a greater influence. Additionally, we find that end-to-end SAEs do not exhibit stronger effects on model outputs than traditional SAEs.
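As a rough illustration of the sensitive-directions setup described above, the sketch below perturbs GPT-2's residual stream along a direction and measures the KL divergence of the next-token distribution relative to the clean run. It assumes TransformerLens' HookedTransformer API; the hook point (blocks.6.hook_resid_post), the perturbation scale, and the random stand-in for an SAE feature or reconstruction-error direction are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of a sensitive-directions measurement on GPT-2.
# Assumes TransformerLens; layer, hook point, and scale are illustrative.
import torch
import torch.nn.functional as F
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
tokens = model.to_tokens("The quick brown fox jumps over the lazy dog")
hook_name = "blocks.6.hook_resid_post"  # residual stream after block 6

# Clean forward pass: baseline next-token distribution at the final position.
clean_logits = model(tokens)
clean_logprobs = F.log_softmax(clean_logits[0, -1], dim=-1)

def perturb_along(direction: torch.Tensor, scale: float = 5.0) -> float:
    """Add `scale * direction` (unit-normalized) to the residual stream at the
    final position and return KL(clean || perturbed) over next-token probs."""
    direction = direction / direction.norm()

    def hook(resid, hook):
        resid[0, -1] += scale * direction.to(resid.device, resid.dtype)
        return resid

    perturbed_logits = model.run_with_hooks(tokens, fwd_hooks=[(hook_name, hook)])
    perturbed_logprobs = F.log_softmax(perturbed_logits[0, -1], dim=-1)
    # F.kl_div(input, target) computes KL(target || input) when both are log-probs.
    return F.kl_div(perturbed_logprobs, clean_logprobs,
                    log_target=True, reduction="sum").item()

# Compare a random baseline direction with a stand-in "feature" direction.
# In the actual experiments, the latter would be an SAE feature or
# reconstruction-error direction; here it is just another random vector.
d_model = model.cfg.d_model
print("random baseline KL:", perturb_along(torch.randn(d_model)))
print("stand-in feature KL:", perturb_along(torch.randn(d_model)))
```

In the paper's comparisons, the baseline directions would be drawn from the improved baseline distribution rather than an isotropic Gaussian, and the feature directions would come from the SAEs under study.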

Submission Number: 43