Keywords: Localization; editing; finetuning; truthfulness
Abstract: A basic aspiration for interpretability research in large language models is to "localize" semantically meaningful behaviors to particular components within the LLM. There are various heuristics for finding candidate locations within the LLM.
Once a candidate localization is found, it can be assessed by editing the internal representations at the corresponding location and checking whether this induces model behavior that is consistent with the semantic interpretation of the localization.
The question we address here is: how strong is the evidence provided by such edits?
To assess localization, we want to measure the effect of the optimal intervention at a particular location.
The key new technical tool is a way of adapting LLM alignment techniques to find such optimal localized edits.
With this tool in hand, we give an example where the edit-based evidence for localization appears strong, but where localization clearly fails. Indeed, we find that optimal edits at \emph{random} localizations can be as effective as aligning the full model. In aggregate, our results suggest that merely observing that localized edits induce targeted changes in behavior provides little to no evidence that these locations actually encode the target behavior.
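To make the idea of an "optimal localized edit" concrete, the following is a minimal sketch (not the paper's implementation) of one way to adapt an alignment technique to a single location: the pretrained model is frozen, a single learnable steering vector is added to the residual stream at one chosen layer, and only that vector is trained with a DPO-style preference loss. The model name, layer index, hyperparameters, and the toy preference pair are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder model
LAYER_IDX = 6         # candidate localization: one residual-stream layer
BETA = 0.1            # DPO temperature

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
ref = AutoModelForCausalLM.from_pretrained(MODEL_NAME)  # frozen reference policy
for p in list(model.parameters()) + list(ref.parameters()):
    p.requires_grad_(False)

# The only trainable object: an additive edit applied at the chosen location.
edit = torch.zeros(model.config.hidden_size, requires_grad=True)

def add_edit(module, inputs, output):
    # GPT-2 blocks return a tuple; add the edit to the hidden states.
    return (output[0] + edit,) + output[1:]

hook = model.transformer.h[LAYER_IDX].register_forward_hook(add_edit)

def seq_logprob(m, prompt, completion):
    """Sum of log-probs of the completion tokens given the prompt."""
    ids = tok(prompt + completion, return_tensors="pt").input_ids
    n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
    logps = torch.log_softmax(m(ids).logits[:, :-1], dim=-1)
    token_logps = logps.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    return token_logps[:, n_prompt - 1:].sum()

# Toy preference pair (placeholder); a real run would loop over a dataset.
prompt = "Q: Is the Earth flat? A:"
chosen, rejected = " No, it is roughly spherical.", " Yes, it is flat."

opt = torch.optim.Adam([edit], lr=1e-2)
for step in range(100):
    pi_logratio = seq_logprob(model, prompt, chosen) - seq_logprob(model, prompt, rejected)
    with torch.no_grad():
        ref_logratio = seq_logprob(ref, prompt, chosen) - seq_logprob(ref, prompt, rejected)
    # DPO objective: push the edited model's preference margin above the reference's.
    loss = -F.logsigmoid(BETA * (pi_logratio - ref_logratio))
    opt.zero_grad(); loss.backward(); opt.step()

hook.remove()
```

Under this framing, the abstract's comparison amounts to running the same optimization with LAYER_IDX chosen by a localization heuristic versus chosen at random, and comparing the resulting behavior to full-model alignment.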
Submission Number: 17