InversionView: A General-Purpose Method for Reading Information from Neural Activations

Published: 24 Jun 2024, Last Modified: 31 Jul 2024 · ICML 2024 MI Workshop Oral · CC BY 4.0
Keywords: interpretability, explainability, mechanistic interpretability
TL;DR: We develop a method that reads out information from neural activations.
Abstract: The inner workings of neural networks can be better understood if we can fully decipher the information encoded in neural activations. In this paper, we argue that this information is embodied by the subset of inputs that give rise to similar activations. Computing such subsets is nontrivial as the input space is exponentially large. We propose InversionView, which allows us to practically inspect this subset by sampling from a trained decoder model conditioned on activations. This helps uncover the information content of activation vectors, and facilitates understanding of the algorithms implemented by transformer models. We present four case studies where we investigate models ranging from small transformers to GPT-2. In these studies, we demonstrate the characteristics of our method, show the distinctive advantages it offers, and provide causally verified circuits.
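The core idea (inspecting the subset of inputs that produce similar activations) can be illustrated with a toy sketch. The paper trains a conditional decoder to sample this subset efficiently; the snippet below instead uses naive rejection sampling over random inputs, with a made-up `activation` function, purely to make the notion of an activation's preimage concrete. All names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def activation(x):
    # Toy stand-in for a probed model site: the activation depends only
    # on the sum of the input, so inputs with equal sums are equivalent
    # from this activation's point of view.
    return np.array([x.sum()], dtype=float)

def inversion_view(query_input, n_proposals=5000, eps=0.5):
    """Collect inputs whose activations lie within eps of the query
    activation. InversionView trains a decoder conditioned on the
    activation to sample this set; here we brute-force it instead."""
    a_star = activation(query_input)
    keep = []
    for _ in range(n_proposals):
        x = rng.integers(0, 10, size=len(query_input))
        if np.linalg.norm(activation(x) - a_star) <= eps:
            keep.append(x)
    return keep

query = np.array([3, 4])  # this activation encodes only the sum, 7
preimage = inversion_view(query)
# Every retained input shares exactly the information the activation
# encodes: their sums all equal 7, while the individual entries vary.
assert all(x.sum() == 7 for x in preimage)
```

Reading off what varies versus what stays constant across the sampled preimage is what reveals the information content of the activation; the trained decoder makes this tractable when the input space is too large to enumerate.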
Submission Number: 65