The Importance of Prompt Tuning for Automated Neuron Explanations

NeurIPS 2023 Workshop ATTRIB Submission 38 Authors

Published: 27 Oct 2023, Last Modified: 08 Dec 2023, ATTRIB Poster
Keywords: Interpretability, Mechanistic interpretability, Explainability, Language Models
TL;DR: We study the effect of prompt tuning on automated LLM neuron explanations and propose a reformatted prompt that improves explanation quality and efficiency.
Abstract: Recent advances have greatly increased the capabilities of large language models (LLMs), but our understanding of the models and their safety has not progressed as fast. In this paper we aim to understand LLMs more deeply by studying their individual neurons. We build upon previous work showing that large language models such as GPT-4 can be useful in explaining what each neuron in a language model does. Specifically, we analyze the effect of the prompt used to generate explanations, and show that reformatting the explanation prompt in a more natural way can significantly improve neuron explanation quality and greatly reduce computational cost. We demonstrate the effects of our new prompts in three different ways, incorporating both automated and human evaluations.
Submission Number: 38