Analyzing Key Neurons in Large Language Models

ACL ARR 2024 June Submission 1716 Authors

14 Jun 2024 (modified: 07 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Large Language Models (LLMs) possess vast amounts of knowledge within their parameters, prompting research into methods for locating and editing this knowledge. Previous investigations have primarily focused on fill-in-the-blank tasks and on locating entity-related information (usually single-token facts) in relatively small-scale language models. However, several key questions remain unanswered: (1) How can we effectively locate query-relevant neurons in contemporary autoregressive LLMs, such as LLaMA and Mistral? (2) How can we address the challenge of long-form text generation? (3) Are there localized knowledge regions in LLMs? In this study, we introduce Neuron Attribution-Inverse Cluster Attribution (NA-ICA), a novel architecture-agnostic framework for identifying key neurons in LLMs. NA-ICA enables the examination of long-form answers beyond single tokens by employing the proxy task of multiple-choice question answering. To evaluate the effectiveness of the detected key neurons, we construct two multiple-choice QA datasets spanning diverse domains and languages. Empirical evaluations demonstrate that NA-ICA significantly outperforms baseline methods. Moreover, an analysis of neuron distributions reveals visible localized regions, particularly within different domains. Finally, we demonstrate potential applications of the detected key neurons in knowledge editing and neuron-based prediction.
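
For intuition, the sketch below (not the authors' released code) shows how query-relevant neurons might be scored under a multiple-choice proxy task. It uses gpt2 as a small stand-in model and a simple gradient-times-activation attribution on the logit of the gold option token; the paper's exact attribution rule and its inverse cluster attribution term (which, on one reading of the abstract, down-weights neurons that respond across many query clusters) are assumptions and appear only as comments.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the paper studies LLaMA/Mistral-scale LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

acts = {}  # layer index -> post-GELU MLP activations

def make_hook(layer_idx):
    def hook(module, inputs, output):
        output.retain_grad()        # keep gradients on this intermediate tensor
        acts[layer_idx] = output
    return hook

handles = [blk.mlp.act.register_forward_hook(make_hook(i))
           for i, blk in enumerate(model.transformer.h)]

# A multiple-choice query serving as a proxy for a long-form answer.
prompt = ("Question: Which element has the chemical symbol Fe?\n"
          "A. Gold  B. Iron  C. Lead  D. Tin\nAnswer:")
inputs = tok(prompt, return_tensors="pt")
gold_id = tok(" B", add_special_tokens=False).input_ids[0]

logits = model(**inputs).logits
logits[0, -1, gold_id].backward()   # attribute the gold option's logit

# Neuron attribution: activation * gradient at the final prompt position.
# NA-ICA's inverse cluster attribution would further re-weight these scores
# across clusters of related queries; that step is omitted in this sketch.
scores = {i: (a[0, -1] * a.grad[0, -1]).detach() for i, a in acts.items()}
top_layer = max(scores, key=lambda i: scores[i].max().item())
print("strongest neuron: layer", top_layer, "unit", int(scores[top_layer].argmax()))

for h in handles:
    h.remove()

Ranking these per-neuron scores across many queries of the same domain, and discounting neurons that also rank highly for other domains, would approximate the cluster-level aggregation the abstract describes.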
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: knowledge tracing/discovering/inducing
Contribution Types: Model analysis & interpretability
Languages Studied: English, Chinese, French, Japanese, Arabic, Russian
Submission Number: 1716