On Relation-Specific Neurons in Large Language Models

ACL ARR 2025 May Submission 2728 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: In large language models (LLMs), certain neurons can store distinct pieces of knowledge learned during pretraining. While factual knowledge typically appears as a combination of relations and entities, it remains unclear whether some neurons focus on a relation itself, independent of any entity. We hypothesize that such neurons detect a relation in the input text and guide generation involving that relation. To investigate this, we study the Llama-2 family on a chosen set of relations with a statistics-based method. Our experiments demonstrate the existence of relation-specific neurons. We measure the effect of selectively deactivating candidate neurons specific to relation $r$ on the LLM's ability to handle (1) facts involving relation $r$ and (2) facts involving a different relation $r' \neq r$. With respect to their capacity for encoding relation information, we present evidence for the following three properties of relation-specific neurons. $\textbf{(i) Neuron cumulativity.}$ Multiple neurons jointly contribute to processing facts involving relation $r$, with no single neuron fully encoding a fact in $r$ on its own. $\textbf{(ii) Neuron versatility.}$ Neurons can be shared across multiple closely related as well as less related relations. In addition, some relation neurons transfer across languages. $\textbf{(iii) Neuron interference.}$ Deactivating neurons specific to one relation can improve the LLM's factual recall performance for facts of other relations.
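The deactivation experiment the abstract describes can be sketched in code. The snippet below is a minimal illustration, not the authors' released implementation: it assumes relation neurons are units of the intermediate (post-activation) dimension of Llama-2's MLP blocks, and the checkpoint name, the (layer, neuron) indices in `CANDIDATES`, and the prompts and facts are all hypothetical placeholders. It compares the log-probability of a fact's object token before and after zeroing the candidate neurons via PyTorch forward hooks.

```python
# Minimal sketch (NOT the authors' code) of the deactivation probe described
# in the abstract. Checkpoint, (layer, neuron) indices, and facts below are
# illustrative assumptions, as is the choice to treat post-activation MLP
# units as the "neurons".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

# Hypothetical candidate neurons for relation r, as (layer, neuron) pairs.
CANDIDATES = {(12, 4071), (12, 8821), (17, 301)}

def zeroing_hook(neuron_ids):
    """Forward hook that zeroes the given units of an activation tensor."""
    def hook(module, inputs, output):
        output = output.clone()
        output[..., list(neuron_ids)] = 0.0  # deactivate candidate neurons
        return output
    return hook

def object_logprob(prompt, obj, deactivate=False):
    """Log-probability of the object's first token given the prompt."""
    handles = []
    if deactivate:
        by_layer = {}
        for layer, idx in CANDIDATES:
            by_layer.setdefault(layer, set()).add(idx)
        for layer, neuron_ids in by_layer.items():
            # act_fn's output carries the intermediate ("neuron") dimension,
            # so zeroing it removes those units' contribution to down_proj.
            mod = model.model.layers[layer].mlp.act_fn
            handles.append(mod.register_forward_hook(zeroing_hook(neuron_ids)))
    try:
        with torch.no_grad():
            input_ids = tok(prompt, return_tensors="pt").input_ids
            logits = model(input_ids).logits[0, -1]
        obj_id = tok(obj, add_special_tokens=False).input_ids[0]
        return torch.log_softmax(logits, dim=-1)[obj_id].item()
    finally:
        for h in handles:
            h.remove()

# One fact for relation r (capital-of) and one for a different relation r'.
facts = {"r": ("The capital of France is", " Paris"),
         "r'": ("The official language of France is", " French")}
for name, (prompt, obj) in facts.items():
    base = object_logprob(prompt, obj, deactivate=False)
    ablated = object_logprob(prompt, obj, deactivate=True)
    print(f"relation {name}: log p(object) {base:.3f} -> {ablated:.3f}")
```

Under the abstract's hypothesis, the log-probability should drop markedly for the relation-$r$ fact but much less for the $r'$ fact; per the interference finding, it may even rise for $r'$.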
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: knowledge tracing/discovering/inducing, data influence, hierarchical & concept explanations
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: primarily English, but also including German, Spanish, French, Chinese, and Japanese
Submission Number: 2728