TL;DR: We present a technique called linear relational concepts (LRC) for finding directions in large language model activations that correspond to human-interpretable concepts.
Abstract: Transformer language models (LMs) have been shown to represent concepts as directions in the latent space of hidden activations. However, for any given human-interpretable concept, how can we find its direction in the latent space? We present a technique called linear relational concepts (LRC) for finding concept directions corresponding to human-interpretable concepts by first modeling the relation between subject and object as a linear relational embedding (LRE). While LREs were originally presented mainly as an exercise in understanding model representations, we find that inverting the LRE and using earlier object layers yields a powerful technique for finding concept directions that both perform well as classifiers and causally influence model outputs.
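As a rough illustration of the idea in the abstract, the sketch below fits an LRE as an affine map o ≈ Ws + b from subject activations to object activations and then inverts it with a pseudo-inverse to recover a subject-space concept direction. This is a minimal sketch under assumed names and shapes (`fit_lre`, `concept_direction`, a least-squares fit), not the paper's actual procedure for estimating W and b.

```python
import torch

# Minimal sketch of the LRE-inversion idea. An LRE approximates the
# subject-to-object mapping as an affine map: o ≈ W @ s + b.

def fit_lre(subject_acts: torch.Tensor, object_acts: torch.Tensor):
    """Fit W, b by least squares from subject activations (n, d_s)
    to object activations (n, d_o). (Assumed estimation method.)"""
    n = subject_acts.shape[0]
    ones = torch.ones(n, 1, dtype=subject_acts.dtype, device=subject_acts.device)
    X = torch.cat([subject_acts, ones], dim=1)            # (n, d_s + 1)
    sol = torch.linalg.lstsq(X, object_acts).solution     # (d_s + 1, d_o)
    W, b = sol[:-1].T, sol[-1]                            # W: (d_o, d_s), b: (d_o,)
    return W, b

def concept_direction(W: torch.Tensor, b: torch.Tensor,
                      object_act: torch.Tensor) -> torch.Tensor:
    """Invert the LRE: map an object activation back into subject space
    via the pseudo-inverse of W, and normalize to get a unit direction."""
    direction = torch.linalg.pinv(W) @ (object_act - b)   # (d_s,)
    return direction / direction.norm()
```

The resulting unit vector could then be used, for example, as a linear probe (dot product with subject activations) or as a steering direction added to the residual stream, which is the kind of use the abstract describes.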
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability
Languages Studied: English