Keywords: large language models (LLMs), encoding of human beliefs, representation space
TL;DR: We study how and where personas---defined by distinct sets of human characteristics, values, and beliefs---are encoded in the representation space of LLMs, finding that they diverge substantially only in the final third of the decoder layers.
Abstract: We present a study on how and where personas---defined by distinct sets of human characteristics, values, and beliefs---are encoded in the representation space of large language models (LLMs). Such insights can improve model interpretability and enable more precise control over the generative process.
Using a range of dimension reduction and pattern recognition methods, we first identify the model layers that show the greatest divergence in encoding these representations. We then analyze the activations within a selected layer to examine how specific personas are encoded relative to others, including their shared and distinct embedding spaces.
We find that, across multiple pre-trained decoder-only LLMs, the analyzed personas show large differences in representation space only within the final third of the decoder layers.
Within some of these later layers, we observe overlapping activations for specific ethical perspectives---such as moral nihilism and utilitarianism---suggesting a degree of polysemy. In contrast, political ideologies such as conservatism and liberalism appear to be represented in more distinct regions.
These findings improve our understanding of how LLMs represent information internally and allow for greater control over how specific human traits are expressed in their responses.
Public: Yes
Track: Main-Long
Submission Number: 2