Emergence of Hierarchical Emotion Representations in Large Language Models

Published: 10 Oct 2024, Last Modified: 09 Nov 2024 · SciForDL Poster · CC BY 4.0
Abstract: As large language models (LLMs) increasingly power conversational agents, understanding how they represent, predict, and influence human emotions is crucial for ethical deployment. By analyzing probabilistic dependencies between emotional states in model outputs, we uncover hierarchical structures in LLMs' emotion representations. Our findings show that larger models, such as LLaMA 3.1 (405B parameters), develop more complex hierarchies. We also find that better emotion modeling enhances persuasiveness in synthetic negotiation tasks: LLMs that predict their counterparts' emotions more accurately achieve better outcomes. Additionally, we explore how persona attributes such as gender and socioeconomic status bias emotion recognition, revealing frequent misclassifications for minority personas. This study contributes to both the scientific understanding and ethical considerations of emotion modeling in LLMs.
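The page does not include the authors' code, so the following is only a minimal sketch of what "analyzing probabilistic dependencies between emotional states" could look like in practice. It estimates conditional co-occurrence probabilities between emotion labels assigned to model outputs and extracts a hierarchy with a subsumption heuristic (a parent label almost always accompanies its child, but not vice versa). The `tagged_outputs` data, the 0.8 threshold, and the subsumption rule are all illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch (NOT the authors' method): infer a label hierarchy
# from conditional co-occurrence probabilities of emotion tags.
from collections import Counter
from itertools import combinations

# Hypothetical data: each item is the set of emotion labels assigned
# to one model output.
tagged_outputs = [
    {"joy", "optimism"},
    {"joy", "optimism", "pride"},
    {"sadness", "grief"},
    {"sadness"},
    {"joy"},
]

counts = Counter()       # how many outputs each label appears in
pair_counts = Counter()  # how many outputs each label pair co-occurs in
for labels in tagged_outputs:
    counts.update(labels)
    for a, b in combinations(sorted(labels), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def cond_prob(a: str, b: str) -> float:
    """Estimate P(a | b) from co-occurrence counts."""
    return pair_counts[(a, b)] / counts[b] if counts[b] else 0.0

# Subsumption heuristic: `parent` dominates `child` if the parent nearly
# always accompanies the child, while the reverse does not hold.
# The 0.8 threshold is an arbitrary illustrative choice.
edges = [
    (parent, child)
    for parent in counts
    for child in counts
    if parent != child
    and cond_prob(parent, child) >= 0.8
    and cond_prob(child, parent) < 0.8
]
print(edges)
# e.g. [('joy', 'optimism'), ('joy', 'pride'), ('sadness', 'grief'), ...]
```

On this toy data the heuristic recovers edges such as joy → optimism and sadness → grief; a real analysis would presumably use far more outputs and a more principled dependency measure.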
Style Files: I have used the style files.
Submission Number: 42