Interpretable Emotion Attribution in Social Graphs: A Comparative Analysis of Rule-Based, Transformer, and LLM Models

Published: 15 Mar 2026, Last Modified: 16 Mar 2026 · Oral · CC BY 4.0
Keywords: Explainable AI (XAI), Social Graph Analysis, Emotion Attribution, Interpretable NLP, Dialogue Relation Extraction, Multi-turn Dialogue Understanding
TL;DR: We study interpretable emotion attribution in multi-turn dialogues by modeling emotional relationships between entities as labeled edges in a social graph.
Abstract: Emotion attribution in social graphs requires inferring directed emotional attitudes between entities in complex, multi-turn dialogues. While transformer models dominate the field, they often lack the transparency required for social science applications. We present a systematic comparison of three modeling paradigms for this task: a fully interpretable rule-based system, a fine-tuned RoBERTa-large model, and a few-shot Llama-3-8B. Using the DialogRE dataset, we demonstrate that incorporating a 3-turn conversational context significantly improves attribution accuracy across all paradigms. Crucially, the interpretable rule-based system achieves a competitive F1 score that is statistically indistinguishable from the state-of-the-art RoBERTa model. In contrast, the few-shot large language model performs poorly on this relational emotion attribution task, falling below the rule-based baseline. These results show that interpretability need not come at the cost of performance in social emotion analysis.
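To make the task setup concrete, the sketch below illustrates the two structures the abstract describes: a social graph whose directed, emotion-labeled edges connect entities, and a k-turn context window (k=3 in the paper) over a multi-turn dialogue. This is a minimal illustration, not the paper's code; the speaker names, emotion label, and function names are all hypothetical.

```python
# Minimal sketch of the task setup; all names (speakers, labels, helpers)
# are illustrative assumptions, not taken from the paper's implementation.
from collections import defaultdict

# Multi-turn dialogue: each turn is (speaker, utterance).
dialogue = [
    ("A", "Did you hear what B said about me?"),
    ("C", "B was just worried about you."),
    ("A", "Well, I'm grateful B cares that much."),
]

def context_window(turns, idx, k=3):
    """Return up to the k most recent turns ending at index idx
    (the 3-turn conversational context studied in the paper)."""
    return turns[max(0, idx - k + 1): idx + 1]

# Social graph: directed emotional attitudes stored as labeled edges,
# e.g. A --gratitude--> B, inferred from turn 2 and its context.
graph = defaultdict(list)

def attribute_emotion(source, target, label, evidence_turn):
    """Record a directed emotion edge together with its supporting turn."""
    graph[source].append({"target": target,
                          "emotion": label,
                          "evidence": evidence_turn})

# A hypothetical attribution that any of the three models might produce:
attribute_emotion("A", "B", "gratitude", evidence_turn=2)
print(context_window(dialogue, 2))  # the 3-turn context for the evidence turn
print(dict(graph))
```

Any of the three paradigms compared in the paper can be viewed as a predictor that fills in such labeled edges; they differ only in how the prediction is made and how transparently it can be explained.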
Submission Number: 76