Third-Person Appraisal Agent: Simulating Human Emotional Reasoning in Text with Large Language Models
Abstract: Emotional reasoning is essential for improving human-AI interactions, particularly in mental health support and empathetic systems. However, current approaches, which primarily map sensory inputs to fixed emotion labels, fail to capture the intricate relationships between motivations, thoughts, and emotions, limiting their ability to generalize across diverse emotional reasoning tasks. To address this, we propose a novel third-person appraisal agent that simulates human-like emotional reasoning in three phases: Primary Appraisal, Secondary Appraisal, and Reappraisal. In the Primary Appraisal phase, a third-person generator powered by a large language model (LLM) infers emotions based on cognitive appraisal theory. In the Secondary Appraisal phase, an evaluator LLM provides feedback that guides the generator in refining its predictions; the generator then applies counterfactual reasoning to adjust its appraisal process and explore alternative emotional responses. The Reappraisal phase applies reinforced fine-tuning (ReFT) with a reflective actor-critic framework to further improve the model's performance and generalization, learning from reward signals and appraisal trajectories without human annotations. Our approach outperforms baseline LLMs on a range of emotion reasoning tasks, demonstrating superior generalization and interpretability. To the best of our knowledge, this is the first cognition-based architecture designed to enhance emotional reasoning in LLMs, advancing AI toward human-like emotional understanding.
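The three-phase loop described in the abstract can be sketched in Python as follows. This is an illustrative assumption of the control flow only: the rule-based stub functions (`primary_appraisal`, `secondary_appraisal`, `counterfactual_refine`) stand in for the generator and evaluator LLM calls, and none of the names correspond to the authors' actual implementation.

```python
# Hypothetical sketch of the third-person appraisal loop: a generator
# proposes an emotion (Primary Appraisal), an evaluator returns a reward
# and feedback (Secondary Appraisal), and negative feedback triggers a
# counterfactual revision; the collected (appraisal, reward) trajectory
# is the kind of annotation-free signal ReFT could train on (Reappraisal).

EMOTIONS = ["joy", "sadness", "anger", "fear"]

def primary_appraisal(situation):
    """Generator LLM stub: infer an emotion from a third-person view."""
    if "lost" in situation:
        return {"emotion": "sadness", "rationale": "goal-incongruent loss"}
    return {"emotion": "joy", "rationale": "goal-congruent event"}

def secondary_appraisal(situation, appraisal, gold=None):
    """Evaluator LLM stub: score the appraisal and return feedback."""
    reward = 1.0 if gold is None or appraisal["emotion"] == gold else 0.0
    feedback = "accept" if reward > 0 else "reconsider alternative emotions"
    return reward, feedback

def counterfactual_refine(situation, appraisal):
    """Explore an alternative emotional response after negative feedback."""
    alternatives = [e for e in EMOTIONS if e != appraisal["emotion"]]
    return {"emotion": alternatives[0], "rationale": "counterfactual revision"}

def appraisal_trajectory(situation, gold=None, max_steps=3):
    """Run the loop, collecting (appraisal, reward, feedback) triples."""
    trajectory = []
    appraisal = primary_appraisal(situation)
    for _ in range(max_steps):
        reward, feedback = secondary_appraisal(situation, appraisal, gold)
        trajectory.append((appraisal, reward, feedback))
        if reward > 0:
            break
        appraisal = counterfactual_refine(situation, appraisal)
    return trajectory

traj = appraisal_trajectory("She lost her job unexpectedly", gold="sadness")
```

In this toy run the generator's first appraisal already matches the target, so the trajectory terminates after one step; with a wrong first guess, the counterfactual step would supply the alternatives that a reward-driven fine-tuning stage could learn from.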
Paper Type: Long
Research Area: Human-Centered NLP
Research Area Keywords: Third-person appraisal agent; Cognitive appraisal theory; Large language models (LLMs); Prompting; Fine-tuning
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Theory
Languages Studied: Python
Submission Number: 1333