Published: 04 Oct 2024 · License: CC BY 4.0
Emotion reasoning is crucial for achieving human-like emotional understanding in Emotion Recognition in Conversation (ERC). Current ERC datasets provide only emotion-labeled utterances and lack the rich annotations necessary for emotion reasoning. Although Large Language Models (LLMs) show promise in generating rich emotional knowledge, they still struggle to apply this knowledge effectively for emotion reasoning. To address these challenges, we propose a learning framework grounded in cognitive appraisal theory, in which an LLM-powered agent learns emotion reasoning from a third-person perspective; we refer to this agent as the third-person appraisal agent. The framework comprises two phases: self-evaluation and meta-evaluation. In the self-evaluation phase, the agent generates the appraisals needed to infer emotions, using counterfactual thinking to refine them. The meta-evaluation phase uses reflective actor-critic reinforcement learning to train the agent to generate accurate appraisals at test time. The training samples are the appraisals generated during the self-evaluation phase, which eliminates the need for human annotations. By fine-tuning a specialized LLM within this framework, our approach significantly outperforms LLM baselines on ERC tasks, demonstrating improved reasoning and generalization across diverse dialogue datasets. Additionally, we provide interpretable results that clarify the reasoning behind the model's predictions. To the best of our knowledge, this is the first work to apply cognition-based methods to enhance LLMs' emotion-reasoning capabilities, marking a significant step toward human-like emotional understanding in artificial intelligence.
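The two-phase framework described above can be sketched, very loosely, as follows. This is an illustrative toy only: `toy_llm_appraise` is a stand-in for the LLM-based agent, and all function names, the reward signal, and the emotion set are hypothetical placeholders, not the paper's actual implementation.

```python
import random

# Toy emotion label set, assumed for illustration.
EMOTIONS = ["joy", "sadness", "anger", "neutral"]

def toy_llm_appraise(utterance, seed=0):
    """Stand-in for an LLM producing a third-person appraisal of an utterance."""
    rng = random.Random(hash(utterance) ^ seed)
    emotion = rng.choice(EMOTIONS)
    return {"appraisal": f"The speaker seems to express {emotion}.", "emotion": emotion}

def counterfactual_refine(utterance, appraisal, gold_emotion):
    """Counterfactual refinement: if the inferred emotion disagrees with the label,
    re-appraise ('what appraisal would have implied the observed emotion?')."""
    if appraisal["emotion"] == gold_emotion:
        return appraisal
    return {"appraisal": f"On reflection, the speaker more likely feels {gold_emotion}.",
            "emotion": gold_emotion}

def self_evaluation_phase(dialogue):
    """Self-evaluation: generate refined appraisals that become training samples,
    so no human appraisal annotation is needed."""
    samples = []
    for utterance, gold in dialogue:
        first = toy_llm_appraise(utterance)
        samples.append((utterance, counterfactual_refine(utterance, first, gold)))
    return samples

def meta_evaluation_phase(samples):
    """Meta-evaluation: a reflective actor-critic-style loop. Here the 'critic' is
    reduced to a binary reward for matching the refined target; a real system
    would use this reward to update the actor (the LLM)."""
    total_reward = 0
    for utterance, target in samples:
        actor_out = toy_llm_appraise(utterance, seed=1)
        total_reward += 1 if actor_out["emotion"] == target["emotion"] else 0
    return total_reward / len(samples)

dialogue = [("I can't believe we won!", "joy"),
            ("Leave me alone.", "anger"),
            ("Okay, see you tomorrow.", "neutral")]
samples = self_evaluation_phase(dialogue)
accuracy = meta_evaluation_phase(samples)
```

Note that after counterfactual refinement every training sample carries an appraisal consistent with the observed emotion, which is what lets the meta-evaluation phase train against self-generated targets rather than human annotations.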