A Third-Person Appraisal Agent: Learning to Reason About Emotions in Conversational Contexts

20 Sept 2024 (modified: 13 Feb 2025) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: emotion recognition in conversation, LLM agent, emotion reasoning, reinforcement learning
Abstract:

Emotion reasoning is crucial for achieving human-like emotional understanding in Emotion Recognition in Conversation (ERC). Current ERC datasets provide only emotion-labeled utterances and lack the rich annotations needed for emotion reasoning. Although Large Language Models (LLMs) show promise in generating rich emotional knowledge, they still struggle to apply this knowledge effectively to emotion reasoning. To address these challenges, we propose a learning framework based on cognitive appraisal theory, in which an LLM-powered agent learns emotion reasoning from a third-person perspective; we refer to this agent as the third-person appraisal agent. The framework comprises two phases: self-evaluation and meta-evaluation. In the self-evaluation phase, the agent generates the appraisals essential for inferring emotions and refines them through counterfactual thinking. The meta-evaluation phase uses reflective actor-critic reinforcement learning to train the agent to generate accurate appraisals at test time. The training samples are the appraisals produced during the self-evaluation phase, eliminating the need for human annotations. By fine-tuning a specialized LLM within this framework, our approach significantly outperforms LLM baselines on ERC tasks, demonstrating improved reasoning and generalization across diverse dialogue datasets. Additionally, we provide interpretable results that clarify the reasoning behind the model’s predictions. To the best of our knowledge, this is the first work to apply cognition-based methods to enhance LLMs’ emotional reasoning capabilities, marking a significant advance toward human-like emotional understanding in artificial intelligence.
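To make the two-phase pipeline in the abstract concrete, below is a minimal Python sketch. It is an illustration under assumptions, not the paper's released code: every identifier (Appraisal, self_evaluation, meta_evaluation, toy_appraise, the critic and actor_update callables) and the exact refinement loop are hypothetical placeholders standing in for the unspecified prompting and RL machinery.

```python
# Hypothetical sketch of the two-phase framework described in the abstract.
# All names and loop details here are illustrative assumptions, not the
# authors' actual API or training procedure.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Appraisal:
    utterance: str   # target utterance from the dialogue
    reasoning: str   # third-person appraisal of the speaker's situation
    emotion: str     # emotion label inferred from that appraisal

def self_evaluation(dialogue: List[str],
                    appraise: Callable[[List[str], str], Appraisal],
                    gold_emotion: str,
                    max_refinements: int = 3) -> Appraisal:
    """Phase 1: generate an appraisal for the last utterance, then refine it
    with counterfactual feedback until it supports the gold emotion label."""
    appraisal = appraise(dialogue, "")  # initial appraisal, no feedback yet
    for _ in range(max_refinements):
        if appraisal.emotion == gold_emotion:
            break
        # Counterfactual prompt: what appraisal would have implied the gold label?
        feedback = (f"The inferred emotion '{appraisal.emotion}' is wrong; "
                    f"reconsider what appraisal would imply '{gold_emotion}'.")
        appraisal = appraise(dialogue, feedback)
    return appraisal

def meta_evaluation(samples: List[Appraisal],
                    critic: Callable[[Appraisal], float],
                    actor_update: Callable[[Appraisal, float], None]) -> None:
    """Phase 2: actor-critic-style training on the self-generated appraisals,
    so no human rationale annotations are required."""
    for sample in samples:
        reward = critic(sample)        # critic scores appraisal quality
        actor_update(sample, reward)   # reward-weighted update of the actor LLM

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    def toy_appraise(dialogue: List[str], feedback: str) -> Appraisal:
        emotion = "joy" if feedback else "neutral"  # pretend feedback helps
        return Appraisal(dialogue[-1], feedback or "initial guess", emotion)

    sample = self_evaluation(["Hi!", "I got the job!"], toy_appraise, "joy")
    meta_evaluation([sample],
                    critic=lambda a: 1.0 if a.emotion == "joy" else 0.0,
                    actor_update=lambda a, r: None)  # no-op stand-in
    print(sample)
```

The key property the sketch tries to capture is that phase 2 trains only on appraisals produced in phase 1, which is why the abstract can claim the framework needs no human annotations.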

Primary Area: interpretability and explainable AI
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2193