Fine-Grained Emotion Recognition with In-Context Learning: A Prototype Theory Approach

ICLR 2025 Conference Submission 98 Authors (Anonymous)

13 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: fine-grained emotion recognition, in-context learning (ICL), large language models (LLMs)
Abstract: In-context learning (ICL) achieves remarkable performance in various domains such as knowledge acquisition, commonsense reasoning, and semantic understanding. However, its effectiveness deteriorates significantly in emotion detection tasks, particularly in fine-grained emotion recognition, and the reasons behind this decline remain unclear. In this paper, we explore the underlying reasons for ICL's suboptimal performance through the lens of prototype theory. Our investigation reveals that ICL aligns with the principles of prototype theory when applied to fine-grained emotion recognition: effective emotion recognition requires (i) referencing well-represented emotional prototypes similar to the query emotion and (ii) making predictions based on the closest emotional similarity. Viewed through this lens, ICL exhibits three main shortcomings: (1) it uses oversimplified single-emotion labels for prototypes, leading to inaccurate emotion representation; (2) it references semantically similar but emotionally distant prototypes; and (3) it considers all emotion categories as candidates, leading to interference from irrelevant emotions and inaccurate predictions. To address these shortcomings, we propose an Emotion Context Learning method (E-ICL) for fine-grained emotion recognition. E-ICL first employs a dynamic soft-label strategy to create multi-dimensional emotional labels for accurate prototype representation. It then selects emotionally similar prototypes as references for emotion prediction. Finally, it uses an emotion exclusion strategy that eliminates interference from dissimilar emotions by restricting the candidate set to similar emotions, resulting in more robust and accurate predictions. Our approach relies only on a plug-and-play emotion auxiliary model and requires no additional training. Extensive experiments conducted on fine-grained emotion datasets—EDOS, Empathetic-Dialogues, EmpatheticIntent, and GoEmotions—demonstrate that E-ICL significantly outperforms existing methods in emotion prediction performance. Moreover, even when the emotion auxiliary model accounts for less than 10% of the LLMs' capacity, E-ICL consistently boosts LLM performance by over 4% across multiple datasets.
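To make the three-step pipeline concrete, below is a minimal, hedged sketch of how E-ICL's soft labels, emotionally similar prototype selection, and emotion exclusion could fit together. Everything here is illustrative: the `EMOTIONS` inventory, the function names, and the hash-based `emotion_scores` stub (standing in for the paper's plug-and-play emotion auxiliary model) are assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical emotion inventory; the paper's datasets (e.g., GoEmotions)
# use much larger, finer-grained label sets.
EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "disgust"]

def emotion_scores(text: str) -> np.ndarray:
    """Stand-in for the plug-and-play emotion auxiliary model: returns a
    probability distribution over EMOTIONS for `text`. A toy deterministic
    stub so the sketch runs end to end (assumption, not the real model)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    p = rng.random(len(EMOTIONS))
    return p / p.sum()

def soft_label(example_text: str, top_k: int = 3) -> dict:
    """Step 1 (dynamic soft labels): replace a single-emotion gold label
    with a multi-dimensional distribution truncated to the top-k emotions."""
    scores = emotion_scores(example_text)
    top = np.argsort(scores)[::-1][:top_k]
    return {EMOTIONS[i]: round(float(scores[i]), 3) for i in top}

def select_prototypes(query: str, pool: list, n: int = 4) -> list:
    """Step 2 (emotionally similar prototypes): rank candidate demonstrations
    by cosine similarity of their emotion distributions to the query's,
    rather than by semantic similarity."""
    q = emotion_scores(query)
    def sim(text: str) -> float:
        e = emotion_scores(text)
        return float(q @ e / (np.linalg.norm(q) * np.linalg.norm(e)))
    return sorted(pool, key=sim, reverse=True)[:n]

def candidate_emotions(query: str, top_k: int = 3) -> list:
    """Step 3 (emotion exclusion): restrict the LLM's answer space to the
    emotions most similar to the query, excluding the rest."""
    scores = emotion_scores(query)
    top = np.argsort(scores)[::-1][:top_k]
    return [EMOTIONS[i] for i in top]

def build_prompt(query: str, pool: list) -> str:
    """Assemble an ICL prompt: soft-labeled demonstrations plus a
    restricted candidate list for the final prediction."""
    demos = "\n".join(
        f"Text: {t}\nEmotion distribution: {soft_label(t)}"
        for t in select_prototypes(query, pool)
    )
    allowed = ", ".join(candidate_emotions(query))
    return (f"{demos}\n\nText: {query}\n"
            f"Answer with one of: {allowed}\nEmotion:")

if __name__ == "__main__":
    pool = [
        "I can't stop smiling today!", "Why would they do this to me?",
        "I miss her so much.", "That spider came out of nowhere!",
        "What a pleasant surprise.", "This is revolting.",
    ]
    print(build_prompt("I finally got the job!", pool))
```

With a real auxiliary emotion classifier substituted for `emotion_scores`, the same structure applies: the LLM never sees single-emotion prototype labels, never retrieves demonstrations by semantics alone, and never chooses among emotions the auxiliary model has already ruled out.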
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 98