Emotion Recognition in Conversation with Multi-step Prompting Using Large Language Model

Published: 01 Jan 2024, Last Modified: 17 Jul 2025 · HCI (20) 2024 · CC BY-SA 4.0
Abstract: Emotion recognition plays a crucial role in computer science, particularly in enhancing human-computer interaction. However, emotion labeling remains time-consuming and costly, which impedes efficient dataset creation. Recently, large language models (LLMs) have demonstrated adaptability across a variety of tasks without requiring task-specific training, suggesting that they may be able to recognize emotions even when few labeled examples are available. We therefore assessed the performance of an LLM on emotion recognition using two established datasets: MELD and IEMOCAP. Our findings reveal that, for emotion labels with few training samples, the LLM approaches or even exceeds the performance of SPCL, a leading model specializing in text-based emotion recognition. In addition, inspired by Chain-of-Thought prompting, we incorporated a multi-step prompting technique to further enhance the LLM's ability to discriminate between emotion labels. The results underscore the potential of LLMs to reduce the time and cost of emotion data labeling.
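To make the idea of multi-step prompting concrete, the following is a minimal sketch of a two-step scheme for emotion recognition in conversation. The prompts, the `call_llm` wrapper, and the `recognize_emotion` function are illustrative assumptions, not the authors' actual implementation; only the MELD label set is taken from the dataset itself.

```python
# Hedged sketch: two-step ("multi-step") prompting for emotion recognition in
# conversation. Prompt wording and helper names are hypothetical placeholders.

EMOTIONS = ["neutral", "joy", "surprise", "anger", "sadness", "disgust", "fear"]  # MELD label set


def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around any chat-completion API; plug in a real client here."""
    raise NotImplementedError("replace with a call to your LLM provider")


def recognize_emotion(context: list[str], target_utterance: str) -> str:
    """Predict the emotion of the target utterance given the preceding dialogue context."""
    dialogue = "\n".join(context + [f"Target: {target_utterance}"])

    # Step 1: ask the model to reason about the speaker's emotional state first,
    # in the spirit of Chain-of-Thought prompting.
    analysis = call_llm(
        "Read the conversation below and describe, in one or two sentences, "
        f"the emotional state of the speaker of the target utterance.\n\n{dialogue}"
    )

    # Step 2: condition the final decision on the step-1 analysis and restrict
    # the answer to the fixed label set.
    label = call_llm(
        f"Conversation:\n{dialogue}\n\nAnalysis: {analysis}\n\n"
        f"Based on the analysis, choose exactly one label from {EMOTIONS}. "
        "Answer with the label only."
    )
    return label.strip().lower()
```

Splitting the task this way lets the first prompt surface contextual cues (sarcasm, topic shifts, speaker history) before the second prompt forces a choice among closely related labels, which is where single-shot prompts tend to confuse categories such as anger and disgust.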