Keywords: Emotion Interpretation, Large Language Model
Abstract: Affective computing is crucial in fields such as human-computer interaction, healthcare, and market research, yet the ambiguity and subjectivity of emotion challenge current recognition techniques. We propose Emotion Interpretation (EI), a task that interprets the reasons behind emotions, and create the Emotion Interpretation Benchmark (EIBench) using a VLLM-assisted dataset construction method, Coarse-to-Fine Self-Ask (CFSA), with careful human-in-the-loop annotation. EIBench includes 1,615 basic and 50 multi-faceted complex emotion interpretation samples. Experiments show that existing models have limited proficiency in EI: the best achieves 62.41% accuracy in the zero-shot setting, and some perform below the text-only LLaMA-3 model (6.26%) in the caption-provided setting. Assigning different personas to models also changes their benchmark results. Overcoming the challenges posed by EI can lead to more empathetic AI systems, thereby enhancing human-computer interaction and emotion-sensitive applications.
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6604