Abstract: Most existing emotion analysis emphasizes which emotion arises (e.g., happy, sad, angry) but neglects the deeper why. We propose Emotion Interpretation (EI), which focuses on the causal factors--whether explicit (e.g., observable objects, interpersonal interactions) or implicit (e.g., cultural context, off-screen events)--that drive emotional responses. Unlike traditional emotion recognition, EI tasks require reasoning about triggers rather than mere labeling. To facilitate EI research, we present EIBench, a large-scale benchmark comprising 1,615 basic EI samples and 50 complex EI samples featuring multifaceted emotions. Each instance demands a rationale-based explanation rather than a straightforward categorization. We further propose a Coarse-to-Fine Self-Ask (CFSA) annotation pipeline, which guides Vision Large Language Models (VLLMs) through iterative question-answer rounds to yield high-quality labels at scale. Extensive evaluations of open-source and proprietary large language models under four experimental settings reveal consistent performance gaps--especially in more intricate scenarios--underscoring EI's potential to enrich empathetic, context-aware AI applications. Our benchmark and methods are publicly available at https://github.com/Lum1104/EIBench, offering a foundation for advanced multimodal causal analysis and next-generation affective computing.