Video2Reaction: Mapping Video to Audience Reaction Distribution in the Wild

ICLR 2026 Conference Submission 20226 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: multimodal, multimedia, sentiment analysis, benchmark
TL;DR: We introduce Video2Reaction, a large-scale dataset for modeling the distribution of induced audience reactions to movie clips, enabling research on video-based emotional impact and engagement prediction.
Abstract: Understanding audience reactions to video content is crucial for improving content creation, recommendation systems, and media analysis. We introduce $\textbf{Video2Reaction}$, a multimodal dataset that maps short movie segments to the $\textit{distributional induced emotional reactions}$ of viewers in the wild, as expressed on social media. Unlike most prior datasets, which focus on $\textit{perceived emotions}$ (the emotions portrayed by characters in a clip), $\textbf{Video2Reaction}$ centers on the emotions a clip induces in its audience. We further model these reactions as $\textit{distributions}$ over categorical emotions, rather than reducing them to a single dominant label, enabling fine-grained learning of collective emotional responses. $\textbf{Video2Reaction}$ supports a range of applications, including audience reaction prediction for new video content, emotion-aware video retrieval, and content optimization based on expected viewer engagement. By providing a comprehensive benchmark for distributional video-to-reaction modeling, $\textbf{Video2Reaction}$ advances the study of audience engagement and emotional impact in multimedia content. The dataset is available at https://huggingface.co/datasets/video2reac/Video2Reaction.
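A minimal sketch of how the distributional labels might be consumed, assuming the dataset loads via the Hugging Face `datasets` library from the repo linked above. The split name (`train`) and field names (`reaction_distribution`, `clip_id`) are assumptions for illustration; consult the dataset card for the actual schema.

```python
# Hypothetical usage sketch: load Video2Reaction and inspect one example's
# reaction distribution. Field/split names are assumed, not confirmed.
from datasets import load_dataset

# Repo ID taken from the URL in the abstract.
ds = load_dataset("video2reac/Video2Reaction", split="train")

example = ds[0]
# Assumed field: a mapping from categorical emotions to probabilities,
# e.g. {"joy": 0.41, "surprise": 0.22, ...}, rather than one dominant label.
dist = example["reaction_distribution"]

# The distributional target permits training with a divergence loss
# (e.g. KL between predicted and empirical distributions) instead of
# single-label cross-entropy.
top = max(dist, key=dist.get)
print(f"clip {example['clip_id']}: dominant reaction {top} ({dist[top]:.2f})")
```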
Primary Area: datasets and benchmarks
Submission Number: 20226