Keywords: Ambivalence, hesitancy, affective computing, emotion recognition in videos, multimodal, eHealth, behavioral change
TL;DR: We introduce a new dataset for Ambivalence/Hesitancy recognition in videos, with 224 subjects and 1,118 videos. Data and code are publicly available.
Abstract: This paper introduces the Behavioral Ambivalence/Hesitancy (BAH) dataset, collected for the task of Ambivalence/Hesitancy (A/H) recognition in videos. In particular, this task involves recognizing the conflicting emotions linked to A/H from question-and-answer videos captured for behavior analysis. The dataset contains videos from 224 subjects of different ages and ethnicities, collected across 9 Canadian provinces via webcam through our web platform. Each subject answers 7 questions designed to induce ambivalence/hesitancy. Each video captures the response to one question in the subject's own environment, totaling 1,118 videos with an overall duration of 8.26 hours, of which 1.5 hours contain A/H. BAH is the first dataset for Ambivalence/Hesitancy recognition. Our behavioral team annotated the timestamped segments where A/H occurs, providing frame- and video-level annotations, along with the cues used for annotation such as face, audio, body, and language. Video transcripts with their timestamps, as well as per-frame cropped and aligned faces, are also included. This work offers initial baselines for A/H recognition in videos at the frame and video levels, with analyses of unimodal and multimodal setups. The data, code, and pretrained weights are publicly accessible.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 20025