Interactive Machine Learning for Multimodal Affective Computing

Published: 2022, Last Modified: 08 Jan 2026, ACIIW 2022, CC BY-SA 4.0
Abstract: Affective computing [1] is an expanding field that is taking on new forms with the development of more powerful computing devices and better modality fusion techniques. However, most datasets, and the models trained on them, are limited to basic modalities (speech, text, and video) [2]. Additionally, the lack of labeled emotion data covering a comprehensive set of modalities has hindered further development in affective computing [3]. Furthermore, emotional expressions can be inherently ambiguous, resulting in multiple equally valid representations in both expression and perception [4]-[6]. Such ambiguity creates challenges both for the modalities required to capture the affective expression and for the perceived affective states (annotated labels).