Abstract: In affective computing, researchers have sought to improve the performance of models and algorithms by exploiting the complementarity of multimodal information. However, the rapid emergence of new modalities has outpaced the construction of suitable datasets, making it difficult for dataset development to keep pace with advances in modal sensing technology, and the collection and analysis of multimodal data remain complex and labor-intensive tasks. To address the challenge of partially missing modality data within the research community, we curate a novel homogeneous multimodal gesture emotion recognition dataset, augmenting existing datasets through careful analysis. This dataset not only fills a gap in homogeneous multimodal data but also opens new avenues for emotion recognition research. In addition, we propose a pseudo dual-flow network built on this dataset and demonstrate its potential applications in the affective computing community. Experimental results show that it is feasible to use both conventional visual information and spiking visual information derived from homogeneous multimodal data for visual emotion recognition.