CACE-Net: Co-guidance Attention and Contrastive Enhancement for Effective Audio-Visual Event Localization

Published: 20 Jul 2024 · Last Modified: 05 Aug 2024 · MM 2024 Poster · CC BY 4.0
Abstract: The audio-visual event localization task requires identifying events that occur concurrently in both the visual and auditory streams of unconstrained videos, locating them temporally, and classifying their category. Efficiently extracting and integrating audio and visual information has long been a challenge in this field. In this paper, we introduce CACE-Net, which differs from most existing methods that use only audio signals to guide visual information. We propose an audio-visual co-guidance attention mechanism that enables adaptive bi-directional cross-modal attention between audio and visual cues, reducing inconsistencies between the modalities. Moreover, we observe that existing methods struggle to distinguish events from visually similar background and lack the fine-grained features needed for event classification. We therefore employ background-event contrastive enhancement to increase the discriminability of the fused features, and fine-tune the pre-trained model to extract more discernible features from complex multimodal inputs. Experiments on the AVE dataset demonstrate that CACE-Net sets a new benchmark for the audio-visual event localization task, proving the effectiveness of the proposed methods for complex multimodal learning and event localization in unconstrained videos. Code is available at https://github.com/Brain-Cog-Lab/CACE-Net.
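To make the co-guidance idea concrete, below is a minimal PyTorch sketch of bi-directional cross-modal attention with adaptive gating: each modality attends over the other, and a learned gate controls how much cross-modal guidance is accepted. The module names, dimensions, and gating design are illustrative assumptions, not the official CACE-Net implementation (see the linked repository for that).

```python
# Illustrative sketch only: bi-directional audio-visual co-guidance
# attention with learned gates. Shapes and design are assumptions.
import torch
import torch.nn as nn

class CoGuidanceAttention(nn.Module):
    """Each modality attends to the other; a sigmoid gate adaptively
    scales the accepted cross-modal guidance before a residual add."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.a2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.v2a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gates see the original and guided features and emit per-channel weights.
        self.gate_a = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        self.gate_v = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        # audio, visual: (batch, time, dim) segment-level features.
        a_guided, _ = self.v2a(audio, visual, visual)  # visual guides audio
        v_guided, _ = self.a2v(visual, audio, audio)   # audio guides visual
        g_a = self.gate_a(torch.cat([audio, a_guided], dim=-1))
        g_v = self.gate_v(torch.cat([visual, v_guided], dim=-1))
        return audio + g_a * a_guided, visual + g_v * v_guided

# Usage on dummy 10-segment AVE-style clips:
a = torch.randn(2, 10, 256)
v = torch.randn(2, 10, 256)
a_out, v_out = CoGuidanceAttention()(a, v)
```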
Primary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: In this paper, we propose CACE-Net to address the challenge of integrating audio and visual modal information. We introduce an audio-visual co-guidance mechanism that significantly improves the accuracy of event localization in unconstrained video content. We use contrastive learning to sharpen feature discrimination and perform targeted fine-tuning of pre-trained models, effectively reducing misleading signals from a single modality and achieving effective integration of modal information. CACE-Net's performance on the AVE dataset sets a new benchmark; this work not only improves the capability of multimodal data processing but also provides a blueprint for future multimedia analysis.
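As a hedged illustration of the background-event contrastive enhancement mentioned above, the following InfoNCE-style loss pulls together fused features of segments with the same label (event-event or background-background) and pushes apart event-background pairs. The exact objective, temperature, and pair selection in CACE-Net may differ; this is a generic sketch under those assumptions.

```python
# Generic supervised-contrastive sketch for background/event segments.
# Not the paper's exact loss; labels and temperature are illustrative.
import torch
import torch.nn.functional as F

def background_event_contrast(fused: torch.Tensor,
                              is_event: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """fused: (N, dim) fused audio-visual segment features;
    is_event: (N,) bool mask, True for event segments."""
    z = F.normalize(fused, dim=-1)
    sim = z @ z.t() / temperature                  # pairwise similarities
    same = is_event[:, None] == is_event[None, :]  # same-label pairs
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = same & ~eye                              # positives, excluding self
    logits = sim.masked_fill(eye, float('-inf'))   # never contrast with self
    log_prob = logits - logits.logsumexp(dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(~pos, 0.0)     # keep only positive pairs
    has_pos = pos.any(dim=1)                       # anchors with >=1 positive
    loss = -log_prob[has_pos].sum(1) / pos[has_pos].sum(1)
    return loss.mean()

# Usage: 20 fused segment features, first half labeled as events.
feats = torch.randn(20, 256)
labels = torch.arange(20) < 10
print(background_event_contrast(feats, labels))
```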
Supplementary Material: zip
Submission Number: 4342