Mixed In Time And Modality: Curse Or Blessing? Cross-Instance Data Augmentation for Weakly Supervised Multimodal Temporal Fusion

ICASSP 2022 (modified: 19 Apr 2023)
Abstract: In multimodal video event localization, we usually leverage feature fusion across different axes, such as the modality and temporal axes, for better context. To reduce the cost of detailed annotations, recent solutions explore weakly supervised settings. However, we observe that problems can arise when feature fusion meets weakly supervised localization. It may cause "feature cross-interference", which produces a smearing effect on the localization result and cannot be effectively supervised with a conventional multiple instance learning loss. We verify this quantitatively on the audio-visual video parsing (AVVP) task and propose a cross-instance data-augmentation framework, which preserves the benefits of feature fusion while providing explicit feedback on feature cross-interference. We show that our method enhances the performance of existing models on two weakly supervised audio-visual localization tasks, i.e., AVVP and audio-visual event localization (AVE).
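To illustrate the weak-supervision setup the abstract refers to, the sketch below is a hypothetical PyTorch example (not the authors' implementation): segment-level predictions from fused audio and visual features are pooled into a video-level score and trained with a multiple-instance-learning BCE loss against weak video-level labels. The `cross_instance_swap` helper is likewise an assumption for illustration, mimicking the general idea of mixing streams across videos to create augmented training pairs.

```python
# Hypothetical sketch of weakly supervised MIL training for audio-visual
# localization. Names and design choices here are assumptions, not the paper's code.
import torch
import torch.nn as nn

class SegmentClassifier(nn.Module):
    """Predicts per-segment, per-class logits from fused audio+visual features."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)   # naive modality fusion
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, audio, visual):
        # audio, visual: (batch, time, feat_dim)
        fused = torch.relu(self.fuse(torch.cat([audio, visual], dim=-1)))
        return self.head(fused)                         # (batch, time, num_classes)

def mil_loss(segment_logits, video_labels):
    """Pool segment predictions over time and supervise with weak video-level labels."""
    segment_probs = torch.sigmoid(segment_logits)       # (B, T, C)
    video_probs = segment_probs.mean(dim=1)             # simple mean MIL pooling
    return nn.functional.binary_cross_entropy(video_probs, video_labels)

def cross_instance_swap(audio, visual, labels):
    """Toy cross-instance augmentation: pair each video's audio with another
    video's visual stream; the mixed sample keeps only the weak labels shared
    by both originals (an assumption made for this sketch)."""
    perm = torch.randperm(audio.size(0))
    mixed_labels = labels * labels[perm]                # intersection of weak labels
    return audio, visual[perm], mixed_labels

# Minimal usage example with random tensors.
B, T, D, C = 4, 10, 128, 25
model = SegmentClassifier(D, C)
audio, visual = torch.randn(B, T, D), torch.randn(B, T, D)
labels = (torch.rand(B, C) > 0.8).float()               # weak video-level labels

loss = mil_loss(model(audio, visual), labels)
aug_a, aug_v, aug_y = cross_instance_swap(audio, visual, labels)
loss = loss + mil_loss(model(aug_a, aug_v), aug_y)
loss.backward()
```

The augmented pair provides an extra training signal on mixed instances, which is the general mechanism the framework relies on; the exact mixing and labeling strategy in the paper may differ from this simplified version.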