Multi-modal Grouping Network for Weakly-Supervised Audio-Visual Video Parsing

Published: 31 Oct 2022, Last Modified: 07 Jan 2023 · NeurIPS 2022 Accept
Keywords: Weakly-Supervised Audio-Visual Video Parsing, Multi-modal Grouping
Abstract: The audio-visual video parsing task aims to parse a video into modality- and category-aware temporal segments. Previous work mainly focuses on weakly-supervised approaches, which learn from video-level event labels: during training, it is unknown which modality perceives an event and which temporal segment contains it. Since existing frameworks perform no explicit grouping, these modality and temporal uncertainties lead to false predictions; for instance, segments of the same event category can be predicted as different classes. Learning compact and discriminative multi-modal subspaces is essential for mitigating this issue. To this end, in this paper, we propose a novel Multi-modal Grouping Network (MGN) for explicit semantics-aware grouping. Specifically, MGN aggregates event-aware unimodal features through unimodal grouping with learnable categorical embedding tokens, and then leverages cross-modal grouping for modality-aware prediction that matches the video-level target. Our simple framework outperforms previous baselines on weakly-supervised audio-visual video parsing while being much more lightweight, using only 47.2% of the baselines' parameters (17 MB vs. 36 MB). Code is available at https://github.com/stoneMo/MGN.
TL;DR: We propose a novel weakly-supervised audio-visual video parsing baseline, the Multi-modal Grouping Network (MGN), for explicit semantics-aware grouping.