Exploring Cross-Video and Cross-Modality Signals for Weakly-Supervised Audio-Visual Video Parsing

21 May 2021, 20:42 (modified: 22 Jan 2022, 00:44) · NeurIPS 2021 Poster
Keywords: Audio-visual Video Parsing, Weakly-supervised Learning, Vision Applications and Systems
Abstract: The audio-visual video parsing task aims to temporally parse a video into audio or visual event categories. However, temporally annotating audio and visual events is labor intensive, which hampers the learning of a parsing model. To this end, we propose to explore additional cross-video and cross-modality supervisory signals to facilitate weakly-supervised audio-visual video parsing. The proposed method exploits both the common and diverse event semantics across videos to identify audio or visual events. In addition, our method explores event co-occurrence across audio, visual, and audio-visual streams. We leverage the discovered cross-modality co-occurrence to localize segments of target events while excluding irrelevant ones. These supervisory signals across different videos and modalities can greatly facilitate training with only video-level annotations. Quantitative and qualitative results demonstrate that the proposed method performs favorably against existing methods on weakly-supervised audio-visual video parsing.
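To make the weakly-supervised setting in the abstract concrete: with only video-level labels, a common baseline is multiple-instance learning (MIL), where per-segment event scores are pooled into a video-level prediction that the video-level label can supervise. The sketch below illustrates this general idea only; it is not the authors' method (see their repository for that), and the function and variable names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mil_video_prediction(segment_logits, attn_logits):
    """Illustrative MIL-style pooling (not the paper's implementation).

    segment_logits: (T, C) per-segment class logits for T segments, C events
    attn_logits:    (T, C) per-segment attention logits
    Returns a (C,) video-level event probability vector that video-level
    labels could supervise, e.g. with a binary cross-entropy loss.
    """
    p = sigmoid(segment_logits)                               # per-segment event probabilities
    w = np.exp(attn_logits - attn_logits.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)                      # softmax over time, per class
    return (w * p).sum(axis=0)                                # attention-weighted temporal pooling

# Toy usage with random features: 10 segments, 25 event classes.
rng = np.random.default_rng(0)
video_prob = mil_video_prediction(rng.normal(size=(10, 25)),
                                  rng.normal(size=(10, 25)))
```

At inference, the per-segment probabilities themselves (rather than the pooled vector) would serve as the temporal parsing output; the paper's contribution is the additional cross-video and cross-modality signals that refine such per-segment predictions.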
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/GenjiB/CM-Co-Occurrence-AVVP