Contrastive Learning of Global and Local Video Representations

21 May 2021, 20:44 (modified: 21 Jan 2022, 21:01), NeurIPS 2021 Poster
Keywords: contrastive learning, self-supervised learning, video representation learning, multimodal learning
TL;DR: We propose a self-supervised approach to learn global-local video representations and demonstrate its superiority over global-only and local-only approaches on a variety of downstream tasks.
Abstract: Contrastive learning has delivered impressive results for various tasks in the self-supervised regime. However, existing approaches optimize for learning representations specific to downstream scenarios, i.e., global representations suitable for tasks such as classification, or local representations suitable for tasks such as detection and localization. While they produce satisfactory results in the intended downstream scenarios, they often fail to generalize to tasks that they were not originally designed for. In this work, we propose to learn video representations that generalize both to tasks that require global semantic information (e.g., classification) and to tasks that require local fine-grained spatio-temporal information (e.g., localization). We achieve this by optimizing two contrastive objectives that together encourage our model to learn global-local visual information given audio signals. We show that the two objectives mutually improve the generalizability of the learned global-local representations, significantly outperforming their disjointly learned counterparts. We demonstrate our approach on various tasks including action/sound classification, lip reading, deepfake detection, and event/sound localization.
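For readers unfamiliar with contrastive objectives, the sketch below shows a generic InfoNCE-style loss of the kind such methods build on: each visual embedding is pulled toward its paired (e.g., audio) embedding and pushed away from the other pairs in the batch. This is a minimal illustration of the general technique, not the paper's exact global or local objective; the function name and temperature value are illustrative choices.

```python
import numpy as np

def info_nce(queries: np.ndarray, keys: np.ndarray, temperature: float = 0.07) -> float:
    """Generic InfoNCE loss. queries, keys: (N, D); row i of each is a positive pair."""
    # L2-normalize so the dot product is cosine similarity.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                    # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    # Log-softmax over each row; the diagonal entries are the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss_aligned = info_nce(x, x)                         # perfectly aligned pairs: low loss
loss_random = info_nce(x, rng.normal(size=(8, 16)))   # unrelated pairs: higher loss
```

A "global" variant of such a loss would contrast clip-level embeddings, while a "local" variant would contrast embeddings of individual spatio-temporal regions; the paper's contribution is training both kinds jointly.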
Supplementary Material: zip
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/yunyikristy/global_local