Active Contrastive Learning of Audio-Visual Video Representations

Published: 12 Jan 2021 (ICLR 2021 Poster), Last Modified: 05 May 2023
Keywords: self-supervised learning, contrastive representation learning, active learning, audio-visual representation, video recognition
Abstract: Contrastive learning has been shown to produce generalizable representations of audio and visual data by maximizing the lower bound on the mutual information (MI) between different views of an instance. However, obtaining a tight lower bound requires a sample size exponential in MI and thus a large set of negative samples. We can incorporate more samples by building a large queue-based dictionary, but there are theoretical limits to performance improvements even with a large number of negative samples. We hypothesize that random negative sampling leads to a highly redundant dictionary that results in suboptimal representations for downstream tasks. In this paper, we propose an active contrastive learning approach that builds an actively sampled dictionary with diverse and informative items, which improves the quality of negative samples and, in turn, performance on tasks where the data carry high mutual information, e.g., video classification. Our model achieves state-of-the-art performance on challenging audio and visual downstream benchmarks including UCF101, HMDB51, and ESC50.
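
As context for the sample-complexity claim in the abstract: with one positive and $K$ negatives, the standard InfoNCE bound (Oord et al., 2018) makes the exponential requirement explicit. A minimal derivation (notation ours, with $v_1, v_2$ denoting the two views of an instance):

```latex
% InfoNCE with K negatives lower-bounds the mutual information:
%   I(v_1; v_2) >= log(K + 1) - L_InfoNCE.
% Because L_InfoNCE is a cross-entropy over K + 1 candidates, it is
% nonnegative, so the MI estimate is capped at log(K + 1); matching a
% true MI of I therefore requires K + 1 >= e^I, i.e., a number of
% negatives exponential in the MI. This motivates large dictionaries,
% and, per the paper's hypothesis, better-chosen ones.
I(v_1; v_2) \;\geq\; \log(K + 1) - \mathcal{L}_{\text{InfoNCE}}
\quad\Longrightarrow\quad
K + 1 \;\geq\; e^{\,I(v_1; v_2)} \ \text{for the bound to be tight.}
```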
One-sentence Summary: We propose an active learning approach to improve negative sampling for contrastive learning and demonstrate it on learning audio-visual representations from videos.
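
To make the idea concrete, below is a minimal PyTorch sketch (ours, not the released CM-ACC code) of a queue-based contrastive loss in which the `num_active` negatives per query are chosen by an informativeness score rather than at random. The top-k hardest-negative criterion used here is an assumed stand-in for the paper's actual active selection strategy, and all names (`active_infonce_loss`, `num_active`) are illustrative:

```python
# Sketch only: queue-based InfoNCE with actively selected negatives.
import torch
import torch.nn.functional as F

def active_infonce_loss(query, key, queue, num_active=1024, temperature=0.07):
    """query, key: (B, D) L2-normalized embeddings of two views of a batch.
    queue: (K, D) L2-normalized dictionary of candidate negatives, K >> num_active.
    """
    # Positive logits: agreement between the two views of each instance.
    l_pos = torch.einsum("bd,bd->b", query, key).unsqueeze(-1)   # (B, 1)

    # Score every dictionary item against every query.
    sim = query @ queue.t()                                      # (B, K)

    # Active selection: keep the num_active most informative (here: highest-
    # similarity, i.e. hardest) negatives per query instead of random ones.
    l_neg, _ = sim.topk(num_active, dim=1)                       # (B, num_active)

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature      # (B, 1 + num_active)
    # The positive sits at index 0 of each row.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)

# Usage with random stand-in embeddings:
B, D, K = 32, 128, 65536
q = F.normalize(torch.randn(B, D), dim=1)
k = F.normalize(torch.randn(B, D), dim=1)
queue = F.normalize(torch.randn(K, D), dim=1)
loss = active_infonce_loss(q, k, queue)
```

The design point this sketch isolates is that the loss sees only a curated subset of the dictionary each step; any scoring rule that favors diverse, non-redundant negatives can replace the top-k similarity used here.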
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [yunyikristy/CM-ACC](https://github.com/yunyikristy/CM-ACC)
Data: [AudioSet](https://paperswithcode.com/dataset/audioset), [ESC-50](https://paperswithcode.com/dataset/esc-50), [HMDB51](https://paperswithcode.com/dataset/hmdb51), [Kinetics](https://paperswithcode.com/dataset/kinetics), [Kinetics 400](https://paperswithcode.com/dataset/kinetics-400-1), [Kinetics-700](https://paperswithcode.com/dataset/kinetics-700), [UCF101](https://paperswithcode.com/dataset/ucf101)