Towards Self-Supervised Learning of Global and Object-Centric Representations

Published: 25 Mar 2022, Last Modified: 25 Nov 2024 · ICLR 2022 OSC Poster
Keywords: self-supervised learning, object representations, contrastive loss, slot attention, vision transformer, CLEVR
TL;DR: We discuss the interplay of attention, global and per-object contrastive losses, and data augmentation for learning object representations through self-supervision.
Abstract: Self-supervision allows learning meaningful representations of natural images, which usually contain one central object. How well does it transfer to multi-entity scenes? We discuss key aspects of learning structured object-centric representations with self-supervision and validate our insights through several experiments on the CLEVR dataset. Regarding the architecture, we confirm the importance of competition for attention-based object discovery, where each image patch is exclusively attended by one object. For training, we show that contrastive losses equipped with matching can be applied directly in a latent space, avoiding pixel-based reconstruction. However, such an optimization objective is sensitive to false negatives (recurring objects) and false positives (matching errors). Careful consideration is thus required around data augmentation and negative sample selection. Code, datasets, and notebooks are available at https://github.com/baldassarreFe/iclr-osc-22.
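The abstract names two mechanisms: attention-based object discovery with competition across objects, and a per-object contrastive loss with matching applied directly in latent space. The sketch below illustrates both in PyTorch under stated assumptions; it is not the paper's implementation. In particular, the slot-axis softmax step, the use of SciPy's Hungarian matching, and all names, dimensions, and the temperature are illustrative choices made here.

```python
# Minimal, illustrative sketch (not the authors' code) of:
#   (1) attention where slots compete for image patches, and
#   (2) a matched per-object contrastive (InfoNCE) loss between two views.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def competitive_slot_attention(slots, patches):
    """One attention step where slots compete for patches: softmax is taken over
    the slot axis, so each patch's attention mass sums to 1 across objects."""
    logits = slots @ patches.t()                 # (num_slots, num_patches)
    attn = logits.softmax(dim=0)                 # competition across slots per patch
    attn = attn / attn.sum(dim=1, keepdim=True).clamp_min(1e-8)  # weighted mean
    return attn @ patches                        # updated slot representations


def matched_slot_contrastive_loss(slots_a, slots_b, temperature=0.1):
    """InfoNCE over per-object slots from two augmented views of one image.
    Slots are matched across views (Hungarian matching on cosine similarity);
    each matched pair is a positive, the remaining slots act as negatives."""
    a = F.normalize(slots_a, dim=-1)
    b = F.normalize(slots_b, dim=-1)
    sim = a @ b.t()                              # (num_slots, num_slots)

    # Maximize total matched similarity (linear_sum_assignment minimizes cost).
    _, col_idx = linear_sum_assignment(-sim.detach().cpu().numpy())
    col_idx = torch.as_tensor(col_idx, device=sim.device)

    # After reordering columns, the positive for row i lies on the diagonal.
    logits = sim[:, col_idx] / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Duplicate objects (false negatives) and matching errors (false positives)
    # are not handled here; the abstract flags both as failure modes.
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    slots = torch.randn(7, 64)       # 7 object slots, 64-dim each (illustrative)
    patches = torch.randn(196, 64)   # e.g. 14x14 ViT patch embeddings
    updated = competitive_slot_attention(slots, patches)
    other_view = updated + 0.01 * torch.randn_like(updated)  # stand-in for a second view
    print(updated.shape, matched_slot_contrastive_loss(updated, other_view).item())
```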