Spatially Consistent Representation Learning
Abstract: Self-supervised learning has been widely used to obtain transferable representations from unlabeled images. Notably, recent contrastive learning methods have shown impressive performance on downstream image classification tasks. While these contrastive methods mainly focus on generating invariant global representations at the image level under semantics-preserving transformations, they are prone to overlooking the spatial consistency of local representations and therefore have a limitation as pretraining for localization tasks such as object detection and instance segmentation. Moreover, the aggressively cropped views used in existing contrastive methods can minimize representation distances between semantically different regions of a single image. In this paper, we propose a spatially consistent representation learning algorithm (SCRL) for multi-object and location-specific tasks. In particular, we devise a novel self-supervised objective that tries to produce coherent spatial representations of a randomly cropped local region under geometric translations and zooming operations. On various downstream localization tasks with benchmark datasets, the proposed SCRL shows significant performance improvements over image-level supervised pretraining as well as state-of-the-art self-supervised learning methods. Code is available at https://github.com/kakaobrain/scrl.
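
The following is a minimal sketch, not the authors' implementation, of what a spatial-consistency objective of this kind could look like: boxes sampled inside the overlap of two random crops are mapped into each view's coordinates, pooled from the corresponding feature maps with RoIAlign, and pulled together with a BYOL-style cosine loss. The helper names (`boxes_to_view`, `spatial_consistency_loss`), the crop/pool sizes, and the assumption of an online/target encoder pair are illustrative assumptions; see the official repository for the actual method.

```python
# Illustrative sketch of a spatially consistent objective (assumptions noted above).
import torch
import torch.nn.functional as F
from torchvision.ops import roi_align


def boxes_to_view(boxes, crop_xyxy, crop_size):
    """Map boxes from original-image coordinates into the pixel coordinates
    of a view cropped at `crop_xyxy` and resized to `crop_size`."""
    x0, y0, x1, y1 = crop_xyxy
    sx = crop_size / (x1 - x0)
    sy = crop_size / (y1 - y0)
    shift = boxes.new_tensor([x0, y0, x0, y0])
    scale = boxes.new_tensor([sx, sy, sx, sy])
    return (boxes - shift) * scale


def spatial_consistency_loss(feat_q, feat_k, boxes_img, crop_q, crop_k,
                             crop_size=224, pool=7):
    """feat_q, feat_k: (B, C, H, W) feature maps from the two branches.
    boxes_img: (N, 5) RoIs [batch_idx, x0, y0, x1, y1] sampled inside the
    intersection of the two crops, in original-image coordinates."""
    stride = crop_size / feat_q.shape[-1]          # pixels per feature cell
    idx = boxes_img[:, :1]
    bq = boxes_to_view(boxes_img[:, 1:], crop_q, crop_size)
    bk = boxes_to_view(boxes_img[:, 1:], crop_k, crop_size)
    # Pool the same physical region from each view's feature map.
    rq = roi_align(feat_q, torch.cat([idx, bq], 1), pool, 1.0 / stride)
    rk = roi_align(feat_k, torch.cat([idx, bk], 1), pool, 1.0 / stride)
    zq = F.normalize(rq.flatten(1), dim=1)
    zk = F.normalize(rk.flatten(1), dim=1)
    # BYOL-style loss: 2 - 2 * cosine similarity, averaged over boxes.
    return (2 - 2 * (zq * zk).sum(dim=1)).mean()
```

In this sketch, spatial consistency comes from the fact that both RoIAlign calls address the same region of the original image, so the loss penalizes local representations that change under the translation and zoom induced by the two random crops.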