One Leaf Reveals the Season: Occlusion-Based Contrastive Learning with Semantic-Aware Views for Efficient Visual Representation
TL;DR: We propose Occluded Image Contrastive Learning (OCL), a scalable method that contrasts masked image patches to learn semantic concepts efficiently without hand-crafted augmentations.
Abstract: This paper proposes a simple and scalable pre-training paradigm for efficient visual conceptual representation, called occluded image contrastive learning (OCL). Our OCL approach is straightforward: we randomly mask patches to generate different views within an image and contrast them across a mini-batch of images. The core idea behind OCL rests on two designs. First, masking tokens substantially reduces the conceptual redundancy inherent in images and creates distinct views whose fine-grained differences lie at the semantic-concept level rather than the instance level. Second, contrastive learning is adept at extracting high-level semantic conceptual features during pre-training, circumventing the high-frequency interference and additional cost associated with image reconstruction. Importantly, OCL learns highly semantic conceptual representations efficiently without relying on hand-crafted data augmentations or additional auxiliary modules. Empirically, OCL scales well with Vision Transformers: a ViT-L/16 completes pre-training in 133 hours on only 4 A100 GPUs and achieves 85.8\% accuracy on downstream fine-tuning tasks. Code is available at https://github.com/XiaoyuYoung/OCL.
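The abstract describes the core recipe: build two occluded views of each image by randomly dropping patch tokens, then contrast matching views across the mini-batch. Below is a minimal sketch of that idea, not the authors' implementation; the `encoder` interface, the 75\% mask ratio, the temperature, and the symmetric InfoNCE loss are all assumptions filled in for illustration rather than details given in the abstract.

```python
# Hypothetical sketch of the OCL idea from the abstract.
# Assumptions (not stated in the paper page): `encoder` maps a variable-length
# sequence of patch tokens to a pooled embedding, the mask ratio is 0.75, and
# an InfoNCE-style loss is used.
import torch
import torch.nn.functional as F


def random_mask(patch_tokens, mask_ratio=0.75):
    """Keep a random subset of patch tokens, producing one occluded view."""
    B, N, D = patch_tokens.shape
    n_keep = int(N * (1 - mask_ratio))
    idx = torch.rand(B, N, device=patch_tokens.device).argsort(dim=1)[:, :n_keep]
    return torch.gather(patch_tokens, 1, idx.unsqueeze(-1).expand(-1, -1, D))


def ocl_loss(encoder, patch_tokens, temperature=0.2):
    """Contrast two occluded views of each image across the mini-batch."""
    z1 = F.normalize(encoder(random_mask(patch_tokens)), dim=-1)  # view 1
    z2 = F.normalize(encoder(random_mask(patch_tokens)), dim=-1)  # view 2
    logits = z1 @ z2.t() / temperature  # (B, B) cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=logits.device)
    # Symmetric InfoNCE: the two views of the same image are positives,
    # all other images in the batch serve as negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Because most patch tokens are dropped before encoding, each forward pass processes only a fraction of the image, which is where the claimed efficiency over reconstruction-based pre-training would come from.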
Lay Summary: The success of pre-training large models in NLP has prompted efforts to replicate this paradigm in vision. However, visual pre-training requires an amount of computing that we cannot afford. We therefore designed a method that occludes most of each image to reduce the computation needed for pre-training. This allows us to build larger and deeper visual models and to speed up pre-training.
Primary Area: Deep Learning->Self-Supervised Learning
Keywords: contrastive learning, self-supervised pre-training
Submission Number: 9554