Fake & Square: Training Self-Supervised Vision Transformers with Synthetic Data and Synthetic Hard Negatives
Keywords: self-supervised learning, synthetic hard negatives, synthetic data, vision transformers, contrastive learning, representation learning
TL;DR: We enhance self-supervised vision transformer training by combining synthetic hard negatives in feature space with synthetic training data in image space, reducing dependency on large real-world datasets.
Abstract: This paper does not introduce a new method per se. Instead, we build on existing self-supervised learning approaches for vision, drawing inspiration from the adage "fake it till you make it". While contrastive self-supervised learning has achieved remarkable success, it typically relies on vast amounts of real-world data and carefully curated hard negatives. To explore alternatives to these requirements, we investigate two forms of "faking it" in vision transformers. First, we examine the feasibility of generating synthetic hard negatives in the representation space, creating diverse and challenging contrasts. Second, we study the potential of generative models for unsupervised representation learning, leveraging synthetic data to augment sample diversity. Our framework, dubbed Syn2Co, combines both approaches and evaluates whether synthetically enhanced training can lead to more robust and transferable visual representations on the DeiT-S and Swin-T architectures. Our findings highlight the promise and limitations of synthetic data in self-supervised learning, offering insights for future work in this direction.
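To make the first idea concrete, below is a minimal sketch of one common way to synthesize hard negatives in feature space: mixing the embeddings of the hardest existing negatives, in the spirit of hard-negative mixing (MoCHi). This is an illustrative assumption, not necessarily the exact Syn2Co procedure; all function names, tensor shapes, and hyperparameters (e.g. `num_synthetic`, `temperature`) are hypothetical.

```python
# Illustrative sketch of synthetic hard negatives in feature space.
# Not the paper's exact method; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def synthesize_hard_negatives(query, negatives, num_synthetic=16, pool_size=32):
    """Mix pairs of the negatives most similar to the query to create
    additional, harder negatives (MoCHi-style convex combinations)."""
    sims = negatives @ query                                   # (N,) similarity to query
    k = min(pool_size, negatives.size(0))
    hard = negatives[sims.topk(k).indices]                     # (k, D) hardest negatives

    # Convex combinations of random pairs of hard negatives.
    i = torch.randint(0, k, (num_synthetic,))
    j = torch.randint(0, k, (num_synthetic,))
    alpha = torch.rand(num_synthetic, 1)
    synthetic = alpha * hard[i] + (1 - alpha) * hard[j]        # (num_synthetic, D)
    return F.normalize(synthetic, dim=1)


def contrastive_loss(query, positive, negatives, temperature=0.2):
    """InfoNCE loss over real negatives plus synthetic hard negatives."""
    query = F.normalize(query, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)
    negatives = torch.cat([negatives, synthesize_hard_negatives(query, negatives)])

    pos_logit = (query @ positive).unsqueeze(0)                # (1,)
    neg_logits = negatives @ query                             # (N + num_synthetic,)
    logits = torch.cat([pos_logit, neg_logits]) / temperature
    target = torch.zeros(1, dtype=torch.long)                  # positive is index 0
    return F.cross_entropy(logits.unsqueeze(0), target)
```

In this sketch the synthetic negatives are generated on the fly from the current batch (or memory bank) and require no extra images; the second "faking it" axis, synthetic training data from a generative model, would instead enter upstream as additional input images.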
Submission Number: 10