- Keywords: generative models, self-supervised learning, data augmentation, anomaly detection
- Abstract: Data augmentation is often used to enlarge datasets with synthetic samples generated in accordance with the underlying data distribution. To enable a wider range of augmentations, we explore negative data augmentation strategies (NDA) that intentionally create out-of-distribution samples. We show that such negative out-of-distribution samples provide information on the support of the data distribution, and can be leveraged for generative modeling and representation learning. We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator. We prove that under suitable conditions, optimizing the resulting objective still recovers the true data distribution but can directly bias the generator towards avoiding samples that lack the desired structure. Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities. Further, we incorporate the same negative data augmentation strategy in a contrastive learning framework for self-supervised representation learning on images and videos, achieving improved performance on downstream image classification, object detection, and action recognition tasks. These results suggest that prior knowledge on what does not constitute valid data is an effective form of weak supervision across a range of unsupervised learning tasks.
- One-sentence Summary: We propose a Negative Data Augmentation (NDA) framework for generative models and self-supervised learning
- Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
- Code: [ermongroup/NDA](https://github.com/ermongroup/NDA) + [1 community implementation on Papers with Code](https://paperswithcode.com/paper/?openreview=Ovp8dvB8IBH)
- Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CIFAR-100](https://paperswithcode.com/dataset/cifar-100), [Cityscapes](https://paperswithcode.com/dataset/cityscapes), [DTD](https://paperswithcode.com/dataset/dtd), [ImageNet](https://paperswithcode.com/dataset/imagenet), [Places](https://paperswithcode.com/dataset/places), [SVHN](https://paperswithcode.com/dataset/svhn)
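
The abstract describes using NDA samples as an additional source of "fake" data for the GAN discriminator. The sketch below illustrates one plausible instantiation of that idea under stated assumptions: it is not the authors' implementation (see the linked repository for that). The NDA transform shown (a jigsaw patch shuffle), the hinge loss, the mixing weight `lam`, and the helper names `jigsaw_nda` / `nda_gan_d_loss` are all illustrative choices, not the paper's or the repo's API.

```python
# Hypothetical sketch: negative-augmented real images are treated as extra
# "fake" samples in the discriminator loss, as described in the abstract.
import torch
import torch.nn.functional as F


def jigsaw_nda(x: torch.Tensor, grid: int = 2) -> torch.Tensor:
    """One possible NDA: shuffle image patches so samples fall off the data manifold."""
    b, c, h, w = x.shape
    ph, pw = h // grid, w // grid
    # Split each image into grid*grid non-overlapping patches.
    patches = x.unfold(2, ph, ph).unfold(3, pw, pw)            # (b, c, grid, grid, ph, pw)
    patches = patches.reshape(b, c, grid * grid, ph, pw)
    perm = torch.randperm(grid * grid, device=x.device)
    patches = patches[:, :, perm]                              # permute patch order
    patches = patches.reshape(b, c, grid, grid, ph, pw)
    # Reassemble the shuffled patches back into full images.
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)


def nda_gan_d_loss(D, real, fake, lam: float = 0.5) -> torch.Tensor:
    """Hinge discriminator loss where NDA samples are mixed into the fake side."""
    neg = jigsaw_nda(real)                                     # out-of-distribution negatives
    loss_real = F.relu(1.0 - D(real)).mean()
    loss_fake = (1.0 - lam) * F.relu(1.0 + D(fake)).mean()
    loss_nda = lam * F.relu(1.0 + D(neg)).mean()
    return loss_real + loss_fake + loss_nda
```

The same negative transforms can plausibly serve as extra negatives in a contrastive (e.g. InfoNCE-style) objective, which is how the abstract describes the self-supervised variant; the weighting between generated fakes and NDA negatives would be a tunable design choice.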