Robust and Generalizable Visual Representation Learning via Random Convolutions

Published: 12 Jan 2021, Last Modified: 03 Apr 2024
Venue: ICLR 2021 Poster
Keywords: domain generalization, robustness, representation learning, data augmentation
Abstract: While successful for various computer vision tasks, deep neural networks have been shown to be vulnerable to texture style shifts and small perturbations to which humans are robust. In this work, we show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation. Random convolutions are approximately shape-preserving and may distort local textures. Intuitively, randomized convolutions create an infinite number of new domains with similar global shapes but random local textures. We therefore explore using the outputs of multi-scale random convolutions as new images, or mixing them with the original images, during training. When a network trained with our approach is applied to unseen domains, our method consistently improves performance on domain generalization benchmarks and is scalable to ImageNet. In particular, in the challenging scenario of generalizing to the sketch domain in PACS and to ImageNet-Sketch, our method outperforms state-of-the-art methods by a large margin. More interestingly, our method can benefit downstream tasks by providing a more robust pretrained visual representation.
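A minimal sketch of the augmentation described above, assuming a PyTorch-style pipeline; the function name `rand_conv`, the kernel-size pool, the weight scaling, and the random mixing coefficient are illustrative choices, not the authors' exact settings.

```python
import random
import torch
import torch.nn.functional as F

def rand_conv(img, kernel_sizes=(1, 3, 5, 7), mix=True):
    """Augment a batch of images with a freshly sampled random convolution.

    img: float tensor of shape (N, C, H, W).
    Returns a tensor of the same shape.
    """
    k = random.choice(kernel_sizes)              # multi-scale: pick a random kernel size
    c = img.shape[1]
    # Sample convolution weights; the 1/sqrt(c*k*k) scale keeps the output
    # variance roughly comparable to the input's.
    weight = torch.randn(c, c, k, k, device=img.device) / (c * k * k) ** 0.5
    out = F.conv2d(img, weight, padding=k // 2)  # roughly shape-preserving, texture-distorting
    if mix:
        # Randomly blend the augmented image with the original so that
        # global shape cues are retained to a varying degree.
        alpha = torch.rand(1, device=img.device)
        out = alpha * img + (1 - alpha) * out
    return out
```

A training loop would then apply `rand_conv` to each batch (or mix augmented and original images, as above) before the usual forward pass, leaving the task loss unchanged.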
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We use random convolutions as data augmentation to train robust visual representations that generalize to new domains.
Data: [ImageNet-Sketch](https://paperswithcode.com/dataset/imagenet-sketch), [MNIST](https://paperswithcode.com/dataset/mnist), [MNIST-C](https://paperswithcode.com/dataset/mnist-c), [MNIST-M](https://paperswithcode.com/dataset/mnist-m), [PACS](https://paperswithcode.com/dataset/pacs), [SVHN](https://paperswithcode.com/dataset/svhn)
Code: [2 community implementations](https://paperswithcode.com/paper/?openreview=BVSM0x3EDK6)