Towards Domain Adversarial Methods to Mitigate Texture Bias

Published: 21 Jul 2022, Last Modified: 05 May 2023. SCIS 2022 Poster.
Keywords: Generalization, Domain Adaptation, Shape-Texture Bias
Abstract: The shape-texture conflict is key to understanding the behavior of Convolutional Neural Networks (CNNs) and their observed strong performance. This work proposes a domain adversarial training-inspired technique as a novel approach to mitigating texture bias. Instead of treating domains as the sources from which images are drawn, we treat them as inherent features of the images. The model is trained in a manner similar to domain adversarial training, where the source and target domains are defined as the original dataset and its augmented versions with minimal texture information (edge maps and stylized images), respectively. We show that using domain-invariant learning to capture a prior based on shape-texture information helps models learn robust representations. We perform extensive experiments on three subsets of ImageNet, namely ImageNet-20, ImageNet-200, and ImageNet-9. The results show that the proposed method outperforms standard Empirical Risk Minimization (ERM) in test accuracy and also achieves high accuracy on the Out-Of-Distribution (OOD) datasets ImageNet-R and NICO.
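Below is a minimal PyTorch sketch, not the authors' released code, of the kind of domain adversarial setup the abstract describes: a DANN-style gradient reversal layer feeds a binary domain head that tries to distinguish original images from their texture-suppressed counterparts (edge maps or stylized images), while the label head performs standard classification. Names such as DomainAdversarialNet, feat_dim, and lambda_ are illustrative assumptions, as is the choice to compute the class loss only on the original images.

```python
# Hedged sketch of domain adversarial training where the two "domains" are
# original images vs. texture-suppressed (edge-map / stylized) images.
# Specific names and loss choices are assumptions, not the paper's exact recipe.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None


def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)


class DomainAdversarialNet(nn.Module):
    def __init__(self, feature_extractor, feat_dim, num_classes):
        super().__init__()
        self.features = feature_extractor                   # e.g. a CNN backbone
        self.label_head = nn.Linear(feat_dim, num_classes)  # object classes
        self.domain_head = nn.Linear(feat_dim, 2)           # original vs. texture-reduced

    def forward(self, x, lambda_=1.0):
        f = self.features(x)
        return self.label_head(f), self.domain_head(grad_reverse(f, lambda_))


def training_step(model, optimizer, x_orig, y, x_aug, lambda_=1.0):
    """One step: classify original images; make features indistinguishable across domains."""
    ce = nn.CrossEntropyLoss()
    x = torch.cat([x_orig, x_aug], dim=0)
    domain_labels = torch.cat([
        torch.zeros(len(x_orig), dtype=torch.long),  # 0 = original images
        torch.ones(len(x_aug), dtype=torch.long),    # 1 = edge-map / stylized images
    ])
    class_logits, domain_logits = model(x, lambda_)
    # Class loss on the original (labeled) images; domain loss on both domains.
    loss = ce(class_logits[: len(x_orig)], y) + ce(domain_logits, domain_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The gradient reversal pushes the shared features to be invariant to whether texture is present, so the label head cannot rely on texture cues that separate the two domains and is encouraged to use shape information instead.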