Learning Visual Representations for Transfer Learning by Suppressing Texture

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission
Keywords: Suppressing Texture, Transfer learning, Self-Supervised Learning
Abstract: Recent works have shown that features obtained from supervised training of CNNs may over-emphasize texture rather than encoding high-level information. In self-supervised learning in particular, texture as a low-level cue may provide shortcuts that prevent the network from learning higher-level representations. To address these problems, we propose to use classic methods based on anisotropic diffusion to augment training with images whose texture has been suppressed. This simple method suppresses texture while retaining important edge information. We report our observations for fully supervised training and for self-supervised learning methods such as MoCoV2 and Jigsaw, and achieve state-of-the-art results on object detection and image classification across eight diverse datasets. Our method is particularly effective for transfer learning tasks, with improved performance on five standard transfer learning datasets. The large improvements on the Sketch-ImageNet and DTD datasets, along with additional visual analyses of saliency maps, suggest that our approach helps in learning better representations that transfer well.
One-sentence Summary: Suppressing texture leads to better transfer learning performance.
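The texture-suppression step described in the abstract relies on classic anisotropic diffusion (in the style of Perona-Malik), which smooths fine texture while preserving strong edges. Below is a minimal, generic sketch of such a filter in NumPy; the function name and the `kappa`/`gamma`/`n_iter` parameters are illustrative assumptions, not the authors' actual implementation or settings.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.1):
    """Perona-Malik-style anisotropic diffusion (generic sketch, not the
    paper's exact code). Iteratively smooths the image, but the
    edge-stopping function g(|grad I|) = exp(-(|grad I|/kappa)^2) damps
    diffusion across strong edges, so texture is suppressed while
    salient contours survive."""
    img = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours
        dn = np.roll(img, 1, axis=0) - img   # north
        ds = np.roll(img, -1, axis=0) - img  # south
        de = np.roll(img, -1, axis=1) - img  # east
        dw = np.roll(img, 1, axis=1) - img   # west
        # conduction coefficients: near 1 in flat (texture) regions,
        # near 0 across strong edges
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        # explicit diffusion update (stable for gamma <= 0.25)
        img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img
```

In an augmentation pipeline, the diffused image would simply be used alongside (or in place of) the original during training, so the network sees edge-preserving, texture-suppressed views of each sample.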
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=HlVRFHa_gR