ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness

Published: 21 Dec 2018, Last Modified: 14 Oct 2024. ICLR 2019 Conference Blind Submission. Readers: Everyone
Abstract: Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes. Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. We then demonstrate that the same standard architecture (ResNet-50) that learns a texture-based representation on ImageNet is able to learn a shape-based representation instead when trained on 'Stylized-ImageNet', a stylized version of ImageNet. This provides a much better fit for human behavioural performance in our well-controlled psychophysical lab setting (nine experiments totalling 48,560 psychophysical trials across 97 observers) and comes with a number of unexpected emergent benefits such as improved object detection performance and previously unseen robustness towards a wide range of image distortions, highlighting advantages of a shape-based representation.
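The texture-shape cue-conflict evaluation mentioned in the abstract reduces to a simple statistic: among all trials in which a model answers with either the shape category or the texture category of a cue-conflict image, what fraction are shape decisions? The sketch below is a hypothetical illustration of that computation, not the authors' released evaluation code; the `(predicted, shape, texture)` triples and the example class names are our own assumptions.

```python
# Hypothetical sketch: estimating shape bias from texture-shape cue-conflict trials.
# Assumes predictions have already been mapped to the same category set as the
# shape/texture labels (e.g. the paper's 16 entry-level categories) -- our assumption.

from collections import Counter

def shape_bias(trials):
    """trials: iterable of (predicted_class, shape_class, texture_class) triples.

    Shape bias = fraction of shape decisions among all trials where the model
    chose either the shape or the texture category; other responses are ignored.
    """
    counts = Counter()
    for pred, shape, texture in trials:
        if pred == shape:
            counts["shape"] += 1
        elif pred == texture:
            counts["texture"] += 1
    decided = counts["shape"] + counts["texture"]
    return counts["shape"] / decided if decided else float("nan")

# Toy example: one shape decision, one texture decision, one ignored response.
trials = [("elephant", "cat", "elephant"),   # texture decision
          ("cat", "cat", "elephant"),        # shape decision
          ("clock", "cat", "elephant")]      # neither -> ignored
print(shape_bias(trials))  # 0.5
```

A strongly texture-biased ImageNet CNN scores well below 0.5 on this measure, whereas human observers and the Stylized-ImageNet-trained ResNet-50 score much closer to 1.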
Keywords: deep learning, psychophysics, representation learning, object recognition, robustness, neural networks, data augmentation
TL;DR: ImageNet-trained CNNs are biased towards object texture (instead of shape like humans). Overcoming this major difference between human and machine vision yields improved detection performance and previously unseen robustness to image distortions.
Code: [rgeirhos/Stylized-ImageNet](https://github.com/rgeirhos/Stylized-ImageNet) + [6 community implementations](https://paperswithcode.com/paper/?openreview=Bygh9j09KX) on Papers with Code
Data: [Stylized ImageNet](https://paperswithcode.com/dataset/stylized-imagenet), [ImageNet](https://paperswithcode.com/dataset/imagenet), [ImageNet-A](https://paperswithcode.com/dataset/imagenet-a), [ImageNet-C](https://paperswithcode.com/dataset/imagenet-c), [ImageNet-R](https://paperswithcode.com/dataset/imagenet-r), [ImageNet-W](https://paperswithcode.com/dataset/imagenet-w), [VizWiz-Classification](https://paperswithcode.com/dataset/vizwiz-classification)
Community Implementations: [9 code implementations](https://www.catalyzex.com/paper/imagenet-trained-cnns-are-biased-towards/code) on CatalyzeX
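The repository linked above provides the code for generating Stylized-ImageNet. The paper's central intervention is then to train the unmodified ResNet-50 architecture on that stylized data. The following is a minimal sketch of such a training setup under our own assumptions (a local `stylized-imagenet/train/<class>/*.png` directory in the standard `ImageFolder` layout and generic hyperparameters); it is not the authors' released training script.

```python
# Minimal sketch (our assumptions, not the authors' training code): train a
# standard torchvision ResNet-50 on a Stylized-ImageNet directory so that the
# texture cue is no longer predictive and a shape-based representation can emerge.

import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed local path; any ImageFolder-style copy of Stylized-ImageNet works here.
train_set = datasets.ImageFolder("stylized-imagenet/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=256,
                                     shuffle=True, num_workers=8)

model = models.resnet50(num_classes=1000)   # same architecture as the ImageNet baseline
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```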