Are CNNs biased towards texture rather than object shape?

Anonymous

17 Jan 2022 (modified: 05 May 2023) · Submitted to BT@ICLR2022 · Readers: Everyone
Keywords: CNNs, Adversarial examples, Robustness, Explainability
Abstract: Although we are seeing many exciting research papers advancing CNN architectures and their application domains, we still have little to no understanding of why these systems decide as they do. That is why we call them 'black boxes': we do not know the 'reasoning' behind a particular decision. Such behavior cannot be overlooked just because these systems score high on predefined metrics. For example, the Gender Shades project shows that several face recognition systems perform worse on under-represented demographic groups (an accuracy difference of up to 34% between lighter-skinned males and darker-skinned females). If such systems are used for law enforcement, airport, or employment screening, this bias can have major repercussions. This highlights the importance of 'explainability' in computer vision systems. 'Adversarial attacks' demonstrate one such counter-intuitive behavior of CNNs: images are specially crafted to fool a CNN into predicting the wrong label by adding noise that is indistinguishable to the human eye. Another such behavior is captured by the paper 'ImageNet-trained CNNs are biased towards texture'. Let's dive in…
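
To make the adversarial-example idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al., 2015), one common way such perturbations are generated. The model choice (torchvision's resnet18), the epsilon value, and the fgsm_attack helper are illustrative assumptions, not the method of the reviewed paper.

    # Minimal FGSM sketch -- illustrative only; model, epsilon, and helper
    # names are assumptions, not taken from the reviewed paper.
    import torch
    import torchvision.models as models

    model = models.resnet18(pretrained=True).eval()

    def fgsm_attack(image, label, epsilon=0.01):
        """Perturb a preprocessed image (1x3xHxW) so the model tends to
        mislabel it, while the change stays visually negligible."""
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that maximally increases the loss.
        return (image + epsilon * image.grad.sign()).detach()

    # Usage (x: preprocessed image tensor, y: true class index as a LongTensor):
    # x_adv = fgsm_attack(x, y)
    # print(model(x).argmax(1), model(x_adv).argmax(1))  # labels often differ

Even with a small epsilon, the perturbed image is typically indistinguishable from the original to a human, yet the predicted label changes.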
ICLR Paper: https://openreview.net/forum?id=Bygh9j09KX