Investigating CNNs' Learning Representation under label noise

27 Sept 2018 (modified: 03 Apr 2024) · ICLR 2019 Conference Blind Submission
Abstract: Deep convolutional neural networks (CNNs) are known to be robust against label noise on large-scale datasets. At the same time, however, CNNs are capable of memorizing all labels even when they are random, which means they can memorize corrupted labels. Are CNNs robust or fragile to label noise? Much of the research on such memorization uses class-independent label noise to simulate label corruption, but this setting is simplistic and unrealistic. In this paper, we investigate the behavior of CNNs under class-dependent simulated label noise, generated based on the conceptual distance between classes of a large dataset (i.e., ImageNet-1k). Contrary to previous findings, we reveal that CNNs are more robust to such class-dependent label noise than to class-independent label noise. We also demonstrate that networks trained under class-dependent noise learn representations more similar to those learned without noise than networks trained under class-independent noise.
Keywords: learning with noisy labels, deep learning, convolutional neural networks
TL;DR: Are CNNs robust or fragile to label noise? Practically, robust.
Data: [ImageNet](https://paperswithcode.com/dataset/imagenet), [ImageNet-1K](https://paperswithcode.com/dataset/imagenet-1k-1)
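
To make the two noise models concrete, here is a minimal sketch of how they could be simulated. It is not the paper's implementation: the class-dependent scheme assumes "conceptual distance" can be approximated by WordNet path similarity between class synsets (ImageNet-1k classes are WordNet synsets), with flip-target probabilities proportional to similarity. The function names and this proportional scheme are illustrative assumptions.

```python
import numpy as np
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def class_independent_noise(labels, num_classes, noise_rate, rng):
    """Flip each label to a uniformly random *other* class with prob. noise_rate."""
    noisy = labels.copy()
    for i in np.where(rng.random(len(labels)) < noise_rate)[0]:
        noisy[i] = rng.choice([c for c in range(num_classes) if c != labels[i]])
    return noisy

def class_dependent_noise(labels, synsets, noise_rate, rng):
    """Flip each label with prob. noise_rate, preferring conceptually close
    classes. Assumed scheme: flip-target probability proportional to WordNet
    path similarity between the class synsets (diagonal excluded, so a label
    never "flips" to itself)."""
    n = len(synsets)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            s = synsets[i].path_similarity(synsets[j]) or 0.0
            sim[i, j] = sim[j, i] = s
    trans = sim / sim.sum(axis=1, keepdims=True)  # row-stochastic flip targets
    noisy = labels.copy()
    for i in np.where(rng.random(len(labels)) < noise_rate)[0]:
        noisy[i] = rng.choice(n, p=trans[labels[i]])
    return noisy

# Toy usage with a handful of ImageNet-style classes: under the
# class-dependent model, a "tabby" label is far more likely to flip to
# "tiger_cat" than to "airliner".
rng = np.random.default_rng(0)
synsets = [wn.synset(s) for s in
           ["tabby.n.01", "tiger_cat.n.01", "persian_cat.n.01",
            "golden_retriever.n.01", "airliner.n.01"]]
labels = rng.integers(0, len(synsets), size=1000)
noisy = class_dependent_noise(labels, synsets, noise_rate=0.4, rng=rng)
```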