A Spectral Perspective of Neural Networks Robustness to Label Noise

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: Label noise, Neural network robustness, Regularization methods, Spectral normalization, Fourier analysis
Abstract: Deep networks usually require a massive amount of labeled data for training. Yet, such data often contain label errors. Interestingly, networks have been shown to be robust to such errors. This work uses a spectral (Fourier) analysis of their learned mapping to explain this robustness. In particular, we relate the smoothness regularization that is typically present in conventional training to the attenuation of high frequencies, which mainly characterize noise. Using a connection between this smoothness and the spectral norm of the network weights, we suggest that robustness can be further improved via spectral normalization. Empirical experiments validate our claims and show the advantage of this normalization for classification with label noise.
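The spectral normalization referred to in the abstract divides a weight matrix by an estimate of its largest singular value, bounding the layer's Lipschitz constant. Below is a minimal NumPy sketch of this idea using power iteration; the function name and the fixed iteration count are illustrative choices, not taken from the paper.

```python
import numpy as np

def spectral_normalize(W, n_iters=20, eps=1e-12):
    """Divide W by a power-iteration estimate of its largest singular value.

    After normalization, the spectral norm of the returned matrix is ~1,
    which caps how much the layer can amplify any input direction.
    """
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        # Alternate multiplying by W^T and W to converge to the
        # leading left/right singular vectors.
        v = W.T @ u
        v /= (np.linalg.norm(v) + eps)
        u = W @ v
        u /= (np.linalg.norm(u) + eps)
    sigma = u @ W @ v  # estimate of the largest singular value
    return W / sigma

# Example: a matrix with singular values 3, 1, 0.5.
W = np.diag([3.0, 1.0, 0.5])
W_sn = spectral_normalize(W)
print(np.linalg.norm(W_sn, 2))  # spectral norm is now ~1.0
```

In practice (e.g. in GAN training, where this technique was popularized), the normalization is applied to each layer's weights at every forward pass, with the power-iteration vectors persisted across steps so a single iteration per step suffices.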
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We show that spectral normalization attenuates high frequencies in the learned mapping of neural networks, which improves their robustness to label noise.
Reviewed Version (pdf): https://openreview.net/references/pdf?id=EDQQnYO8Lg
