How does a Neural Network's Architecture Impact its Robustness to Noisy Labels?

May 21, 2021 (edited Nov 02, 2021) · NeurIPS 2021 Poster · Readers: Everyone
  • Keywords: noisy labels, architectural inductive bias, algorithmic alignment, graph neural networks
  • TL;DR: We provide a formal framework connecting the robustness of a network to noisy labels with the alignments between its architecture and target/noise functions.
  • Abstract: Noisy labels are inevitable in large real-world datasets. In this work, we explore an area understudied by previous works --- how the network's architecture impacts its robustness to noisy labels. We provide a formal framework connecting the robustness of a network to the alignments between its architecture and target/noise functions. Our framework measures a network's robustness via the predictive power in its representations --- the test performance of a linear model trained on the learned representations using a small set of clean labels. We hypothesize that a network is more robust to noisy labels if its architecture is more aligned with the target function than the noise. To support our hypothesis, we provide both theoretical and empirical evidence across various neural network architectures and different domains. We also find that when the network is well-aligned with the target function, its predictive power in representations could improve upon state-of-the-art (SOTA) noisy-label-training methods in terms of test accuracy and even outperform sophisticated methods that use clean labels.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/jinglingli/alignment_noisy_label
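The abstract measures a network's robustness via the "predictive power" of its representations: freeze the learned features, fit a linear model on a small set of cleanly labeled examples, and report its test accuracy. A minimal sketch of such a linear probe is below; the Gaussian features are a synthetic stand-in for real network activations, and all names and data here are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
# Hedged sketch of a linear probe on frozen representations.
# The synthetic two-cluster features below stand in for the activations
# of a trained network on a binary task (hypothetical data).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "representations": two Gaussian clusters in d dimensions.
n, d = 600, 16
y = rng.integers(0, 2, size=n)
means = np.where(y[:, None] == 1, 1.0, -1.0)
X = means + rng.normal(scale=2.0, size=(n, d))

# Small clean-label probe set vs. held-out test set.
X_clean, y_clean = X[:100], y[:100]
X_test, y_test = X[100:], y[100:]

def fit_linear_probe(X, y):
    """Ridge-regularized least-squares linear classifier on frozen features."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    t = 2.0 * y - 1.0                          # map labels {0,1} -> {-1,+1}
    w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(Xb.shape[1]), Xb.T @ t)
    return w

def probe_accuracy(w, X, y):
    """Test accuracy of the linear probe: the 'predictive power' proxy."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    pred = (Xb @ w > 0).astype(int)
    return float((pred == y).mean())

w = fit_linear_probe(X_clean, y_clean)
acc = probe_accuracy(w, X_test, y_test)
print(f"linear-probe test accuracy: {acc:.3f}")
```

Under the paper's hypothesis, representations from an architecture well-aligned with the target function would yield high probe accuracy even after training on noisy labels, while a misaligned architecture's representations would not.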
