Understanding Intrinsic Robustness Using Label Uncertainty

Sep 29, 2021 (edited Mar 17, 2022) · ICLR 2022 Poster
  • Keywords: Concentration of Measure, Intrinsic Adversarial Robustness, Label Uncertainty
  • Abstract: A fundamental question in adversarial machine learning is whether a robust classifier exists for a given task. A line of research has made some progress towards this goal by studying the concentration of measure, but we argue that standard concentration fails to fully characterize the intrinsic robustness of a classification problem, since it ignores data labels, which are essential to any classification task. Building on a novel definition of label uncertainty, we empirically demonstrate that error regions induced by state-of-the-art models tend to have much higher label uncertainty than randomly selected subsets. This observation motivates us to adapt a concentration estimation algorithm to account for label uncertainty, resulting in more accurate intrinsic robustness measures for benchmark image classification problems.
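The abstract's key empirical claim is that model error regions carry higher label uncertainty than random subsets of the same size. A minimal sketch of how one might measure this, assuming human soft labels are available (e.g., a distribution over classes per image, as in datasets like CIFAR-10H) — the function names and the exact uncertainty definition here are illustrative, not the paper's own formulation:

```python
import numpy as np

def label_uncertainty(soft_labels, assigned_labels):
    """Per-example label uncertainty: probability mass that a human
    soft-label distribution places on classes OTHER than the dataset's
    assigned label. (Illustrative definition; the paper's exact
    formulation may differ.)"""
    p_assigned = soft_labels[np.arange(len(assigned_labels)), assigned_labels]
    return 1.0 - p_assigned

def region_uncertainty(soft_labels, assigned_labels, region_mask):
    """Mean label uncertainty over a subset of examples,
    e.g., a model's error region or a random subset."""
    u = label_uncertainty(soft_labels, assigned_labels)
    return u[region_mask].mean()

# Toy soft labels for 4 examples over 3 classes (rows sum to 1).
soft = np.array([
    [0.9, 0.05, 0.05],
    [0.5, 0.4,  0.1 ],
    [0.2, 0.7,  0.1 ],
    [0.6, 0.3,  0.1 ],
])
y = np.array([0, 0, 1, 0])                          # assigned hard labels
err_region = np.array([False, True, False, True])   # hypothetical error region

print(region_uncertainty(soft, y, err_region))      # mean of 0.5 and 0.4
```

Comparing this statistic on an error region against its value on many random subsets of equal size is one way to operationalize the abstract's comparison; the paper then folds such uncertainty weights into concentration-of-measure estimation.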