The Adversarial Consistency of Surrogate Risks for Binary Classification

Published: 21 Sept 2023, Last Modified: 23 Dec 2023 · NeurIPS 2023 poster
Keywords: Adversarial learning, surrogate risks, optimal transport
TL;DR: We prove necessary and sufficient conditions for the statistical consistency of surrogate risks in the adversarial setting
Abstract: We study the consistency of surrogate risks for robust binary classification. It is common to learn robust classifiers by adversarial training, which seeks to minimize the expected $0$-$1$ loss when each example can be maliciously corrupted within a small ball. We give a simple and complete characterization of the set of surrogate loss functions that are \emph{consistent}, i.e., that can replace the $0$-$1$ loss without affecting the minimizing sequences of the original adversarial risk, for any data distribution. We also prove a quantitative version of adversarial consistency for the $\rho$-margin loss. Our results reveal that the class of adversarially consistent surrogates is substantially smaller than in the standard setting, where many common surrogates are known to be consistent.
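To fix notation (a standard setup with perturbation radius $\epsilon$ and real-valued score function $f$, assumed here for illustration rather than taken from the paper), the adversarial $0$-$1$ risk and its surrogate counterpart for a loss $\phi$ can be written as

$$R_\epsilon(f) = \mathbb{E}_{(\mathbf{x},y)}\Big[\sup_{\|\mathbf{x}'-\mathbf{x}\|\le\epsilon} \mathbf{1}\{\operatorname{sign} f(\mathbf{x}') \ne y\}\Big], \qquad R_\epsilon^\phi(f) = \mathbb{E}_{(\mathbf{x},y)}\Big[\sup_{\|\mathbf{x}'-\mathbf{x}\|\le\epsilon} \phi\big(y f(\mathbf{x}')\big)\Big],$$

with labels $y \in \{-1,+1\}$. In this setup, $\phi$ is adversarially consistent if every sequence $f_n$ with $R_\epsilon^\phi(f_n) \to \inf_f R_\epsilon^\phi(f)$ also satisfies $R_\epsilon(f_n) \to \inf_f R_\epsilon(f)$, for every data distribution. The $\rho$-margin loss mentioned in the abstract is commonly written $\phi_\rho(\alpha) = \min\big(1, \max(0, 1 - \alpha/\rho)\big)$.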
Supplementary Material: pdf
Submission Number: 9094