Cooperation or Competition: Avoiding Player Domination for Multi-target Robustness by Adaptive Budgets

Published: 21 Nov 2022, Last Modified: 05 May 2023 · TSRML 2022
Keywords: Multi-target Adversarial Training, Bargaining Game
TL;DR: For multi-target adversarial training, we identify a phenomenon named player domination that causes the non-convergence of previous algorithms, and we design a novel adaptive-budget method to achieve better robustness.
Abstract: Despite incredible advances, deep learning has been shown to be susceptible to adversarial attacks. Numerous approaches have been proposed to train robust networks, both empirically and certifiably. However, most of them defend against only a single type of attack, though recent work has moved toward defending against multiple attacks. In this paper, to understand multi-target robustness, we view this problem as a bargaining game in which different players (adversaries) negotiate to reach an agreement on a joint direction of parameter updating. We identify a phenomenon named \emph{player domination} in the bargaining game, and show that under this phenomenon, some of the existing max-based approaches, such as MAX and MSD, do not converge. Based on our theoretical results, we design a novel framework that adjusts the budgets of different adversaries to avoid player domination. Experiments on two benchmarks show that applying the proposed framework to existing approaches significantly advances multi-target robustness.
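The core idea of adjusting adversary budgets to avoid player domination could be sketched as follows. This is a hypothetical illustration, not the paper's exact algorithm: the function names, thresholds, and update rule are assumptions chosen for clarity. It shrinks the perturbation budget of an adversary whose loss dominates the others, so that a single player's gradient no longer dictates every joint parameter update.

```python
# Hypothetical sketch of adaptive budget adjustment (not the paper's exact
# algorithm): each adversary has a perturbation budget (epsilon); when one
# adversary's loss dominates the others, its budget is shrunk so the joint
# update direction reflects all players.

def adapt_budgets(budgets, losses, shrink=0.9, grow=1.05, ratio=1.5):
    """Return updated per-adversary budgets given their current losses.

    budgets: list of epsilon budgets, one per attack type (e.g., L2, Linf).
    losses:  list of adversarial losses measured under each attack.
    """
    mean_loss = sum(losses) / len(losses)
    new_budgets = []
    for eps, loss in zip(budgets, losses):
        if loss > ratio * mean_loss:
            # This player dominates the bargaining game: reduce its budget.
            new_budgets.append(eps * shrink)
        else:
            # Non-dominating player: gently restore its budget.
            new_budgets.append(eps * grow)
    return new_budgets
```

For example, with three adversaries at equal budgets but one producing a much larger loss, only the dominating adversary's budget is reduced while the others are slightly increased, rebalancing the negotiation over the joint update direction.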