Degradation Attacks on Certifiably Robust Neural Networks

Published: 30 Nov 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: Certifiably robust neural networks protect against adversarial examples by employing run-time defenses that check whether the model is certifiably locally robust at the input under evaluation. We show through examples and experiments that any defense (whether complete or incomplete) based on checking local robustness is inherently over-cautious. Specifically, such defenses flag inputs for which local robustness checks fail yet that are not adversarial; i.e., they are classified consistently with all valid inputs within a distance of $\epsilon$. As a result, while a norm-bounded adversary cannot change the classification of an input, it can use norm-bounded changes to degrade the utility of certifiably robust networks by forcing them to reject otherwise correctly classifiable inputs. We empirically demonstrate the efficacy of such attacks against state-of-the-art certifiable defenses. Our code is available at https://github.com/ravimangal/degradation-attacks.
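The following is a minimal sketch (not taken from the paper or its codebase) of the phenomenon the abstract describes, using a toy linear binary classifier under the $L_2$ norm, where the exact certified radius has a closed form. The functions `predict`, `certified_radius`, `defended_predict`, and `degradation_attack` are illustrative names introduced here; the perturbation direction and the choice of a clean input with certified radius in $[\epsilon, 2\epsilon)$ are assumptions made for the example.

```python
import numpy as np

# Toy binary linear classifier f(x) = sign(w.x + b).
# For a linear model the exact certified L2 radius at x is |w.x + b| / ||w||_2,
# so local-robustness "certification" can be checked in closed form here.
rng = np.random.default_rng(0)
w = rng.normal(size=20)
b = 0.0
eps = 0.5  # norm bound used by both the adversary and the certified defense

def predict(x):
    return np.sign(w @ x + b)

def certified_radius(x):
    return abs(w @ x + b) / np.linalg.norm(w)

def defended_predict(x):
    # Run-time defense: reject (return None) unless the model is certifiably
    # locally robust at x with radius eps.
    return predict(x) if certified_radius(x) >= eps else None

def degradation_attack(x, eps):
    # Move x a distance eps straight toward the decision boundary. The
    # prediction is unchanged as long as the original margin exceeds eps,
    # but the certified radius shrinks by exactly eps, so any input with
    # radius in [eps, 2*eps) is pushed into the defense's "reject" region.
    direction = -predict(x) * w / np.linalg.norm(w)
    return x + eps * direction

# Construct a clean input whose certified radius is 1.5 * eps: it is
# accepted by the defense and provably robust at radius eps.
x = rng.normal(size=20)
x += (1.5 * eps - certified_radius(x)) * predict(x) * w / np.linalg.norm(w)

x_adv = degradation_attack(x, eps)
print("clean:    radius=%.3f defended prediction=%s" % (certified_radius(x), defended_predict(x)))
print("degraded: radius=%.3f defended prediction=%s" % (certified_radius(x_adv), defended_predict(x_adv)))
print("model prediction unchanged:", predict(x) == predict(x_adv))
```

In this toy setting, any clean input whose certified radius lies in $[\epsilon, 2\epsilon)$ is correctly classified and accepted by the defense, yet a perturbation of size at most $\epsilon$ forces the defense to reject it without ever changing the model's prediction; this is the over-cautiousness the abstract refers to.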
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/ravimangal/degradation-attacks
Assigned Action Editor: ~Nicolas_Papernot1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 251