The Fundamental Limits of Neural Networks for Interval Certified Robustness

02 Jun 2022, 07:41 (modified: 15 Aug 2022, 17:12) · Accepted by TMLR
Abstract: Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning. However, despite substantial efforts, progress on this key challenge has stagnated, calling into question whether interval analysis is a viable path forward. In this paper we present a fundamental result on the limitations of neural networks for interval-analyzable robust classification. Our main theorem shows that non-invertible functions cannot be built such that interval analysis is precise everywhere. From this we derive a paradox: while every dataset can be robustly classified, there are simple datasets that cannot be provably robustly classified with interval analysis.
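For context, interval analysis propagates lower and upper bounds through a network layer by layer. The following minimal sketch (not from the paper; the helper names are illustrative) shows the kind of imprecision on non-invertible functions that the abstract alludes to: computing |x| as relu(x) + relu(-x) gives an exact function, yet IBP overapproximates its range because it treats the two branches as independent.

```python
def interval_neg(lo, hi):
    # Negation flips and swaps the bounds.
    return -hi, -lo

def interval_relu(lo, hi):
    # ReLU is monotone, so applying it to each endpoint is exact elementwise.
    return max(lo, 0.0), max(hi, 0.0)

def interval_add(a, b):
    # Addition of intervals assumes the operands vary independently.
    return a[0] + b[0], a[1] + b[1]

# |x| = relu(x) + relu(-x). True range of |x| on [-1, 1] is [0, 1].
lo, hi = -1.0, 1.0
pos = interval_relu(lo, hi)                    # bounds of relu(x):  (0.0, 1.0)
neg = interval_relu(*interval_neg(lo, hi))     # bounds of relu(-x): (0.0, 1.0)
bound = interval_add(pos, neg)                 # IBP bound: (0.0, 2.0)

print(bound)  # (0.0, 2.0) — strictly looser than the true range [0, 1]
```

The loss of precision arises because interval addition ignores the correlation between relu(x) and relu(-x) (they can never both be large), which is one face of the general limitation the paper formalizes.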
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: 1. Deanonymized version of the paper released. 2. All suggestions from reviewer f5Ug have been addressed in prior revisions.
Assigned Action Editor: ~Kuldeep_S._Meel2