On the Convergence of Certified Robust Training with Interval Bound Propagation

29 Sept 2021, 00:35 (modified: 16 Mar 2022, 06:26) · ICLR 2022 Poster
Keywords: Certified robustness, Adversarial robustness, Convergence
Abstract: Interval Bound Propagation (IBP) underlies the current state-of-the-art methods for training neural networks with certifiable robustness guarantees against adversarial perturbations, yet the convergence of IBP training has not been studied in the existing literature. In this paper, we present a theoretical analysis of the convergence of IBP robust training under an overparameterization assumption. We show that when IBP training is used to train a randomly initialized two-layer ReLU neural network with logistic loss, gradient descent converges linearly to zero robust training error with high probability, provided the perturbation radius is sufficiently small and the network width is sufficiently large.
One-sentence Summary: We present the first theoretical analysis on the convergence of certified robust training with interval bound propagation.
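The IBP procedure referenced in the abstract can be illustrated with a minimal NumPy sketch for a two-layer ReLU network. This is an illustrative reconstruction of standard interval bound propagation, not code from the paper; all names and shapes are assumptions. Each affine layer maps an input interval to an output interval exactly (via its center and radius), and ReLU, being monotone, maps interval endpoints to endpoints:

```python
import numpy as np

def ibp_forward(W1, b1, W2, b2, x, eps):
    """Propagate interval bounds through a two-layer ReLU network.

    Returns elementwise lower/upper bounds on the output logits over all
    inputs in an L-infinity ball of radius eps around x (illustrative sketch).
    """
    # Input interval [x - eps, x + eps] in center/radius form.
    c, r = x, np.full_like(x, eps)

    # First affine layer: center is mapped exactly; the radius is
    # propagated through the absolute values of the weights.
    c1 = W1 @ c + b1
    r1 = np.abs(W1) @ r
    l1, u1 = c1 - r1, c1 + r1

    # ReLU is monotone, so applying it to the endpoints gives valid bounds.
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)

    # Second affine layer, again in center/radius form.
    c2, r2 = (l1 + u1) / 2, (u1 - l1) / 2
    out_c = W2 @ c2 + b2
    out_r = np.abs(W2) @ r2
    return out_c - out_r, out_c + out_r
```

In certified training, the worst-case logits from such bounds replace the clean logits inside the loss (here, the logistic loss), so minimizing the loss drives the certified robust error down rather than just the clean error.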