Loss Landscape Matters: Training Certifiably Robust Models with Favorable Loss Landscape

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Blind Submission · Readers: Everyone
Keywords: Adversarial Examples, Certifiable Robustness, Certifiable Training, Loss Landscape, Deep Learning, Security
Abstract: In this paper, we study the problem of training certifiably robust models. Certifiable training minimizes an upper bound on the worst-case loss over the allowed perturbation, so the tightness of this upper bound is an important factor in building certifiably robust models. However, many studies have shown that Interval Bound Propagation (IBP) training, despite using much looser bounds, outperforms methods based on tighter bounds. We identify another key factor that influences the performance of certifiable training: \textit{smoothness of the loss landscape}. We examine linear relaxation based methods and find significant differences in the loss landscape across them. Based on this analysis, we propose a certifiable training method that utilizes a tighter upper bound and has a loss landscape with favorable properties. The proposed method achieves performance comparable to state-of-the-art methods under a wide range of perturbations.
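The abstract contrasts IBP with tighter bounding methods. As a point of reference, a minimal sketch of how IBP propagates interval bounds through an affine layer followed by a ReLU (the function names and shapes here are illustrative, not from the paper):

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate the box [l, u] through x -> W @ x + b.

    Uses the standard IBP identity: with center mu = (l+u)/2 and
    radius r = (u-l)/2, the output box has center W @ mu + b and
    radius |W| @ r.
    """
    mu = (l + u) / 2.0
    r = (u - l) / 2.0
    mu_out = W @ mu + b
    r_out = np.abs(W) @ r
    return mu_out - r_out, mu_out + r_out

def ibp_relu(l, u):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Example: input box [0,1]^2 through one affine layer.
W = np.array([[1.0, -1.0], [2.0, 1.0]])
b = np.zeros(2)
l, u = ibp_affine(np.zeros(2), np.ones(2), W, b)
# Row 1: x1 - x2 over [0,1]^2 lies in [-1, 1]
# Row 2: 2*x1 + x2 over [0,1]^2 lies in [0, 3]
```

Chaining these two steps layer by layer yields the (loose but cheap) output bounds that IBP training then plugs into the worst-case loss; linear relaxation methods replace each step with a tighter but more expensive relaxation.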
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We identify smoothness of the loss landscape as an important factor in building certifiably robust models and propose a method that achieves performance comparable to state-of-the-art certifiable training methods under a wide range of perturbations.
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=It3BVbq1me