Effective Certification of Monotone Deep Equilibrium Models

29 Sept 2021 (modified: 22 Oct 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: Deep Equilibrium Models, Certified Robustness, Convex Relaxations
Abstract: Monotone Operator Equilibrium Models (monDEQs) are a class of models that combine the powerful deep equilibrium paradigm with convergence guarantees. As monDEQs are inherently robust to adversarial perturbations, investigating new methods to certify their robustness is a promising research direction. Unfortunately, existing certification approaches are either imprecise or severely limited in their scalability. In this work, we propose the first scalable \emph{and} precise monDEQ verifier, based on two key ideas: (i) a novel convex relaxation enabling efficient inclusion checks, and (ii) non-trivial mathematical insights characterizing the fixed-point operations at the heart of monDEQs on sets rather than concrete inputs. An extensive evaluation of our verifier demonstrates that, for challenging $\ell_\infty$ perturbations, it exceeds the state of the art in speed (by two orders of magnitude) and scalability (by an order of magnitude) while yielding 25\% higher certified accuracy on the same networks.
One-sentence Summary: We obtain state-of-the-art results in robustness verification of monDEQs by proposing a novel convex relaxation that admits efficient fixed-point computation.
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:2110.08260/code)
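
For readers unfamiliar with the model class being certified, the sketch below illustrates the monDEQ forward pass in the standard formulation of Winston & Kolter (2020): the output is the fixed point $z^\star = \mathrm{ReLU}(W z^\star + U x + b)$, where $W$ is parameterized so that $I - W$ is strongly monotone. This is only an illustrative sketch on a single concrete input; the step size, matrix dimensions, and variable names are assumptions, and it is not the verifier proposed in the paper, which reasons about such fixed points over sets of inputs via a convex relaxation.

```python
import numpy as np

def mondeq_forward(W, U, b, x, alpha=0.1, tol=1e-6, max_iter=1000):
    """Damped (forward-backward) fixed-point iteration for a single-layer
    monDEQ: find z* satisfying z* = relu(W z* + U x + b).
    alpha must be small enough relative to the monotonicity and Lipschitz
    constants of I - W for the iteration to converge."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        # one forward-backward step: z <- relu((1 - alpha) z + alpha (W z + U x + b))
        z_new = np.maximum(0.0, (1 - alpha) * z + alpha * (W @ z + U @ x + b))
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z_new

# Parameterize W = (1 - m) I - A^T A + B - B^T with m > 0, which makes
# I - W strongly monotone (hypothetical sizes and scales for illustration).
m, d, p = 0.5, 16, 8
rng = np.random.default_rng(0)
A = 0.1 * rng.normal(size=(d, d))
B = 0.1 * rng.normal(size=(d, d))
W = (1 - m) * np.eye(d) - A.T @ A + B - B.T
U, b, x = rng.normal(size=(d, p)), np.zeros(d), rng.normal(size=p)
z_star = mondeq_forward(W, U, b, x)  # equilibrium activation for input x
```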