Abstract: In this paper, we consider the problem of certifying the robustness of neural networks to perturbed and
adversarial input data. Such certification is imperative for
the application of neural networks in safety-critical decision-making and control systems. Certification techniques using
convex optimization have been proposed, but they often suffer
from relaxation errors that void the certificate. Our work
exploits the structure of ReLU networks to reduce relaxation
error through a novel partition-based certification procedure.
The proposed method is proven to tighten existing linear
programming relaxations, and asymptotically achieves zero
relaxation error as the partition is made finer. We develop a
finite partition that attains zero relaxation error and use the
result to derive a tractable partitioning scheme that minimizes
the worst-case relaxation error. Experiments using real data
show that the partitioning procedure is able to issue robustness
certificates in cases where prior methods fail. Partition-based
certification thus provides an intuitive, effective, and
theoretically justified means of tightening existing convex
relaxation techniques.
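
As a minimal one-dimensional sketch of the idea (an illustration consistent with the abstract, not the paper's full construction), consider a single ReLU $y = \max(0, x)$ with pre-activation bounds $x \in [\ell, u]$, where $\ell < 0 < u$. The standard LP ("triangle") relaxation is
\[
y \ge 0, \qquad y \ge x, \qquad y \le \frac{u\,(x - \ell)}{u - \ell},
\]
whose worst-case relaxation error, $-u\ell/(u - \ell) > 0$, occurs at $x = 0$. Partitioning the input interval at zero into $[\ell, 0]$ and $[0, u]$ makes the ReLU affine on each piece ($y = 0$ and $y = x$, respectively), so the relaxation restricted to each part is exact; this is the one-neuron case of a finite partition attaining zero relaxation error.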