Provable Editing of Deep Neural Networks using Parametric Linear Relaxation

Published: 25 Sept 2024 · Last Modified: 14 Jan 2025 · NeurIPS 2024 poster · CC BY-NC-ND 4.0
Keywords: Provable editing, provable repair, provable training, trustworthiness, linear programming, local robustness, verification
TL;DR: An efficient technique for provably editing a DNN's parameters so that the DNN satisfies a given property for all inputs in a given polytope.
Abstract: Ensuring that a DNN satisfies a desired property is critical when deploying DNNs in safety-critical applications. There are efficient methods that can verify whether a DNN satisfies a property, as seen in the annual DNN verification competition (VNN-COMP). However, the problem of provably editing a DNN to satisfy a property remains challenging. We present PREPARED, the first efficient technique for provable editing of DNNs. Given a DNN $\mathcal{N}$ with parameters $\theta$, input polytope $P$, and output polytope $Q$, PREPARED finds new parameters $\theta'$ such that $\forall \mathrm{x} \in P . \mathcal{N}(\mathrm{x}; \theta') \in Q$ while minimizing the changes $\lVert{\theta' - \theta}\rVert$. Given a DNN and a property it violates from the VNN-COMP benchmarks, PREPARED is able to provably edit the DNN to satisfy this property within 45 seconds. PREPARED is efficient because it relaxes the NP-hard provable editing problem to solving a linear program. The key contribution is the novel notion of Parametric Linear Relaxation, which enables PREPARED to construct tight output bounds of the DNN that are parameterized by the new parameters $\theta'$. We demonstrate that PREPARED is more efficient and effective than prior DNN editing approaches i) on the VNN-COMP benchmarks, ii) by editing CIFAR10 and TinyImageNet image-recognition DNNs and BERT sentiment-classification DNNs for local robustness, and iii) by training a DNN to model a geodynamics process and satisfy physics constraints.
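To make the LP view of provable editing concrete, the following is a minimal illustrative sketch, not the PREPARED implementation or its Parametric Linear Relaxation. It assumes a simplified setting: only the last affine layer is edited, and interval bounds $[l, u]$ on the penultimate activations over $P$ are already available, so the worst-case output constraint is convex piecewise-linear in the new parameters and can be encoded as a linear program (here via `scipy.optimize.linprog`). All names and data (`W0`, `b0`, `c`, `d`, `l`, `u`) are hypothetical example values.

```python
# Illustrative sketch only (NOT the PREPARED algorithm): edit the last affine
# layer y = W z + b so that c^T y <= d holds for every penultimate activation
# z in the box [l, u], while minimizing the max-norm of the parameter change.
# Because W, b enter y affinely, the worst case of c^T y over the box is
# convex piecewise-linear in (W, b), so the problem is an LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
k, n = 2, 3                      # output dim, penultimate dim (toy sizes)
W0 = rng.normal(size=(k, n))     # original last-layer weights
b0 = rng.normal(size=k)          # original bias
l, u = -np.ones(n), np.ones(n)   # interval bounds on penultimate activations over P
c = np.array([1.0, -1.0])        # output property: c^T y <= d (half-space of Q)
d = 0.5

# Decision variables: [vec(dW) (k*n), db (k), t (n), s (1)]
nW, nb, nt = k * n, k, n
N = nW + nb + nt + 1
iW = lambda i, j: i * n + j      # index of dW[i, j]
it = lambda j: nW + nb + j       # index of t[j] (epigraph of per-coordinate worst case)
i_s = N - 1                      # index of s (max-norm of the edit)

A, rhs = [], []
cW0 = c @ W0                     # constant part of a_j = (c^T (W0 + dW))_j
for j in range(n):
    for bnd in (l[j], u[j]):     # t_j >= a_j * l_j  and  t_j >= a_j * u_j
        row = np.zeros(N)
        for i in range(k):
            row[iW(i, j)] = bnd * c[i]
        row[it(j)] = -1.0
        A.append(row); rhs.append(-bnd * cW0[j])
# c^T (b0 + db) + sum_j t_j <= d  (implies the worst case over [l, u] is <= d)
row = np.zeros(N)
row[nW:nW + nb] = c
row[nW + nb:nW + nb + nt] = 1.0
A.append(row); rhs.append(d - c @ b0)
# |dW|, |db| <= s  (epigraph encoding of the max-norm objective)
for v in range(nW + nb):
    for sign in (1.0, -1.0):
        row = np.zeros(N)
        row[v] = sign; row[i_s] = -1.0
        A.append(row); rhs.append(0.0)

obj = np.zeros(N); obj[i_s] = 1.0           # minimize the max-norm change s
bounds = [(None, None)] * (N - 1) + [(0, None)]
res = linprog(obj, A_ub=np.array(A), b_ub=np.array(rhs), bounds=bounds)
dW = res.x[:nW].reshape(k, n); db = res.x[nW:nW + nb]
W1, b1 = W0 + dW, b0 + db
# Sanity check: exact worst case of c^T (W1 z + b1) over the box [l, u]
a = c @ W1
print("worst case:", c @ b1 + np.maximum(a * l, a * u).sum(), "<=", d)
```

The point of the sketch is the structure, not the specifics: once the edited parameters appear affinely in a bound on the DNN output, both the property constraint and the minimal-change objective become linear (or convex piecewise-linear) and fit in a single LP. The paper's Parametric Linear Relaxation is what makes such parameter-dependent bounds available for full nonlinear DNNs rather than only for a final affine layer.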
Primary Area: Safety in machine learning
Submission Number: 12706
