Abstract: Neural networks are increasingly applied to support decision-making in safety-critical applications such as autonomous cars, unmanned aerial vehicles, and face-recognition-based authentication. While many impressive static verification techniques have been proposed to tackle the correctness problem of neural networks, they still do not answer a natural question: what should one do if the network fails verification? In this work, we propose a runtime repair method to ensure the correctness of neural networks within certain input regions. Given a neural network and a safety property, we first apply state-of-the-art static verification techniques to verify the network. If verification fails, we strategically identify locations at which to introduce additional gates that “correct” the network's behavior at runtime while keeping the modifications small. Experimental results show that our approach effectively generates neural networks that are guaranteed to satisfy the given properties while remaining consistent with the original network most of the time.
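The gate-based repair described above can be illustrated with a minimal sketch. The network, the input region, and the multiplicative gate form below are all illustrative assumptions for exposition; they are not the paper's actual construction or repair algorithm:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def original_net(x, W1, b1, W2, b2):
    # Toy two-layer network standing in for the DNN under repair.
    return W2 @ relu(W1 @ x + b1) + b2

def gated_net(x, W1, b1, W2, b2, region, gate):
    """Run the network, but if x falls inside a region where static
    verification failed, apply a corrective gate to the hidden layer
    (hypothetical repair: scale selected hidden neurons)."""
    h = relu(W1 @ x + b1)
    lo, hi = region
    if np.all(x >= lo) and np.all(x <= hi):
        h = h * gate  # gate fires only inside the flagged region
    return W2 @ h + b2
```

Outside the flagged region the gate never fires, so the repaired network is exactly consistent with the original one; inside it, the gate adjusts hidden activations to steer the output toward the safe range.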
External IDs: dblp:conf/qrs/DongSWWD21