Can we Defend Against the Unknown? An Empirical Study About Threshold Selection for Neural Network Monitoring
Keywords: Neural Network Runtime Monitoring, Machine Learning Safety, Threshold Optimization
TL;DR: We experimentally compare different approaches to threshold tuning for neural network runtime monitors.
Abstract: With the increasing use of neural networks in critical systems, runtime monitoring has become essential to reject unsafe predictions during inference. Various techniques have emerged to compute rejection scores that maximize the separability between the distributions of safe and unsafe predictions. The efficacy of these approaches is mostly evaluated using threshold-agnostic metrics, such as the area under the receiver operating characteristic curve. However, in real-world applications, deploying an effective monitor also requires identifying a good threshold to transform these scores into meaningful binary decisions. Despite its pivotal importance, the problem of threshold optimization has received little attention. A few studies touch upon this question, but they typically assume that the runtime data distribution mirrors the training distribution, which is a strong assumption, as monitors are meant to safeguard a system against potentially unforeseen threats. In this work, we present rigorous experiments on various image datasets to investigate: (1) the effectiveness of monitors in handling unforeseen threats, which are not available during threshold adjustment; (2) whether integrating generic threats into the threshold optimization scheme can enhance the robustness of monitors.
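To make the gap between threshold-agnostic evaluation and deployed binary decisions concrete, here is a minimal sketch (not the authors' code) of the setting the abstract describes: a monitor's rejection scores are evaluated with AUROC, a threshold is then fixed on known data, and the resulting detector faces a differently distributed, unforeseen threat. All score distributions and the 5% false-positive-rate threshold rule are hypothetical illustrations.

```python
# Minimal sketch (hypothetical data, not the paper's method):
# AUROC is threshold-agnostic, but a deployed monitor needs one
# concrete threshold, and unforeseen threats may shift under it.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical rejection scores: higher means "more likely unsafe".
safe_scores = rng.normal(loc=0.0, scale=1.0, size=1000)    # known-safe validation data
unsafe_scores = rng.normal(loc=2.0, scale=1.0, size=1000)  # threats seen at tuning time

scores = np.concatenate([safe_scores, unsafe_scores])
labels = np.concatenate([np.zeros(1000), np.ones(1000)])   # 1 = unsafe

# Threshold-agnostic metric commonly reported in monitoring papers.
print("AUROC:", roc_auc_score(labels, scores))

# One common threshold rule (assumed here for illustration): reject at
# most 5% of safe inputs, i.e., fix the false positive rate on safe data.
threshold = np.quantile(safe_scores, 0.95)

# At runtime, an unforeseen threat may follow a different distribution
# than the threats used to set the threshold, degrading detection.
unforeseen = rng.normal(loc=1.0, scale=1.5, size=1000)     # hypothetical unseen threat
detection_rate = np.mean(unforeseen > threshold)
print(f"Threshold: {threshold:.2f}, detection on unseen threat: {detection_rate:.2%}")
```

Running this sketch shows a high AUROC against the known threat while the detection rate on the shifted, unseen threat is noticeably lower, which is the robustness question the paper's experiments investigate.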
List Of Authors: Dang, Khoi Tran and Delmas, Kevin and Guiochet, Jérémie and Guérin, Joris
Latex Source Code: zip
Signed License Agreement: pdf
Submission Number: 113