Abstract: In mission-critical domains such as sensor networks, operators often face the critical decision of whether to act on incomplete information or to collect missing values that might change the prediction. Existing methods typically focus on imputing missing values or quantifying model uncertainty, but they do not directly assess whether a prediction would remain stable if the missing values were revealed. To address this gap, we introduce a framework for Missing Value Uncertainty (MVU): the distribution of predictions induced by incomplete inputs at inference time. We formalize the problem by defining hard confidence, the probability that a prediction will not change after the missing data are collected. First, we propose a novel Direct Missing Value (DMV) method to efficiently estimate the MVU distribution, bypassing the need for expensive Monte Carlo sampling or model retraining. Second, we introduce the Missing Value Calibration Error (MVCE), a new metric specifically designed to evaluate the calibration of hard-confidence values, together with a post-hoc calibration procedure that improves MVU estimation. We showcase our method and metric on synthetic and real-world datasets.
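To make the notion of hard confidence concrete, the following is a minimal illustrative sketch (not the paper's DMV method, which avoids sampling): for a toy linear classifier with one missing feature, it estimates by Monte Carlo the probability that the prediction stays the same once the missing value is revealed. The classifier weights, the N(0, 1) distribution over the missing feature, and the mean imputation are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in model: predict class 1 if w @ x > 0.
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return int(w @ x > 0)

# Incomplete input: feature 1 is missing (encoded as np.nan).
x_obs = np.array([0.8, np.nan, 1.0])

# Baseline prediction from a simple mean imputation (assumed mean 0.0).
x_imputed = np.where(np.isnan(x_obs), 0.0, x_obs)
y_hat = predict(x_imputed)

# Monte Carlo estimate of hard confidence: sample plausible values for the
# missing feature (assumed ~ N(0, 1)) and measure how often the prediction
# agrees with the prediction made on the imputed input.
samples = rng.normal(0.0, 1.0, size=5000)
preds = []
for v in samples:
    x = x_obs.copy()
    x[np.isnan(x_obs)] = v
    preds.append(predict(x))
hard_confidence = float(np.mean(np.array(preds) == y_hat))
print(f"hard confidence ~ {hard_confidence:.3f}")
```

A hard confidence near 1 means the prediction is insensitive to the missing value, so collecting it is unlikely to matter; a value near 0.5 signals that acquiring the missing measurement could well flip the decision.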
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Michele_Caprio1
Submission Number: 7767