Keywords: calibration, uncertainty quantification, sequential prediction, decision making
TL;DR: We connect calibration in regression and classification with the notion of parity (whether the next observation will increase or decrease relative to the current one), and propose methods to produce parity-calibrated predictions.
Abstract: In a sequential regression setting, a decision-maker may be primarily concerned with whether the future observation will increase or decrease compared to the current one, rather than with the actual value of the future observation. In this context, we introduce the notion of parity calibration, which captures the goal of calibrated forecasting for the increase-decrease (or ``parity'') event in a time series. Parity probabilities can be extracted from a forecasted distribution for the output, but we show that such a strategy leads to theoretical unpredictability and poor practical performance. We then observe that although the original task is regression, parity calibration can be expressed as binary calibration. Drawing on this connection, we use an online binary calibration method to achieve parity calibration. We demonstrate the effectiveness of our approach on real-world case studies in epidemiology, weather forecasting, and model-based control in nuclear fusion.
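The abstract mentions that parity probabilities can be extracted from a forecasted distribution for the output. A minimal sketch of that extraction step, assuming a Gaussian forecast (the distribution family and the function names here are illustrative assumptions, not the paper's method; the paper in fact argues this naive extraction alone is insufficient for calibration):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a N(mu, sigma^2) forecast, computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def parity_probability(y_current, mu, sigma):
    """Probability that the next observation exceeds the current one,
    i.e. P(Y_{t+1} > y_t), under a Gaussian forecast N(mu, sigma^2)."""
    return 1.0 - normal_cdf(y_current, mu, sigma)

# Example: a forecast centered exactly at the current value assigns
# probability 0.5 to an increase.
p = parity_probability(1.0, mu=1.0, sigma=1.0)
```

A sequence of such probabilities can then be treated as binary forecasts of the increase event and post-processed with an online binary calibration method, which is the connection the abstract draws.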
Supplementary Material: pdf
Other Supplementary Material: zip