Algorithms with Calibrated Machine Learning Predictions

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
TL;DR: The algorithms-with-predictions framework assumes nothing about the quality of ML advice, but calibration guarantees are easy to achieve and enable the design of online algorithms with strong average-case performance.
Abstract: The field of *algorithms with predictions* incorporates machine learning advice in the design of online algorithms to improve real-world performance. A central consideration is the extent to which predictions can be trusted—while existing approaches often require users to specify an aggregate trust level, modern machine learning models can provide estimates of prediction-level uncertainty. In this paper, we propose *calibration* as a principled and practical tool to bridge this gap, demonstrating the benefits of calibrated advice through two case studies: the *ski rental* and *online job scheduling* problems. For ski rental, we design an algorithm that achieves near-optimal prediction-dependent performance and prove that, in high-variance settings, calibrated advice offers more effective guidance than alternative methods for uncertainty quantification. For job scheduling, we demonstrate that using a calibrated predictor leads to significant performance improvements over existing methods. Evaluations on real-world data validate our theoretical findings, highlighting the practical impact of calibration for algorithms with predictions.
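To make the ski rental setting concrete: a skier pays 1 per day to rent or a one-time price to buy, and the season length is unknown in advance. Below is a minimal Python sketch of how a probabilistic forecast of the season length can drive the rent-or-buy decision; the expected-cost rule and all names here are illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): choose the buy day that
# minimizes expected cost under a predicted distribution over season length.
# `dist[n]` is the forecast probability that the season lasts exactly n days.

def expected_cost(buy_day: int, buy_price: float, dist: np.ndarray) -> float:
    """Expected cost of renting through `buy_day` days, then buying."""
    days = np.arange(len(dist))
    # If the season ends on or before buy_day, we only ever rent (cost = n);
    # otherwise we rent for buy_day days and then pay the purchase price.
    cost = np.where(days <= buy_day, days, buy_day + buy_price)
    return float(dist @ cost)

def best_buy_day(buy_price: float, dist: np.ndarray) -> int:
    """Buy day with the smallest expected cost under the forecast."""
    return min(range(len(dist)), key=lambda k: expected_cost(k, buy_price, dist))

# Example: buying costs 10 days of rent; the forecast is uniform over 0..29 days.
dist = np.full(30, 1 / 30)
print(best_buy_day(10.0, dist))
```

The quality of this decision rule depends entirely on how trustworthy the forecast distribution is, which is exactly the gap the paper's calibration guarantees are meant to fill.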
Lay Summary: A line of recent research aims to make smarter decisions in uncertain situations by using machine learning (ML) to forecast the future. Naturally, the reliability of these forecasts plays a key role. Earlier methods typically require the decision-maker to quantify their trust in the model's predictions as a whole. However, modern ML models can provide much more detailed information, such as confidence levels for each individual prediction. One way to obtain this information is to ensure that a model's forecasts are *calibrated*, that is, that they match the real-world likelihood of outcomes. For example, on days when the model predicts a 70% chance of rain, it actually rains 70% of the time. For two important decision-making scenarios, (1) deciding whether to rent or buy an item and (2) scheduling tasks that need to be completed, we show that access to calibrated forecasts of the future allows the decision-maker to make more reliable choices on average than access to forecasts that are not calibrated.
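As an illustration of what calibration means in practice, the short Python sketch below bins predicted probabilities and compares them with observed frequencies, a standard reliability check; the function name and synthetic data are hypothetical, chosen only for illustration and not taken from the paper's code.

```python
import numpy as np

# Hypothetical reliability check: a calibrated forecaster's average prediction
# in each bin should match the empirical frequency of the outcome in that bin.

def reliability_table(probs: np.ndarray, outcomes: np.ndarray, n_bins: int = 10) -> None:
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            print(f"predicted ~{probs[mask].mean():.2f} -> observed {outcomes[mask].mean():.2f}")

# Example: a well-calibrated forecaster on synthetic rain data.
rng = np.random.default_rng(0)
probs = rng.uniform(size=10_000)                              # forecast rain probabilities
outcomes = (rng.uniform(size=10_000) < probs).astype(float)   # rain realized w.p. probs
reliability_table(probs, outcomes)
```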
Link To Code: https://github.com/heyyjudes/algs-cali-pred
Primary Area: Theory
Keywords: calibration, algorithms with predictions, uncertainty quantification, ski rental, job scheduling
Submission Number: 13817