Decision Theoretic Foundations for Conformal Prediction: Optimal Uncertainty Quantification for Risk-Averse Agents

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · License: CC BY 4.0
Abstract: A fundamental question in data-driven decision making is how to quantify the uncertainty of predictions to inform risk-sensitive downstream actions, as often required in domains such as medicine. We develop a decision-theoretic foundation linking prediction sets to risk-averse decision-making, addressing three questions: (1) What is the correct notion of uncertainty quantification for risk-averse decision makers? We prove that prediction sets are optimal for decision makers who wish to optimize their value at risk. (2) What is the optimal policy that a risk-averse decision maker should use to map prediction sets to actions? We show that a simple max-min decision policy is optimal for risk-averse decision makers. Finally, (3) How can we derive prediction sets that are optimal for such decision makers? We provide an exact characterization in the population regime and a distribution-free finite-sample construction. These insights lead to *Risk-Averse Calibration (RAC)*, a principled algorithm that is both *practical*—exploiting black-box predictions to enhance downstream utility—and *safe*—adhering to user-defined risk thresholds. We experimentally demonstrate RAC's advantages in medical diagnosis and recommendation systems, showing that it substantially improves the trade-off between safety and utility, delivering higher utility than existing methods while avoiding critical errors.
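To illustrate the max-min decision policy the abstract refers to, here is a minimal sketch (not the paper's RAC calibration procedure): given a prediction set over labels and a hypothetical action-by-label utility table, the rule picks the action whose worst-case utility over the set is largest. The utility values, action names, and the `max_min_action` helper are illustrative assumptions, not artifacts from the paper.

```python
import numpy as np

def max_min_action(prediction_set, utility):
    """Return the action maximizing the worst-case utility over labels in the prediction set.

    utility[a, y] is a hypothetical utility of taking action a when the true label is y.
    """
    worst_case = utility[:, sorted(prediction_set)].min(axis=1)  # worst outcome per action
    return int(np.argmax(worst_case))

# Toy example: 3 actions, 4 possible labels, prediction set {1, 3}.
utility = np.array([
    [1.0, -5.0, 0.5, -5.0],   # aggressive action: high payoff, but costly if label 1 or 3
    [0.2,  0.2, 0.2,  0.2],   # safe action: constant modest utility
    [0.8, -1.0, 0.6,  0.1],   # intermediate action
])
print(max_min_action({1, 3}, utility))  # -> 1 (the safe action wins under the worst case)
```

A larger prediction set forces the rule toward more conservative actions, which is the intended behavior for a risk-averse decision maker; how the prediction set itself is calibrated to meet a user-defined risk threshold is the subject of RAC and is not reproduced here.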
Lay Summary: Machine learning models are increasingly used to predict outcomes and guide decisions, especially in areas like medicine, where making the wrong choice can be costly or dangerous. Our research tackles this problem by linking uncertainty in predictions directly to safer and better decisions. We developed a new approach that helps decision-makers manage uncertainty effectively by providing clear guidance on what action to take when predictions are uncertain. First, we clarified what uncertainty really means for someone who wants to avoid the worst possible outcomes. Then, we identified a straightforward strategy that always chooses the safest option based on these uncertain predictions. Finally, we created a practical tool called Risk-Averse Calibration (RAC) that takes predictions from any existing model and determines the safest actions. RAC is designed to ensure that actions stay within an acceptable risk limit while maximizing the benefit gained. Through experiments in medical diagnosis and recommendation systems, we found that RAC significantly outperforms existing methods, consistently delivering higher benefits without compromising safety.
Link To Code: https://github.com/shayankiyani98/Risk-Averse-Calibration
Primary Area: Theory->Probabilistic Methods
Keywords: Decision making, Uncertainty Quantification, Prediction sets, Conformal prediction, Calibration, Risk Averse, Risk Aware, Risk sensitive, distribution free
Submission Number: 13862