On the calibration of survival models with competing risks

Published: 03 Feb 2026, Last Modified: 03 Feb 2026 | AISTATS 2026 Spotlight | CC BY 4.0
TL;DR: New calibration metrics for models that can handle competing risks.
Abstract: In survival analysis, accurate probability estimates are essential for decision-making, particularly in the competing-risks setting, where multiple event types are considered. Recent work has focused on the calibration of these probabilities in survival analysis. Yet calibration in the competing-risks setting is both under-explored and harder, because it must hold jointly across event types and across time. We show that existing calibration measures are not suited to the competing-risks setting and that recent models do not give well-behaved probabilities: competing risks need a dedicated calibration framework. To this end, we introduce two well-behaved calibration measures, together with methods to estimate and test them and to recalibrate models. We show that these calibration scores lead to a principled statistical framework: they are minimized by oracle estimators (i.e., both measures are proper), and they reveal calibration errors in modern models, which our recalibration methods correct, yielding well-calibrated probabilities while preserving discrimination.
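To make the notion of calibration across event types and time concrete, here is a minimal sketch (not the paper's proposed measures): a binned calibration error for the predicted cumulative incidence of one competing event at a fixed time horizon, assuming uncensored data for simplicity. The function `binned_calibration_error` and the toy data-generating process are illustrative assumptions; real estimators must handle censoring, e.g. via Aalen-Johansen estimates or inverse-probability-of-censoring weights.

```python
import numpy as np

def binned_calibration_error(pred_cif, event_type, event_time, horizon, k,
                             n_bins=10):
    """Binned calibration error for event k at time `horizon`.

    pred_cif   : (n,) predicted cumulative incidence P(T <= horizon, K = k).
    event_type : (n,) observed event label (1..K), uncensored by assumption.
    event_time : (n,) observed event times.
    """
    observed = ((event_time <= horizon) & (event_type == k)).astype(float)
    # Quantile bins of the predicted probabilities.
    bins = np.quantile(pred_cif, np.linspace(0, 1, n_bins + 1))
    bin_ids = np.clip(np.digitize(pred_cif, bins[1:-1]), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            # |mean predicted - observed frequency|, weighted by bin size.
            err += mask.mean() * abs(pred_cif[mask].mean() - observed[mask].mean())
    return err

# Toy usage: two competing exponential events with subject-specific rates.
rng = np.random.default_rng(0)
n = 5000
lam1 = rng.uniform(0.5, 2.0, n)          # rate of event 1, varies per subject
lam2 = 0.5                               # rate of event 2, shared
t1 = rng.exponential(1.0 / lam1)
t2 = rng.exponential(1.0 / lam2, n)
event_time = np.minimum(t1, t2)
event_type = np.where(t1 <= t2, 1, 2)
horizon = 1.0
# Oracle CIF of event 1: lam1/(lam1+lam2) * (1 - exp(-(lam1+lam2)*horizon)).
pred = lam1 / (lam1 + lam2) * (1 - np.exp(-(lam1 + lam2) * horizon))
print(binned_calibration_error(pred, event_type, event_time, horizon, k=1))
```

With oracle predictions the error is close to zero, illustrating the properness property described above; a full measure would aggregate such errors over event types and a grid of horizons.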
Submission Number: 782