Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment

Published: 10 Oct 2024 (last modified: 27 Nov 2024). NeurIPS BDU Workshop 2024 Poster. License: CC BY 4.0.
Keywords: binary scoring classifier, calibration, recalibration, score heterogeneity, divergence, tree-based methods
Abstract: A binary scoring classifier can appear well-calibrated according to standard calibration metrics even when the distribution of its scores does not align with the distribution of the true event probabilities. In this paper, we investigate the impact of post-processing calibration (sometimes called "recalibration") on the score distribution. Using simulated data, where the true probabilities are known, and then real-world datasets with prior knowledge of the event distributions, we compare the performance of an XGBoost model before and after applying calibration techniques. The results show that while methods such as Platt scaling or isotonic regression can improve the model's calibration, they may also increase the divergence between the score distribution and the underlying event probability distribution.
Submission Number: 50
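The trade-off described in the abstract can be sketched on simulated data where the true probabilities are known. The snippet below is an illustrative assumption, not the paper's code: it substitutes scikit-learn's `GradientBoostingClassifier` for XGBoost, uses a simple equal-width-bin expected calibration error (ECE), and measures a binned KL divergence between the score distribution and the true-probability distribution before and after isotonic recalibration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated data with a known true probability p(x) for each example.
n = 20_000
X = rng.normal(size=(n, 5))
true_p = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] ** 2 - 0.5)))
y = rng.binomial(1, true_p)

X_tr, X_te, y_tr, y_te, p_tr, p_te = train_test_split(
    X, y, true_p, test_size=0.5, random_state=0
)

# Stand-in for the paper's XGBoost model (an assumption of this sketch).
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
raw = model.predict_proba(X_te)[:, 1]

# Post-hoc recalibration with isotonic regression. For brevity this fits
# on the evaluation labels; the proper protocol uses a held-out
# calibration fold.
iso = IsotonicRegression(out_of_bounds="clip").fit(raw, y_te)
cal = iso.predict(raw)

def ece(scores, labels, bins=10):
    """Expected calibration error over equal-width score bins."""
    idx = np.clip((scores * bins).astype(int), 0, bins - 1)
    err = 0.0
    for b in range(bins):
        mask = idx == b
        if mask.any():
            err += mask.mean() * abs(scores[mask].mean() - labels[mask].mean())
    return err

def hist_kl(p, q, bins=10, eps=1e-9):
    """KL divergence between binned distributions of two score vectors."""
    hp, _ = np.histogram(p, bins=bins, range=(0.0, 1.0))
    hq, _ = np.histogram(q, bins=bins, range=(0.0, 1.0))
    hp = (hp + eps) / (hp + eps).sum()
    hq = (hq + eps) / (hq + eps).sum()
    return float(np.sum(hp * np.log(hp / hq)))

ece_raw, ece_cal = ece(raw, y_te), ece(cal, y_te)
kl_raw, kl_cal = hist_kl(raw, p_te), hist_kl(cal, p_te)
print(f"ECE raw={ece_raw:.4f}  recalibrated={ece_cal:.4f}")
print(f"KL(scores || true p) raw={kl_raw:.4f}  recalibrated={kl_cal:.4f}")
```

Comparing the two printed lines shows the two axes the paper studies separately: ECE measures per-bin agreement between scores and observed outcomes, while the KL term measures how the overall shape of the score distribution matches the true probability distribution; recalibration optimizes the former and can move the latter in either direction.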