Calibration and Uncertainty Estimation Challenges in Self-Supervised Chest X-ray Pathology Classification Models

Published: 27 Apr 2024, Last Modified: 01 Jun 2024, MIDL 2024 Short Papers, CC BY 4.0
Keywords: Uncertainty estimation, self-supervised models, chest X-ray pathology classification, calibration
Abstract: Uncertainty quantification is crucial for the safe deployment of AI systems in clinical radiology. We analyze the calibration of CheXzero (Tiu et al., 2022), a high-performance self-supervised model for chest X-ray pathology detection, on two external datasets and evaluate the effectiveness of two common uncertainty estimation methods: Maximum Softmax Probability (MSP) and Monte Carlo Dropout. Our analysis reveals poor calibration on both external datasets, with Expected Calibration Error (ECE) scores ranging from 0.12 to 0.41. Furthermore, we find that the model's prediction correctness does not correlate with the uncertainty scores produced by either method. These findings highlight the need for more robust uncertainty quantification methods to ensure the trustworthiness of AI-assisted clinical decision-making.
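For readers unfamiliar with the metrics named above, the sketch below illustrates how binned ECE, MSP-style uncertainty, and MC Dropout predictive spread are typically computed for a binary pathology probability. This is a minimal NumPy illustration under stated assumptions, not the authors' evaluation code; the array names (probs, labels, prob_samples) and the synthetic demo data are assumptions introduced here for clarity.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE for a binary pathology-present probability.

    probs  : (N,) predicted probability of the positive class
    labels : (N,) binary ground-truth labels
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    # Confidence = probability assigned to the predicted class.
    confidences = np.where(probs >= 0.5, probs, 1.0 - probs)
    predictions = (probs >= 0.5).astype(float)
    correct = (predictions == labels).astype(float)

    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # per-bin accuracy
            conf = confidences[mask].mean()  # per-bin mean confidence
            ece += mask.mean() * abs(acc - conf)
    return ece

def msp_uncertainty(probs):
    """MSP-style uncertainty: 1 minus the confidence of the predicted class."""
    probs = np.asarray(probs, dtype=float)
    return 1.0 - np.where(probs >= 0.5, probs, 1.0 - probs)

def mc_dropout_uncertainty(prob_samples):
    """Predictive std across T stochastic forward passes with dropout enabled.

    prob_samples : (T, N) probabilities from T passes
    """
    return np.asarray(prob_samples, dtype=float).std(axis=0)

if __name__ == "__main__":
    # Synthetic data only, to show the call pattern; real use would take
    # model probabilities and labels from an external test set.
    rng = np.random.default_rng(0)
    probs = rng.uniform(0.0, 1.0, size=1000)
    labels = rng.integers(0, 2, size=1000)
    correct = ((probs >= 0.5).astype(int) == labels).astype(float)
    print("ECE:", expected_calibration_error(probs, labels))
    print("corr(MSP uncertainty, correctness):",
          np.corrcoef(msp_uncertainty(probs), correct)[0, 1])
```

A correlation near zero between the uncertainty score and per-sample correctness, as computed in the demo above, is the kind of finding the abstract refers to when it states that accuracy does not track the MSP or MC Dropout uncertainty estimates.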
Submission Number: 160