TL;DR: We introduce Calibration-aware Semantic Mixing (CSM), a data augmentation framework that uses diffusion models to generate training samples with ground-truth confidence, improving model calibration.
Abstract: Model calibration seeks to ensure that models produce confidence scores that accurately reflect the true likelihood of their predictions being correct. However, existing calibration approaches are fundamentally tied to datasets with one-hot labels, which implicitly assume full certainty in every annotation. Such datasets are effective for classification but provide insufficient knowledge of uncertainty for model calibration, necessitating the curation of datasets annotated with numerically rich ground-truth confidence values. Because genuinely uncertain visual examples are scarce, however, such samples are rarely available in real datasets. In this paper, we introduce calibration-aware data augmentation to create synthetic datasets of diverse samples together with their ground-truth uncertainty. Specifically, we present **Calibration-aware Semantic Mixing (CSM)**, a novel framework that generates training samples with mixed class characteristics and annotates them with distinct confidence scores via diffusion models. Based on this framework, we propose calibrated reannotation to tackle the misalignment between the annotated confidence score and the mixing ratio during the diffusion reverse process. In addition, we explore loss functions that better fit this new data-representation paradigm. Experimental results demonstrate that CSM achieves superior calibration compared to state-of-the-art calibration approaches. Our code is [available here](https://github.com/E-Galois/CSM).
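To make the data-representation idea concrete, here is a minimal sketch of training targets that carry a ground-truth confidence rather than a one-hot label, together with a soft cross-entropy loss. This is an illustration only: in CSM the mixing ratio comes from the diffusion-based semantic mixing and calibrated reannotation, whereas here it is simply a given number, and the helper names are our own.

```python
import math

def mixed_confidence_target(num_classes, class_a, class_b, ratio):
    """Build a soft label: `ratio` confidence on class_a, the remainder
    on class_b. In CSM this ratio would be the (reannotated) ground-truth
    confidence of a semantically mixed sample; here it is an input."""
    target = [0.0] * num_classes
    target[class_a] = ratio
    target[class_b] = 1.0 - ratio
    return target

def soft_cross_entropy(probs, target, eps=1e-12):
    """Cross-entropy against a soft target distribution: the loss is
    minimized when the predicted probabilities match the target."""
    return -sum(t * math.log(p + eps) for p, t in zip(probs, target))

# A sample mixed 70/30 between classes 0 and 1:
target = mixed_confidence_target(3, class_a=0, class_b=1, ratio=0.7)
calibrated = soft_cross_entropy([0.7, 0.3, 0.0], target)
overconfident = soft_cross_entropy([0.99, 0.01, 0.0], target)
```

A prediction that reproduces the mixing ratio incurs a lower loss than an overconfident one, which is what pushes the trained model toward calibrated confidence scores.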
Lay Summary: Machine-learning models often give “confidence scores” alongside their predictions, but current methods for teaching models to be well-calibrated assume every training label is 100% certain, which isn’t realistic, especially when some images are genuinely ambiguous.
We introduce Calibration-aware Semantic Mixing (CSM), a way to synthetically blend visual samples using diffusion models so that each mixed image comes with a precisely known “ground-truth” confidence value. We also develop a “calibrated reannotation” step to correct those scores after generation and adapt our balanced training loss to this richer form of data.
By training on these varied, uncertainty-aware examples, models become significantly better at matching their reported confidence to actual accuracy, leading to more trustworthy predictions in real-world applications.
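"Matching reported confidence to actual accuracy" is commonly quantified with the expected calibration error (ECE), which bins predictions by confidence and measures the gap between average confidence and accuracy in each bin. A minimal sketch of that standard metric (our own helper, not code from the CSM repository):

```python
def expected_calibration_error(confidences, correct, num_bins=10):
    """ECE: partition predictions into equal-width confidence bins and
    average the |accuracy - mean confidence| gap, weighted by bin size."""
    bins = [[] for _ in range(num_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * num_bins), num_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# Well calibrated: 80% confidence, 4 of 5 correct -> ECE 0.
well = expected_calibration_error([0.8] * 5, [1, 1, 1, 1, 0])
# Overconfident: 90% confidence but only 1 of 4 correct -> large ECE.
over = expected_calibration_error([0.9] * 4, [1, 0, 0, 0])
```

A model trained on uncertainty-aware examples should drive this gap down, so that an "80% confident" prediction really is right about 80% of the time.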
Link To Code: https://github.com/E-Galois/CSM
Primary Area: Deep Learning->Robustness
Keywords: model calibration, semantic mixing, diffusion models
Submission Number: 6069