Can Calibration Improve Sample Prioritization?

04 Oct 2022, 08:28 (modified: 11 Nov 2022, 17:32) · HITY Workshop, NeurIPS 2022
Keywords: calibration, sample prioritization, subset selection, pre-trained models
TL;DR: We show that calibration not only reduces overconfident predictions but can also accelerate training, by improving the quality of samples selected during subset selection.
Abstract: Calibration can reduce overconfident predictions of deep neural networks, but can it also accelerate training? In this paper, we show that it can, when used to prioritize examples for subset selection. We study the effect of popular calibration techniques on selecting better subsets of samples during training (also called sample prioritization) and observe that calibration can improve the quality of subsets, reduce the number of examples per epoch (by at least 70%), and thereby speed up the overall training process. We further study the effect of coupling calibrated pre-trained models with calibration during training to guide sample prioritization, which again appears to improve the quality of the selected samples.
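The abstract does not spell out the paper's exact selection criterion, but the general idea of calibration-guided sample prioritization can be sketched as follows. This is a minimal illustration, not the authors' method: it assumes temperature scaling (one popular calibration technique) as the calibrator and least-confidence as the prioritization score; all function names and the temperature value are placeholders.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prioritize(logits, keep_frac=0.3, temperature=2.0):
    """Keep the `keep_frac` least-confident samples under
    temperature-scaled (calibrated) probabilities.

    `temperature` would normally be fit on a validation set;
    here it is a fixed placeholder value.
    """
    probs = softmax(logits / temperature)
    confidence = probs.max(axis=1)          # calibrated top-class probability
    k = max(1, int(len(logits) * keep_frac))
    return np.argsort(confidence)[:k]       # indices of the "hardest" samples

# Toy usage: keep 30% of 100 samples, mimicking the >70% per-epoch reduction.
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 10)) * 5.0
selected = prioritize(logits, keep_frac=0.3)
```

Training would then proceed each epoch only on `selected`, shrinking the number of examples per epoch while (per the paper's claim) preserving subset quality.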
Supplementary Material: zip