An Empirical Study of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration

TMLR Paper3628 Authors

05 Nov 2024 (modified: 07 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: In out-of-distribution (OOD) generalization tasks, fine-tuning pre-trained models has become a prevalent strategy. Unlike most prior work, which has focused on advancing learning algorithms, we systematically examine how pre-trained model size, pre-training dataset size, and training strategies affect generalization and uncertainty calibration on downstream tasks. Through extensive experiments totaling over 120,000 GPU hours, we evaluate 100 models spanning diverse pre-trained model sizes, five pre-training datasets, and five data augmentations on four distribution-shift datasets. Our results demonstrate the significant impact of pre-trained model selection: optimal choices substantially improve OOD accuracy beyond what algorithmic improvements alone achieve. We further find that larger models and bigger pre-training datasets not only enhance OOD performance but also improve calibration, helping to mitigate overconfidence, contrary to some prior studies that found modern deep networks to be worse calibrated than classical shallow models. Our work underscores the overlooked importance of pre-trained model selection for out-of-distribution generalization and calibration.
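Calibration here refers to how well a model's predicted confidence matches its empirical accuracy on the downstream (and shifted) test data; a standard way to quantify it is the expected calibration error (ECE). As a minimal illustrative sketch (not the paper's exact evaluation code, and the binning scheme and bin count are assumptions), ECE can be computed from softmax confidences and correctness indicators as follows:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Expected Calibration Error: the weighted average gap between
    mean confidence and accuracy within equal-width confidence bins."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()          # accuracy within the bin
            conf = confidences[in_bin].mean()     # mean confidence within the bin
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# Hypothetical usage with a fine-tuned classifier's outputs on an OOD test split:
# probs: (N, C) softmax probabilities, labels: (N,) ground-truth class indices
# confidences = probs.max(axis=1)
# correct = (probs.argmax(axis=1) == labels)
# print(expected_calibration_error(confidences, correct))
```

Lower ECE means better-calibrated predictions; "mitigating overconfidence" corresponds to shrinking the gap where mean confidence exceeds bin accuracy.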
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Mingsheng_Long2
Submission Number: 3628