TL;DR: We propose a new Bayesian model selection criterion, the "downstream free energy", which measures a model's adaptability to new tasks. The criterion can be estimated without access to downstream data and reliably predicts fine-tuning performance.
Abstract: Recent advances in artificial intelligence have been fueled by the development of foundation models such as BERT, GPT, T5, and Vision Transformers. These models are first pretrained on vast and diverse datasets and then adapted to specific downstream tasks, often with significantly less data. However, the mechanisms behind the success of this ubiquitous pretrain-then-adapt paradigm remain underexplored, particularly the characteristics of pretraining checkpoints that enhance downstream adaptation. We introduce a Bayesian model selection criterion, called the downstream free energy, which quantifies a checkpoint’s adaptability by measuring the concentration of nearby favorable parameters for a downstream task. We demonstrate that this Bayesian model selection criterion can be effectively implemented without access to the downstream data or prior knowledge of the downstream task. Furthermore, we provide empirical evidence that the criterion reliably correlates with improved fine-tuning performance, offering a principled approach to predicting model adaptability.
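To make the idea of a free-energy criterion concrete, here is a minimal sketch (not the authors' implementation) of a Laplace-style "local free energy" computed around a checkpoint using only training data: the loss at the checkpoint plus a curvature penalty, so that checkpoints sitting in regions with a high concentration of good parameters score lower. The toy logistic-regression model, the finite-difference Hessian, and the omission of a prior term are all simplifying assumptions for illustration.

```python
# Sketch: Laplace-style local free energy around a checkpoint (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretraining" dataset: logistic regression with 2 features.
n, d = 200, 2
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

def nll(w):
    """Average negative log-likelihood of the data under parameters w."""
    z = X @ w
    return np.mean(np.log1p(np.exp(-z)) + (1 - y) * z)

def hessian(f, w, eps=1e-4):
    """Central finite-difference Hessian of f at w (adequate for this tiny example)."""
    k = len(w)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * eps, np.eye(k)[j] * eps
            H[i, j] = (f(w + e_i + e_j) - f(w + e_i - e_j)
                       - f(w - e_i + e_j) + f(w - e_i - e_j)) / (4 * eps**2)
    return H

def local_free_energy(w_ckpt):
    """Laplace approximation (prior term dropped for simplicity):
    F ~= n * L(w) + 0.5 * log det(n * H / (2*pi)),
    where H is the Hessian of the average loss L at the checkpoint.
    Lower values indicate a tighter concentration of good parameters nearby."""
    H = hessian(nll, w_ckpt)
    _, logdet = np.linalg.slogdet(n * H / (2 * np.pi))
    return n * nll(w_ckpt) + 0.5 * logdet

# Compare two hypothetical checkpoints; under this criterion the one with the
# lower free energy would be preferred as a starting point for adaptation.
for w_ckpt in (np.array([1.4, -1.9]), np.array([0.2, 0.3])):
    print(w_ckpt, local_free_energy(w_ckpt))
```

The paper's contribution is to relate such a pretraining-data-only quantity to the downstream free energy, so this sketch should be read only as an illustration of the general free-energy computation, not of that relationship.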
Lay Summary: Foundation models, the AI systems behind tools like ChatGPT and image generators, are typically trained on massive datasets and then fine-tuned for specific applications. However, selecting the best version (checkpoint) of the initial model is a significant challenge, especially without knowledge of or access to future task data. This research introduces the "downstream free energy," a Bayesian statistical measure, to predict a checkpoint's adaptability, and proposes approximating it with a "pretraining free energy" that can be computed using only the initial training data. This approach offers a principled way to choose checkpoints, leading to AI models that adapt more effectively to diverse applications even when details of the future task are unavailable.
Primary Area: Probabilistic Methods->Bayesian Models and Methods
Keywords: transfer learning, Bayesian model selection, efficient fine-tuning, model adaptation
Submission Number: 7719