A Replication Study of Transfer Learning with Informative Priors: Simple Baselines Better than Previously Reported

TMLR Paper 2214 Authors

16 Feb 2024 (modified: 17 Apr 2024) · Under review for TMLR
Abstract: We pursue transfer learning to improve classifier accuracy on a target task with few labeled examples available for training. Recent work suggests that using a source task to learn a prior distribution over neural net weights, not just an initialization, can boost target task performance. We perform a replication study with careful hyperparameter tuning of all methods on every dataset. We find that standard transfer learning informed only by an initialization performs far better than reported in previous comparisons. The relative gains of methods using informative priors over standard transfer learning vary in magnitude across 5 total datasets. For the scenario of 5-300 examples per class, we find negative or negligible gains on 2 datasets, modest gains (between 1.5 and 3 points of accuracy) on 2 other datasets, and substantial gains (>8 points) on one dataset. Among methods using informative priors, we find that an isotropic covariance appears competitive with a learned low-rank covariance matrix while being substantially simpler to understand and tune. Further analysis suggests that the mechanistic justification for informed priors -- hypothesized improved alignment between train and test loss landscapes -- is not consistently supported due to high variability in empirical landscapes. We release code to allow independent reproduction of all experiments.
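To illustrate the core idea the abstract describes -- using source-task weights to define an informative Gaussian prior over the network, not just an initialization -- a minimal MAP-style fine-tuning step with an isotropic prior centered at the source weights might look like the sketch below. This is an assumed, simplified PyTorch illustration, not the authors' released code; the names map_finetune_step, source_params, and prior_std are hypothetical.

```python
import torch
import torch.nn.functional as F

def map_finetune_step(model, source_params, batch, optimizer, prior_std=0.1):
    """One MAP training step with an isotropic Gaussian prior N(w_src, prior_std^2 I)
    centered at the source-task weights (illustrative sketch; values hypothetical)."""
    inputs, labels = batch
    optimizer.zero_grad()
    logits = model(inputs)
    nll = F.cross_entropy(logits, labels)  # negative log-likelihood on the target task
    # Negative log-prior (constants dropped): ||w - w_src||^2 / (2 * prior_std^2).
    prior_penalty = sum(
        ((p - p_src) ** 2).sum()
        for p, p_src in zip(model.parameters(), source_params)
    ) / (2 * prior_std ** 2)
    loss = nll + prior_penalty
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch: snapshot source-pretrained weights to serve as the prior mean.
# model = ...  # network initialized from source-task pretraining
# source_params = [p.detach().clone() for p in model.parameters()]
```

Standard transfer learning corresponds to dropping the prior_penalty term (or letting prior_std grow large), while the learned low-rank covariance methods discussed in the abstract replace the single prior_std with a richer covariance estimated on the source task.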
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Vincent_Fortuin1
Submission Number: 2214