Benchmarking Few-shot Transferability of Pre-trained Models with Improved Evaluation Protocols

15 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Few-shot transfer, Benchmarks, Pretrained models
TL;DR: A unified, rigorous benchmark for evaluating few-shot transferability of pretrained models.
Abstract: Few-shot transfer has been made possible by stronger pre-trained models and improved transfer algorithms. However, there is no unified, rigorous evaluation protocol that is challenging yet reflects real-world usage. To this end, we carefully review previous evaluation principles and establish new standards informed by our empirical findings, covering the reporting of confidence intervals, the protocol for hyperparameter tuning, and the variation of ways and shots. With these standards, we create FewTrans, a few-shot transfer benchmark containing 10 challenging datasets from diverse domains with three sub-benchmarks: one that compares pre-trained models, one that compares transfer algorithms for vision-only models, and one that compares transfer algorithms for multimodal models. To facilitate future research, we reimplement and compare recent pre-trained models and transfer algorithms. We observe that, while stronger pre-trained models bring significant performance improvements, the performance of most transfer methods is quite close, and simply finetuning the whole backbone performs well enough, especially for multimodal models. We hope that the release of the FewTrans benchmark will streamline reproducible and rigorous advances in few-shot transfer learning research.
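
To illustrate the kind of protocol the abstract describes (confidence intervals reported over episodes whose ways and shots vary), here is a minimal Python sketch. It is an assumption of how such an evaluation loop could look, not the paper's actual implementation: the episode counts, the way/shot ranges, and the `run_episode` callback are all hypothetical placeholders.

```python
import random
import statistics

def evaluate_episodes(run_episode, num_episodes=600,
                      way_range=(5, 20), shot_range=(1, 10), seed=0):
    """Estimate few-shot accuracy with a 95% confidence interval.

    `run_episode(way, shot)` is a hypothetical callback that samples one
    N-way K-shot episode, adapts the model on the support set, and
    returns query-set accuracy. Ways and shots are drawn uniformly per
    episode, mirroring the idea of varying them rather than fixing a
    single setting such as 5-way 1-shot.
    """
    rng = random.Random(seed)
    accs = []
    for _ in range(num_episodes):
        way = rng.randint(*way_range)
        shot = rng.randint(*shot_range)
        accs.append(run_episode(way, shot))
    mean = statistics.fmean(accs)
    # 95% CI half-width for the mean, normal approximation (z = 1.96).
    half_width = 1.96 * statistics.stdev(accs) / len(accs) ** 0.5
    return mean, half_width
```

Reporting the mean together with the interval half-width (e.g. 71.3 ± 0.8) makes comparisons between transfer methods with closely matched performance, as observed in the paper, meaningful rather than noise-driven.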
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 38