KITE: A Kernel-based Improved Transferability Estimation Method

22 Sept 2022 (modified: 14 Oct 2024) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Transfer Learning, Transferability Estimation
TL;DR: We propose KITE, a novel transferability estimation method for selecting the most effective pre-trained model for fine-tuning on a target dataset.
Abstract: Transferability estimation has emerged as an important problem in transfer learning. A transferability estimation method takes as input a set of pre-trained models and decides which pre-trained model can deliver the best transfer learning performance. Existing methods tackle this problem by analyzing the output of the pre-trained model or by comparing the pre-trained model with a probe model trained on the target dataset. However, neither approach is sufficient to provide reliable and efficient transferability estimates. In this paper, we present a novel perspective and introduce \textsc{Kite}, a \underline{K}ernel-based \underline{I}mproved \underline{T}ransferability \underline{E}stimation method. \textsc{Kite} is based on the key observations that the separability of the pre-trained features and the similarity of the pre-trained features to random features are two important factors for estimating transferability. Inspired by kernel methods, \textsc{Kite} adopts \emph{centered kernel alignment} as an effective way to assess feature separability and feature similarity. \textsc{Kite} is easy to interpret, fast to compute, and robust to the target dataset size. We evaluate the performance of \textsc{Kite} on a recently introduced large-scale model selection benchmark. The benchmark contains 8 source datasets, 6 target datasets, and 4 architectures, with a total of 32 pre-trained models. Extensive results show that \textsc{Kite} outperforms existing methods by a large margin for transferability estimation.
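
For readers unfamiliar with centered kernel alignment (CKA), the ingredient the abstract names, below is a minimal NumPy sketch of linear CKA. The two illustrative scores at the end (features vs. one-hot labels as a separability probe; features vs. random-network features as a similarity-to-random probe) follow the abstract's description, but the choice of a linear kernel and the pairings are our assumptions for illustration, not the paper's actual scoring formula.

```python
import numpy as np

def centered_gram(X):
    """Double-centered linear Gram matrix of a feature matrix X (n_samples x dim)."""
    K = X @ X.T
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return H @ K @ H

def cka(X, Y):
    """Linear CKA: <Kc, Lc>_F / (||Kc||_F * ||Lc||_F) for centered Gram matrices."""
    Kc, Lc = centered_gram(X), centered_gram(Y)
    hsic = np.sum(Kc * Lc)  # Frobenius inner product (unnormalized HSIC estimate)
    return hsic / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

# Hypothetical usage with synthetic stand-ins (not the paper's data or model):
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))              # pre-trained features on target data
labels = np.eye(10)[rng.integers(0, 10, 100)]   # one-hot target labels
rand = rng.normal(size=(100, 64))               # features from a random network
print(cka(feats, labels))  # higher -> features align with labels (separability)
print(cka(feats, rand))    # higher -> features resemble random features
```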
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/kite-a-kernel-based-improved-transferability/code)