Independently-prepared Query-efficient Model Selection

15 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: model selection, machine learning, transfer learning
TL;DR: We propose a new model selection paradigm with both scalable preparations and scalable queries.
Abstract: The advancement of deep learning brings new models by the day, which not only heightens the importance of model selection but also makes it more challenging than ever. Yet existing solutions either require, every time models are selected for a task, a number of model operations proportional to the number of candidates, or require group preparations that jointly optimize the embedding vectors of many candidate models. As a result, the scalability of existing solutions is limited as the number of candidates grows. In this work, we present a new paradigm for model selection, namely independently-prepared query-efficient model selection. The advantage of this paradigm is twofold. First, it is query-efficient: it requires only a constant number of model operations each time it selects models for a new task. Second, it is independently prepared: any information about a candidate model that is necessary for selection can be prepared independently, with no interaction with other candidates. Consequently, the new paradigm offers by definition many properties desirable in applications: updatability, decentralizability, flexibility, and a degree of both candidate privacy and query privacy. With these benefits established, we present Standardized Embedder as a proof-of-concept solution to support the practicality of the proposed paradigm. We empirically evaluate this solution by selecting models for multiple downstream tasks from a pool of 100 pre-trained models covering different architectures and various training recipes, highlighting the potential of the proposed paradigm.
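To make the paradigm's two properties concrete, below is a minimal Python sketch of the workflow the abstract describes: each candidate model is mapped to a vector independently (preparation), and a new task is answered with a constant number of model operations followed by pure vector comparisons (query). The functions embed_model and embed_task are hypothetical placeholders; the paper's Standardized Embedder defines its own embedding procedure, which is not detailed on this page.

```python
# Sketch of independently-prepared, query-efficient model selection.
# embed_model / embed_task are hypothetical stand-ins for the paper's
# actual embedding procedure (Standardized Embedder).

import numpy as np

def embed_model(model) -> np.ndarray:
    """Hypothetical: map one candidate model to a fixed-length vector.
    Runs per candidate with no interaction with other candidates,
    so preparation can be decentralized and updated incrementally."""
    raise NotImplementedError

def embed_task(task_data) -> np.ndarray:
    """Hypothetical: map a downstream task into the same vector space,
    using a constant number of model operations regardless of pool size."""
    raise NotImplementedError

def prepare(models: dict) -> dict:
    # Preparation phase: one independent embedding per candidate.
    return {name: embed_model(m) for name, m in models.items()}

def select(task_data, prepared: dict, k: int = 5) -> list:
    # Query phase: embed the task once, then rank candidates by
    # cosine similarity -- no further model operations are needed,
    # so query cost in model operations stays constant as the pool grows.
    q = embed_task(task_data)
    q_norm = np.linalg.norm(q) + 1e-12
    scores = {
        name: float(v @ q) / (np.linalg.norm(v) * q_norm + 1e-12)
        for name, v in prepared.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Under this sketch, adding a new candidate only requires running embed_model on that candidate, and a query touches no candidate model at all, which is one way to read the updatability and privacy properties claimed in the abstract.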
Supplementary Material: zip
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 252