Keywords: HPO, multi-fidelity, overlapping, promising regions
Abstract: Multi-fidelity hyperparameter optimization (HPO) combines data from both high-fidelity (HF) and low-fidelity (LF) problems during optimization, enabling effective sampling and preliminary screening. Approaches that incorporate expert knowledge or transfer ability into the HPO algorithm have demonstrated superior performance, yet such domain knowledge, or abundant data from multiple similar tasks, is not always accessible. Observing that high-quality solutions in HPO exhibit some overlap between the high- and low-fidelity problems, we propose a two-phase framework, $\texttt{Lamda}$, to streamline multi-fidelity HPO. In the first phase, it searches the LF landscape to identify the promising regions of the LF problem. In the second phase, it leverages these promising regions to construct reliable priors that guide the HPO. We show how the $\texttt{Lamda}$ framework can be integrated with various HPO algorithms to boost their performance, and provide theoretical analysis for its integration with Bayesian optimization and with bandit-based Hyperband. We demonstrate the effectiveness of our framework across $56$ HPO tasks.
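The two-phase idea in the abstract can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: the quadratic `low_fidelity`/`high_fidelity` objectives stand in for cheap and expensive evaluations, phase 1 uses plain random search on the LF problem, and the "promising region" is simply the interval spanned by the best LF samples, which then serves as the sampling prior for the HF phase. The actual $\texttt{Lamda}$ framework is more sophisticated than this sketch.

```python
import random

# Hypothetical toy objectives at two fidelities (assumptions, not the
# paper's benchmarks): the LF proxy is cheap and its optimum (0.3) lies
# near, but not exactly at, the HF optimum (0.35).
def low_fidelity(x):
    return (x - 0.3) ** 2

def high_fidelity(x):
    return (x - 0.35) ** 2 + 0.01

def lamda_sketch(n_lf=200, n_hf=20, seed=0):
    rng = random.Random(seed)
    # Phase 1: search the LF landscape (random search here) and keep the
    # best 10% of samples to define a promising region.
    lf_samples = sorted((rng.uniform(0.0, 1.0) for _ in range(n_lf)),
                        key=low_fidelity)
    top = lf_samples[: n_lf // 10]
    lo, hi = min(top), max(top)
    # Phase 2: use the LF-derived region as a prior and spend the scarce
    # HF budget only inside it.
    hf_samples = [rng.uniform(lo, hi) for _ in range(n_hf)]
    return min(hf_samples, key=high_fidelity)

best = lamda_sketch()
```

Because the LF optimum is close to the HF one, the prior concentrates the expensive HF evaluations where they matter, which is the overlap property the abstract relies on.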
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7012