LAMDA: Two-Phase HPO via Learning Prior from Low-Fidelity Data

Published: 20 Jan 2026 · Last Modified: 25 Jan 2026 · AAAI 2026 · CC BY-NC-SA 4.0
Abstract: Hyperparameter Optimization (HPO) is crucial in machine learning, aiming to tune hyperparameters to improve model performance. Although existing methods that leverage prior knowledge—drawn from either previous experiments or expert insights—can accelerate optimization, acquiring a correct prior for a specific HPO task is non-trivial. In this work, we propose to relieve the reliance on external knowledge by learning a reliable prior \emph{directly} from low-fidelity (LF) problems. We introduce \texttt{Lamda}, an algorithm-agnostic framework designed to boost any baseline HPO algorithm. Specifically, \texttt{Lamda} operates in two phases: (1) it learns a reliable prior by exploring the LF landscape under a limited computational budget, and (2) it leverages this learned prior to guide the HPO process. We showcase how the framework can be integrated with various HPO algorithms to boost their performance, and further provide theoretical analysis of its integration with Bayesian optimization and the bandit-based Hyperband. We conduct experiments on HPO problems spanning diverse domains and model scales. Results show that \texttt{Lamda} consistently enhances its baseline algorithms. Compared to nine state-of-the-art HPO algorithms, our variant achieves the best performance in out of HPO tasks and is the second-best algorithm in the remaining cases.
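The two-phase idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the random-search exploration, the "top-k configurations as prior" stand-in for a learned prior, and the Gaussian perturbation in phase 2 are all illustrative assumptions.

```python
import random

def two_phase_hpo(objective_lf, objective_hf, sample_config,
                  lf_budget=30, hf_budget=10, top_k=5, seed=0):
    """Hypothetical sketch of a two-phase scheme: explore cheap low-fidelity
    (LF) evaluations to form a prior, then bias the expensive high-fidelity
    (HF) search toward that prior. Objectives are minimized."""
    rng = random.Random(seed)
    # Phase 1: explore the LF landscape under a small budget.
    lf_results = []
    for _ in range(lf_budget):
        cfg = sample_config(rng)
        lf_results.append((cfg, objective_lf(cfg)))
    # "Prior": keep the top-k LF configurations (a crude stand-in for a
    # learned distribution over promising regions).
    prior = [cfg for cfg, _ in sorted(lf_results, key=lambda t: t[1])[:top_k]]
    # Phase 2: spend the HF budget near the prior, here by perturbing
    # prior configurations with small additive noise.
    best_cfg, best_score = None, float("inf")
    for _ in range(hf_budget):
        base = rng.choice(prior)
        cand = {k: v + rng.gauss(0, 0.05) for k, v in base.items()}
        score = objective_hf(cand)
        if score < best_score:
            best_cfg, best_score = cand, score
    return best_cfg, best_score
```

As a toy usage, one could take `objective_hf = lambda c: (c["lr"] - 0.3) ** 2` with a slightly biased LF proxy and `sample_config = lambda rng: {"lr": rng.uniform(-1, 1)}`; the LF phase concentrates the prior near 0.3, so the HF budget is spent in the promising region.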