A review on multi-fidelity hyperparameter optimization in machine learning

Published: 01 Jan 2025, Last Modified: 27 Sept 2025 · ICT Express 2025 · CC BY-SA 4.0
Abstract: Tuning hyperparameters effectively is crucial for improving the performance of machine learning models. However, hyperparameter optimization (HPO) often demands a significant computational budget, which is typically limited, so making efficient use of this constrained budget is critical. Multi-fidelity HPO has emerged as a promising solution to this problem. This paper presents a comprehensive review of multi-fidelity HPO in machine learning, discusses recent HPO algorithms, and proposes directions for future research.
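To illustrate the core idea behind multi-fidelity HPO, the sketch below implements successive halving, one representative multi-fidelity scheme: many configurations are screened cheaply at a low fidelity (e.g., few epochs or a small data subset), and only the top fraction is promoted to higher fidelities. This is a minimal illustration, not the algorithm surveyed in the paper; `sample_config` and `evaluate` are hypothetical stand-ins for a real search space and training routine.

```python
import random

def sample_config():
    """Draw a random hyperparameter configuration (illustrative search space)."""
    return {"lr": 10 ** random.uniform(-4, -1), "depth": random.randint(2, 8)}

def evaluate(config, budget):
    """Proxy objective: larger budgets yield less noisy performance estimates."""
    true_score = -abs(config["lr"] - 0.01) - 0.01 * abs(config["depth"] - 5)
    noise = random.gauss(0, 1.0 / budget)
    return true_score + noise

def successive_halving(n_configs=27, min_budget=1, eta=3, rounds=3):
    """Keep the top 1/eta of configurations each round, tripling the budget."""
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_budget
    for _ in range(rounds):
        scores = [(evaluate(c, budget), c) for c in configs]
        scores.sort(key=lambda s: s[0], reverse=True)  # higher score is better
        configs = [c for _, c in scores[: max(1, len(configs) // eta)]]
        budget *= eta  # survivors are re-evaluated at a higher fidelity
    return configs[0]

if __name__ == "__main__":
    best = successive_halving()
    print("Best configuration found:", best)
```

The key budget saving is that most configurations are discarded after only a cheap, low-fidelity evaluation; only a handful ever receive the full training budget.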