Abstract: Effective hyperparameter tuning is crucial for improving the performance of machine learning models. However, hyperparameter optimization (HPO) often demands a substantial computational budget, which is typically limited, so using this constrained budget efficiently is critical in HPO. Multi-fidelity HPO has emerged as a promising solution to this issue. This paper presents a comprehensive review of multi-fidelity HPO in machine learning, discusses recent HPO algorithms, and proposes directions for future research.
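To illustrate the multi-fidelity idea the abstract refers to, here is a minimal sketch of successive halving, one common multi-fidelity HPO scheme: all candidate configurations are first evaluated at a low fidelity (small budget), and only the best fraction survives to be re-evaluated at progressively higher budgets. The `evaluate` function and its `lr` parameter are hypothetical stand-ins for an actual training run, not anything from the paper.

```python
import random

def evaluate(config, budget):
    # Hypothetical stand-in for training a model for `budget` epochs;
    # a lower return value means a better configuration (e.g., validation loss).
    return abs(config["lr"] - 0.1) + 1.0 / budget

def successive_halving(configs, min_budget=1, eta=2, rounds=3):
    """Illustrative multi-fidelity HPO: score every config at a cheap
    budget, keep the best 1/eta fraction, then re-score the survivors
    with eta times more budget each round."""
    budget = min_budget
    for _ in range(rounds):
        scored = sorted(configs, key=lambda c: evaluate(c, budget))
        configs = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return configs[0]

random.seed(0)
candidates = [{"lr": random.uniform(1e-3, 1.0)} for _ in range(8)]
best = successive_halving(candidates)
```

With 8 candidates and `eta=2`, only one configuration ever receives the full budget, which is how multi-fidelity methods stretch a fixed computational budget across many more candidates than full-budget evaluation would allow.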
External IDs: dblp:journals/ict-express/WonLL25