Enhancing the Performance of Bandit-based Hyperparameter Optimization

Published: 01 Jan 2024 · Last Modified: 13 Feb 2025 · ICDE 2024 · CC BY-SA 4.0
Abstract: Bandit-based methods are commonly used for hyperparameter optimization (HPO), which is significant in data analytics. When confronted with numerous configurations and large, high-dimensional problems, existing bandit-based methods suffer from high evaluation cost and poor optimization performance. To address these challenges, we introduce an improved bandit-based approach that exhibits enhanced evaluation ability and is suitable for resource-limited settings. Specifically, our method first exploits feature and label information to construct representative groups for further evaluation. It then constructs two kinds of folds (i.e., general folds and special folds) to better evaluate each configuration during cross-validation. Additionally, we incorporate variance and subset size into the evaluation metric to assess configurations more comprehensively. We integrate our proposed method into three commonly used bandit-based methods, and experimental results on multiple datasets show that our method improves stability, with accuracy gains of 1% to 15% on the datasets tested. Moreover, because our method avoids configurations that are low-quality yet time-consuming to evaluate, it is consistently more efficient than existing bandit-based methods and can even halve the execution time on some datasets. In the few cases where it takes slightly more time, the accuracy improvement can be significant.
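To make the bandit-based setting concrete, the sketch below shows a minimal successive-halving loop (the inner loop of methods such as Hyperband) in which each configuration is scored by cross-validation on a growing data subset, and the score combines mean fold accuracy with a variance penalty and a subset-size discount. This is an illustration of the general idea only: the `cv_scores` surrogate, the exact form of `score`, and the halving schedule are assumptions for the example, not the paper's actual method.

```python
import random
import statistics

def cv_scores(config, data, k=3):
    # Hypothetical surrogate standing in for training/evaluating a model
    # with hyperparameter `config` on k cross-validation folds.
    # Pretends the optimum hyperparameter value is 0.3 and adds fold noise.
    rnd = random.Random(hash((config, len(data))) & 0xFFFFFFFF)
    base = 1.0 - abs(config - 0.3)
    return [max(0.0, min(1.0, base + rnd.gauss(0, 0.05))) for _ in range(k)]

def score(fold_accs, subset_frac, var_weight=1.0):
    # Assumed evaluation metric: mean fold accuracy, penalized by the
    # variance across folds, and discounted when the data subset is small
    # (a score from little data is less trustworthy).
    mean = statistics.mean(fold_accs)
    var = statistics.pvariance(fold_accs)
    return subset_frac * (mean - var_weight * var)

def successive_halving(configs, data, min_frac=0.125):
    # Classic successive halving: evaluate all surviving configurations on
    # a data subset, keep the better half, then double the subset size.
    frac = min_frac
    while len(configs) > 1:
        subset = data[: max(1, int(len(data) * frac))]
        ranked = sorted(
            configs,
            key=lambda c: score(cv_scores(c, subset), frac),
            reverse=True,
        )
        configs = ranked[: max(1, len(configs) // 2)]  # keep the better half
        frac = min(1.0, frac * 2)                      # double the budget
    return configs[0]
```

A usage example: `successive_halving([0.05 * i for i in range(20)], list(range(1000)))` eliminates half of the 20 candidate configurations per round while quadrupling the per-configuration data budget, which is the resource trade-off that the paper's grouping, fold construction, and variance-aware metric aim to make more reliable.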
