Hyperparameter optimization (HPO) is a central concern in machine learning. Although many early stopping mechanisms have been proposed to improve HPO efficiency, little is understood about how the choice of early stopping criterion affects the reliability of early stopping decisions and, by extension, the overall outcome of HPO. This paper systematically investigates the impact of criterion selection on the effectiveness of early stopping-based HPO. Specifically, we introduce a set of criteria that incorporate uncertainty and demonstrate their practical value in making early stopping decisions more reliable. Through empirical experiments on HPO and NAS benchmarks, we confirm the critical role of criterion selection and shed light on the implications of incorporating uncertainty into the criterion. These empirical findings offer guidance for selecting and designing criteria, contributing to a deeper understanding of the mechanisms underlying early stopping-based HPO.
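To make the idea of an uncertainty-aware criterion concrete, the following is a minimal illustrative sketch, not the paper's actual method: a plain criterion stops a trial as soon as its validation accuracy trails the incumbent's, while a hypothetical uncertainty-aware variant stops only when the gap exceeds the recent variability of the trial's own learning curve (used here as a simple uncertainty proxy). All function names and the toy learning curves are invented for illustration.

```python
import statistics

def plain_criterion(trial_curve, incumbent_curve, epoch):
    # Stop the trial if its current validation accuracy trails the incumbent's.
    return trial_curve[epoch] < incumbent_curve[epoch]

def uncertainty_criterion(trial_curve, incumbent_curve, epoch, window=3):
    # Stop only when the performance gap exceeds the trial's recent
    # variability (a crude uncertainty proxy), so that noisy dips in the
    # learning curve do not trigger a premature stop.
    start = max(0, epoch - window + 1)
    recent = trial_curve[start:epoch + 1]
    sigma = statistics.pstdev(recent) if len(recent) > 1 else 0.0
    gap = incumbent_curve[epoch] - trial_curve[epoch]
    return gap > sigma

# A noisy trial that dips below the incumbent at epoch 2 but keeps improving.
trial     = [0.60, 0.72, 0.66, 0.74, 0.78]
incumbent = [0.62, 0.71, 0.70, 0.73, 0.74]

print(plain_criterion(trial, incumbent, 2))        # True: stopped on a noisy dip
print(uncertainty_criterion(trial, incumbent, 2))  # False: gap within noise, keep training
```

In this toy example the plain criterion discards a configuration that later overtakes the incumbent, whereas the uncertainty-aware criterion defers the decision; this is the kind of reliability difference the paper studies empirically.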