CF-HPO: Counterfactual Explanations for Hyperparameter Optimization

TMLR Paper6846 Authors

06 Jan 2026 (modified: 28 Jan 2026) · Under review for TMLR · CC BY 4.0
Abstract: Hyperparameter optimization (HPO) is a fundamental component of studies that employ technologies such as machine learning and deep learning. Regardless of the field, almost every such study requires hyperparameter optimization at some level. In general, applying HPO to a developed system improves its performance by optimizing multiple parameters jointly. However, existing HPO methods do not explain why specific configurations succeed, which settings should be avoided, or what could be improved. The present study addresses this gap by introducing CF-HPO, a modular framework that generates counterfactual explanations for HPO results. CF-HPO answers questions such as “what potential improvements could be made,” “what settings should be avoided,” and “what would happen if a setting were changed.” These outputs can serve as a guide, especially for practitioners who are not optimization experts. The proposed system has a modular design that supports different search strategies (UCB-driven, random, restart), allowing it both to perform well during optimization and to produce counterfactual explanations once optimization ends. Experiments on the YAHPO benchmark suite yielded counterfactual validation rates of 92.2% for neural networks and 60.4% for random forests. These findings indicate that counterfactual generability depends on the geometry of the performance surface rather than on its dimensionality.
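For readers unfamiliar with the idea, the following is a minimal sketch, not the paper's actual method or API, of how a counterfactual query over an HPO result can be phrased: given a surrogate of the performance surface, find the configuration closest to an under-performing one whose predicted score reaches a target level. The toy two-dimensional search space, the random-forest surrogate, and all names (`surrogate`, `counterfactual`, `x_factual`, `target`) are illustrative assumptions.

```python
# Hypothetical sketch of a counterfactual query for an HPO result.
# Names and the toy setup are illustrative assumptions, not the paper's API.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy performance surface over two hyperparameters (e.g. learning rate, dropout),
# with its optimum at (0.3, 0.7); higher scores are better.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = -(X[:, 0] - 0.3) ** 2 - (X[:, 1] - 0.7) ** 2

# Surrogate model of the performance surface fitted on observed evaluations.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def counterfactual(x_factual, target, n_candidates=5000, radius=1.0):
    """Return the candidate configuration closest to x_factual whose predicted
    score reaches `target`, or None if no such candidate is found."""
    candidates = np.clip(
        x_factual + rng.uniform(-radius, radius, size=(n_candidates, 2)), 0.0, 1.0
    )
    preds = surrogate.predict(candidates)
    valid = candidates[preds >= target]
    if len(valid) == 0:
        return None
    dists = np.linalg.norm(valid - x_factual, axis=1)
    return valid[np.argmin(dists)]

x_factual = np.array([0.8, 0.2])   # an under-performing configuration
target = np.quantile(y, 0.9)       # score level we would like to reach
print("counterfactual:", counterfactual(x_factual, target))
```

The returned configuration can be read as an explanation of the form "had these hyperparameters been set this way instead, the predicted performance would have reached the target," which is the kind of what-if output the abstract describes.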
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yutian_Chen1
Submission Number: 6846