Keywords: Hyperparameter Optimization, Hyperparameter Importance, Greedy Importance First, Bayesian Optimization, AutoML, Deep Learning
TL;DR: We propose Greedy Importance First (GIF), an importance-aware scheduling strategy that improves sample-efficient hyperparameter optimization in high-dimensional spaces.
Abstract: Hyperparameter Optimization (HPO) is essential for building high-performing ML/DL models, yet conventional optimizers often struggle in high-dimensional spaces where evaluations are costly and progress is diluted across many low-impact variables. We propose Greedy Importance First (GIF), an importance-aware scheduling strategy that uses a small-sample warm start to estimate per-hyperparameter importance, forms importance-driven groups, allocates budget proportionally, and retains a full-space fallback. Under fixed evaluation budgets, we study GIF on diverse benchmarks—five asymmetric high-dimensional analytic functions ($d\in\{5,10,30,50\}$), Bayesmark, and NAS-Bench-301 (33D). GIF consistently attains faster convergence and stronger final incumbents than baselines (TPE, BOHB, Random Search, and Sequential Grouping) in higher-dimensional settings; on Bayesmark, where the effective dimensionality is smaller, GIF remains competitive, but the margins are modest. Ablations confirm the value of importance estimates, proportional allocation, and the full-space fallback. Our Hyperparameter Importance Assessment (HIA) also recovers the intended anisotropy on those asymmetric analytic functions. Overall, GIF offers a simple, plug-compatible approach for more sample-efficient HPO in high-dimensional spaces, with potential relevance to deep-model tuning and large-scale AutoML.
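The abstract's scheduling loop (warm-start importance estimates, importance-driven grouping, proportional budget allocation, and a full-space fallback) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `gif_schedule`, the grouping rule (equal-size groups from the importance ranking), and the `fallback_frac` parameter are all assumptions made for illustration.

```python
def gif_schedule(importances, total_budget, fallback_frac=0.1, n_groups=2):
    """Split an evaluation budget across importance-driven hyperparameter
    groups, reserving a share for full-space search (hypothetical sketch)."""
    # Reserve a fallback slice of the budget for full-space optimization,
    # mirroring the full-space fallback described in the abstract.
    fallback = int(total_budget * fallback_frac)
    remaining = total_budget - fallback

    # Rank hyperparameters by their warm-start importance estimates.
    ranked = sorted(importances, key=importances.get, reverse=True)

    # Form contiguous groups of roughly equal size from the ranking
    # (one plausible grouping rule; the paper's rule may differ).
    size = -(-len(ranked) // n_groups)  # ceiling division
    groups = [ranked[i:i + size] for i in range(0, len(ranked), size)]

    # Allocate the remaining budget proportionally to summed group importance.
    weights = [sum(importances[h] for h in g) for g in groups]
    total_w = sum(weights)
    budgets = [int(remaining * w / total_w) for w in weights]
    budgets[0] += remaining - sum(budgets)  # assign rounding remainder

    return list(zip(groups, budgets)), fallback
```

For example, with importances `{"lr": 6, "wd": 3, "bs": 1}` and a budget of 100 evaluations, the dominant group `["lr", "wd"]` receives most of the non-fallback budget, while 10 evaluations stay reserved for full-space search.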
Supplementary Material: zip
Primary Area: optimization
Submission Number: 22299