Dynamic and Efficient Gray-Box Hyperparameter Optimization for Deep Learning

Published: 16 May 2022, Last Modified: 05 May 2023. AutoML 2022 (Late-Breaking Workshop)
Abstract: Gray-box hyperparameter optimization techniques have recently emerged as a promising direction for tuning Deep Learning methods. In this work, we introduce DyHPO, a method that learns to dynamically decide which configuration to try next, and for what budget. Our technique is a modification of classical Bayesian optimization for a gray-box setup. Concretely, we propose a new Gaussian Process surrogate that embeds the learning curve dynamics, and a new acquisition function that incorporates multi-budget information. We demonstrate the significant superiority of DyHPO over state-of-the-art hyperparameter optimization baselines through large-scale experiments comprising 50 datasets (Tabular, Image, NLP) and diverse neural networks (MLP, CNN/NAS, RNN).
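The dynamic selection loop the abstract describes, i.e. repeatedly choosing a (configuration, budget) pair rather than a configuration alone, can be sketched with a plain GP surrogate over joint (config, normalized budget) inputs and expected improvement at the next budget step. This is a minimal illustrative sketch, not the paper's method: DyHPO's actual surrogate and acquisition differ, and all names here (`select_next`, `MAX_BUDGET`, the RBF kernel, the budget normalization) are assumptions made for the example.

```python
import numpy as np
from math import erf, sqrt

MAX_BUDGET = 10  # assumed maximum budget (e.g. epochs); illustrative only


def _norm_pdf(z):
    return np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)


_norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))


def rbf(A, B, length_scale=1.0):
    """Squared-exponential kernel over joint (config, budget) inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)


def gp_predict(X, y, Xs, noise=1e-4):
    """Standard GP regression posterior mean and std at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y
    # diag of the posterior covariance; prior variance is 1 for this kernel
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))


def select_next(observations, fresh_configs):
    """Pick the (config, budget) pair with the highest expected improvement.

    observations: list of (config_vector, budget, validation_loss) triples.
    fresh_configs: unevaluated config vectors, considered at budget 1.
    Partially trained configs are considered at their next budget step.
    """
    X = np.array([np.append(c, b / MAX_BUDGET) for c, b, _ in observations])
    y = np.array([loss for _, _, loss in observations])
    best = y.min()

    candidates = [(np.asarray(c), 1) for c in fresh_configs]
    highest_budget = {}
    for c, b, _ in observations:
        key = tuple(c)
        highest_budget[key] = max(highest_budget.get(key, 0), b)
    for key, b in highest_budget.items():
        if b < MAX_BUDGET:
            candidates.append((np.array(key), b + 1))

    Xs = np.array([np.append(c, b / MAX_BUDGET) for c, b in candidates])
    mu, sigma = gp_predict(X, y, Xs)
    z = (best - mu) / sigma  # minimizing loss, so improvement is best - mu
    ei = (best - mu) * _norm_cdf(z) + sigma * _norm_pdf(z)
    return candidates[int(np.argmax(ei))]
```

A usage sketch: after each short training run, append the new (config, budget, loss) triple to `observations` and call `select_next` again, so promising configurations are gradually promoted to larger budgets while poor ones are abandoned early.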
Keywords: hyperparameter optimization, multi-fidelity
One-sentence Summary: Efficient hyperparameter optimization of deep learning methods via dynamic exploration of hyperparameter settings while gradually increasing the budget.
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: N/A