Practical Hyperparameter Optimization for Deep Learning

Stefan Falkner, Aaron Klein, Frank Hutter

Feb 12, 2018 (modified: Jun 04, 2018), ICLR 2018 Workshop Submission
  • Abstract: Recently, the bandit-based strategy Hyperband (HB) was shown to yield good hyperparameter settings for deep neural networks faster than vanilla Bayesian optimization (BO). However, for larger budgets, HB is limited by its random search component, and BO works better. We propose to combine the benefits of both approaches to obtain a new practical state-of-the-art hyperparameter optimization method, which we show to consistently outperform both HB and BO on a range of problem types, including feed-forward neural networks, Bayesian neural networks, and deep reinforcement learning. Our method is robust and versatile, while at the same time being conceptually simple and easy to implement. (A minimal illustrative sketch of the combined approach follows below.)
  • Keywords: Hyperparameter optimization, Bayesian optimization
  • TL;DR: We combine Bayesian optimization and Hyperband to obtain a practical hyperparameter optimizer that consistently outperforms both of them.
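
As a rough illustration of the combination described in the abstract, the sketch below runs Hyperband's successive-halving brackets but draws part of its configurations from a simple model of past results. All function names, the search-space format, and the perturbation-based sampler are assumptions for illustration only; the paper's method fits a proper density model over previously evaluated configurations rather than perturbing the incumbent.

```python
# Illustrative sketch (not the authors' implementation): Hyperband's budget
# allocation, with its random sampling partly replaced by a model-based
# sampler fit on results observed so far.

import math
import random

def sample_configuration(history, space, random_fraction=0.3):
    """Sample a config: usually near the best seen so far, sometimes at random."""
    if not history or random.random() < random_fraction:
        # Random-search component, as in vanilla Hyperband.
        return {name: random.uniform(lo, hi) for name, (lo, hi) in space.items()}
    # Model-based component (a crude stand-in for the paper's density model):
    # perturb the best configuration observed so far, clamped to the bounds.
    best_cfg, _ = min(history, key=lambda t: t[1])
    return {name: min(max(random.gauss(best_cfg[name], 0.1 * (hi - lo)), lo), hi)
            for name, (lo, hi) in space.items()}

def successive_halving(configs, evaluate, budget, max_budget, eta=3):
    """Evaluate configs on growing budgets, keeping the best 1/eta each round."""
    results = []
    while configs and budget <= max_budget:
        scored = [(cfg, evaluate(cfg, budget)) for cfg in configs]
        results.extend(scored)
        scored.sort(key=lambda t: t[1])  # lower loss is better
        configs = [cfg for cfg, _ in scored[:max(1, len(scored) // eta)]]
        budget *= eta
    return results

def hb_with_model(evaluate, space, min_budget=1, max_budget=81, eta=3, n_iterations=4):
    """Outer Hyperband loop; configurations come from sample_configuration."""
    history = []
    # Number of brackets; round() guards against floating-point log error.
    s_max = int(round(math.log(max_budget / min_budget, eta)))
    for _ in range(n_iterations):
        for s in reversed(range(s_max + 1)):
            n = math.ceil((s_max + 1) * eta ** s / (s + 1))
            b = max_budget / eta ** s  # starting budget of this bracket
            configs = [sample_configuration(history, space) for _ in range(n)]
            history += successive_halving(configs, evaluate, b, max_budget, eta)
    return min(history, key=lambda t: t[1])

# Toy usage: in a real setting, 'budget' would be e.g. training epochs.
space = {"lr": (1e-4, 1e-1), "dropout": (0.0, 0.5)}
toy_loss = lambda cfg, budget: (abs(cfg["lr"] - 0.01)
                                + abs(cfg["dropout"] - 0.2) / math.sqrt(budget))
best_cfg, best_loss = hb_with_model(toy_loss, space)
print(best_cfg, best_loss)
```

Keeping a fixed random fraction preserves Hyperband's exploratory behavior even when the model is misled, while the model-based draws focus the remaining evaluations on promising regions of the search space.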