Keywords: hypernetworks, hyperparameter optimization, metalearning, neural networks, Bayesian optimization, game theory, optimization
TL;DR: We train a neural network to output approximately optimal weights as a function of hyperparameters.
Abstract: Machine learning models are often tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of weights and hyperparameters. Our process trains a neural network to output approximately optimal weights as a function of hyperparameters. We show that our technique converges to locally optimal weights and hyperparameters for sufficiently large hypernets. We compare this method to standard hyperparameter optimization strategies.
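Below is a minimal sketch of the joint training loop the abstract describes, written in JAX. It assumes a toy linear-regression model whose single hyperparameter is a (log) L2 penalty; a small hypernet maps that hyperparameter to the model weights. All names (`hyper_net`, `train_loss`, `val_loss`), the toy data, and the alternating update scheme are illustrative assumptions, not code from the paper.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)

# Toy data: train/validation splits for a 5-dimensional linear problem.
x_train = jax.random.normal(k1, (100, 5))
y_train = x_train @ jnp.arange(1.0, 6.0) + 0.1 * jax.random.normal(k2, (100,))
x_val = jax.random.normal(k3, (50, 5))
y_val = x_val @ jnp.arange(1.0, 6.0)

def hyper_net(phi, lam):
    """Hypernet: maps the scalar hyperparameter to the model's weights."""
    h = jnp.tanh(phi["W1"] * lam + phi["b1"])   # hidden layer, width 16
    return phi["W2"] @ h + phi["b2"]            # output: 5 model weights

def train_loss(phi, lam):
    """Regularized training loss of the weights the hypernet outputs."""
    w = hyper_net(phi, lam)
    resid = x_train @ w - y_train
    return jnp.mean(resid ** 2) + jnp.exp(lam) * jnp.sum(w ** 2)

def val_loss(phi, lam):
    """Unregularized validation loss, used to tune the hyperparameter."""
    w = hyper_net(phi, lam)
    return jnp.mean((x_val @ w - y_val) ** 2)

phi = {
    "W1": 0.1 * jax.random.normal(k1, (16,)),
    "b1": jnp.zeros(16),
    "W2": 0.1 * jax.random.normal(k2, (5, 16)),
    "b2": jnp.zeros(5),
}
lam = jnp.array(0.0)  # log of the L2 penalty

grad_phi = jax.jit(jax.grad(train_loss, argnums=0))
grad_lam = jax.jit(jax.grad(val_loss, argnums=1))

for step in range(2000):
    # Hypernet step: fit weights for a hyperparameter sampled near lam,
    # so the hypernet stays approximately optimal in a local neighborhood.
    lam_sample = lam + 0.1 * jax.random.normal(jax.random.PRNGKey(step))
    g = grad_phi(phi, lam_sample)
    phi = jax.tree_util.tree_map(lambda p, dp: p - 0.01 * dp, phi, g)
    # Hyperparameter step: descend the validation loss through the hypernet.
    lam = lam - 0.01 * grad_lam(phi, lam)

print("tuned log-penalty:", lam)
```

The two gradient steps replace the usual nested loop (fully train the model, then update the hyperparameter): because `hyper_net` gives approximately optimal weights as a function of `lam`, the validation gradient with respect to `lam` can be taken directly through it.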
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/stochastic-hyperparameter-optimization/code)