Stochastic Hyperparameter Optimization through Hypernetworks

12 Feb 2018 (modified: 13 Apr 2025) · ICLR 2018 Workshop Submission
Keywords: hypernetworks, hyperparameter optimization, metalearning, neural networks, Bayesian optimization, game theory, optimization
TL;DR: We train a neural network to output approximately optimal weights as a function of hyperparameters.
Abstract: Machine learning models are often tuned by nesting optimization of model weights inside the optimization of hyperparameters. We give a method to collapse this nested optimization into joint stochastic optimization of weights and hyperparameters. Our process trains a neural network to output approximately optimal weights as a function of hyperparameters. We show that our technique converges to locally optimal weights and hyperparameters for sufficiently large hypernets. We compare this method to standard hyperparameter optimization strategies.
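The abstract's idea can be illustrated on a toy problem. The sketch below is a hypothetical, minimal instance (not the paper's actual implementation): the "hypernet" is a linear map from a single regularization hyperparameter `lam` to a single model weight `w`. It alternates two stochastic steps, (1) training the hypernet parameters `phi` to minimize the regularized training loss at hyperparameters sampled near the current value, and (2) descending the validation loss with respect to `lam` through the hypernet. All names, data, and the learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (train/validation split); true slope is 2.0.
x_tr = rng.normal(size=50); y_tr = 2.0 * x_tr + rng.normal(scale=0.5, size=50)
x_va = rng.normal(size=50); y_va = 2.0 * x_va + rng.normal(scale=0.5, size=50)

def val_loss(w):
    return np.mean((x_va * w - y_va) ** 2)

# "Hypernet": a linear map from the hyperparameter lam to a weight w(lam).
phi = np.zeros(2)                       # w(lam) = phi[0] + phi[1] * lam
def hypernet(phi, lam):
    return phi[0] + phi[1] * lam

lam, lr_phi, lr_lam = 0.0, 0.05, 0.05
for step in range(2000):
    # (1) Hypernet step: sample lam near its current value, then descend the
    #     regularized training loss  mean((x*w - y)^2) + exp(lam) * w^2.
    lam_s = lam + rng.normal(scale=0.5)
    w = hypernet(phi, lam_s)
    dL_dw = 2 * np.mean(x_tr * (x_tr * w - y_tr)) + 2 * np.exp(lam_s) * w
    phi -= lr_phi * dL_dw * np.array([1.0, lam_s])  # chain rule: dw/dphi = [1, lam_s]
    # (2) Hyperparameter step: descend validation loss through the hypernet.
    w = hypernet(phi, lam)
    dV_dw = 2 * np.mean(x_va * (x_va * w - y_va))
    lam -= lr_lam * dV_dw * phi[1]                  # dV/dlam = dV/dw * dw/dlam

w_final = hypernet(phi, lam)
print(f"lam={lam:.3f}  w={w_final:.3f}  val_loss={val_loss(w_final):.4f}")
```

Because the hypernet only needs to be accurate near the current hyperparameter, sampling `lam_s` from a local distribution (rather than fitting the whole best-response surface) keeps the joint optimization cheap; this mirrors the collapse of nested optimization into a single stochastic loop described above.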