On Hyperparameter Optimization in Learning Systems

28 Mar 2024 (modified: 21 Feb 2017) · ICLR 2017 workshop submission
Abstract: We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. The reverse-mode procedure extends previous work by Maclaurin et al. (2015) and offers a natural way to insert constraints on the hyperparameters. The forward-mode procedure is suitable for stochastic hyperparameter updates, which may significantly speed up the overall hyperparameter optimization procedure.
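To make the two procedures concrete, here is a minimal sketch (not the authors' code) of computing such a hypergradient by unrolling a toy learner in JAX; the ridge-regression setup, the random data, and all names (train_loss, val_loss_after_training, lam) are illustrative assumptions, not details from the paper.

```python
import jax
import jax.numpy as jnp

# Toy data (assumed): small train/validation splits.
X_tr = jax.random.normal(jax.random.PRNGKey(0), (20, 5))
y_tr = X_tr @ jnp.ones(5)
X_val = jax.random.normal(jax.random.PRNGKey(1), (10, 5))
y_val = X_val @ jnp.ones(5)

def train_loss(w, lam):
    # Training objective: squared error plus L2 penalty,
    # with `lam` playing the role of the hyperparameter.
    return jnp.mean((X_tr @ w - y_tr) ** 2) + lam * jnp.sum(w ** 2)

def val_loss_after_training(lam, steps=100, lr=0.1):
    # Unroll the iterative learning algorithm (plain gradient descent)
    # so the final weights are a differentiable function of `lam`.
    w = jnp.zeros(5)
    for _ in range(steps):
        w = w - lr * jax.grad(train_loss)(w, lam)
    return jnp.mean((X_val @ w - y_val) ** 2)

lam0 = 0.1
# Reverse mode: backpropagate through the whole training trajectory.
rev_grad = jax.grad(val_loss_after_training)(lam0)
# Forward mode: propagate a tangent alongside the training iterations.
_, fwd_grad = jax.jvp(val_loss_after_training, (lam0,), (1.0,))
print(rev_grad, fwd_grad)  # the two hypergradients should agree
```

In this sketch the trade-off from the abstract is visible: reverse mode stores the training trajectory, so memory grows with the number of iterations (as in backpropagation through time), while forward mode carries a tangent forward in constant memory per step, which is what makes per-step, stochastic hyperparameter updates convenient.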
Conflicts: @ucl.ac.uk, @cs.ucl.ac.uk, @iit.it