On Hyperparameter Optimization in Learning Systems

Luca Franceschi, Michele Donini, Paolo Frasconi, Massimiliano Pontil

Feb 17, 2017 (modified: Feb 21, 2017) · ICLR 2017 workshop submission · Readers: everyone
  • Abstract: We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. The reverse-mode procedure extends previous work by Maclaurin et al. (2015) and offers the opportunity to insert constraints on the hyperparameters in a natural way. The forward-mode procedure is suitable for stochastic hyperparameter updates, which may significantly speed up the overall hyperparameter optimization procedure. (A minimal illustrative sketch of the forward-mode recursion is given below.)
  • Conflicts: @ucl.ac.uk, @cs.ucl.ac.uk, @iit.it
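To make the forward-mode procedure concrete, the sketch below propagates the derivative z_t = dw_t/dλ of the parameters with respect to a single scalar hyperparameter (here, the learning rate) alongside plain gradient descent, and forms the hypergradient of the validation error at the end. The quadratic objectives, the step size, the horizon, and all variable names are illustrative assumptions for this sketch, not taken from the paper.

```python
import numpy as np

# Hypothetical quadratic objectives (illustrative, not from the paper):
#   L_train(w) = 0.5 * w^T A w - b^T w,   E_val(w) = 0.5 * ||w - w_val||^2
rng = np.random.default_rng(0)
d = 5
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))
A = A @ A.T                      # symmetric positive definite
b = rng.standard_normal(d)
w_val = rng.standard_normal(d)

def grad_train(w):               # gradient of the training loss
    return A @ w - b

def hess_train(w):               # Hessian of the training loss (constant here)
    return A

def grad_val(w):                 # gradient of the validation error
    return w - w_val

lr, T = 0.05, 100                # hyperparameter (learning rate) and horizon
w = np.zeros(d)
z = np.zeros(d)                  # z_t = dw_t / d(lr), propagated forward

for _ in range(T):
    g = grad_train(w)            # evaluate at the current iterate w_t
    H = hess_train(w)
    # Forward-mode recursion: z_{t+1} = (I - lr * H) z_t - g
    z = z - lr * (H @ z) - g
    # Parameter update: w_{t+1} = w_t - lr * g
    w = w - lr * g

# Hypergradient: dE_val(w_T)/d(lr) = grad_val(w_T)^T z_T
hypergrad = grad_val(w) @ z
print("forward-mode hypergradient:", hypergrad)

# Finite-difference sanity check (illustrative)
eps = 1e-5
def final_val_loss(lr_):
    w_ = np.zeros(d)
    for _ in range(T):
        w_ = w_ - lr_ * grad_train(w_)
    return 0.5 * np.sum((w_ - w_val) ** 2)
print("finite difference:         ",
      (final_val_loss(lr + eps) - final_val_loss(lr - eps)) / (2 * eps))
```

With one scalar hyperparameter, the forward recursion carries only the extra vector z alongside w, so memory does not grow with the number of iterations T; by contrast, a reverse-mode procedure in the spirit of Maclaurin et al. (2015) traverses the optimization trajectory backwards, so its memory cost grows with T. This is the running-time versus space trade-off the abstract refers to.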
