Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters

Jelena Luketina, Mathias Berglund, Tapani Raiko

Feb 18, 2016 (modified: Feb 18, 2016) · ICLR 2016 workshop submission · Readers: everyone
  • CMT id: 339
  • Abstract: Hyperparameter selection generally relies on running multiple full training trials, with selection based on validation set performance. We propose a gradient-based approach for locally adjusting hyperparameters during training of the model. Hyperparameters are adjusted so as to make the model parameter gradients, and hence updates, more advantageous for the validation cost (see the sketch after this list). We explore the approach for tuning regularization hyperparameters and find that in experiments on MNIST the resulting regularization levels are within the optimal regions. The method is significantly less computationally demanding than similar gradient-based approaches to hyperparameter optimization and consistently finds good hyperparameter values, which makes it a useful tool for training neural network models.
  • Conflicts: aalto.fi, cai.fi
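
Below is a minimal sketch in JAX of the idea described in the abstract, not the authors' implementation: a continuous regularization hyperparameter (here an L2 penalty weight `lam`) is adjusted by differentiating the validation cost of a simulated one-step parameter update with respect to that hyperparameter. The names (`train_loss`, `val_loss`, `lookahead_val`), the quadratic model, and the learning rates `LR` and `HYPER_LR` are illustrative assumptions, not taken from the paper.

```python
import jax
import jax.numpy as jnp

# Illustrative losses: the training cost carries an L2 penalty weighted by
# the hyperparameter lam; the validation cost measures data fit only.
def train_loss(params, lam, x, y):
    pred = x @ params
    return jnp.mean((pred - y) ** 2) + lam * jnp.sum(params ** 2)

def val_loss(params, x, y):
    pred = x @ params
    return jnp.mean((pred - y) ** 2)

LR = 1e-2        # assumed parameter learning rate
HYPER_LR = 1e-3  # assumed hyperparameter learning rate

def lookahead_val(lam, params, xb, yb, xv, yv):
    # Simulate one SGD step on the training cost, then evaluate the
    # validation cost at the updated parameters.
    grads = jax.grad(train_loss)(params, lam, xb, yb)
    return val_loss(params - LR * grads, xv, yv)

@jax.jit
def step(params, lam, xb, yb, xv, yv):
    # Hypergradient: how the post-update validation cost changes with lam.
    hyper_grad = jax.grad(lookahead_val)(lam, params, xb, yb, xv, yv)
    # Gradient step on lam, clamped so the penalty weight stays non-negative.
    lam = jnp.maximum(lam - HYPER_LR * hyper_grad, 0.0)
    # Ordinary parameter update with the freshly adjusted hyperparameter.
    grads = jax.grad(train_loss)(params, lam, xb, yb)
    return params - LR * grads, lam
```

In a training loop, `step` would be called once per minibatch with a training batch `(xb, yb)` and a validation batch `(xv, yv)`, so the hyperparameter is tuned locally during training rather than through repeated full training trials.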
