Training Neural Networks with Stochastic Hessian-Free Optimization

ICLR 2013 conference submission
Decision: Poster (ICLR 2013 conference)
Abstract: Hessian-free (HF) optimization has been successfully used for training deep autoencoders and recurrent networks. HF uses the conjugate gradient algorithm to construct update directions from curvature-vector products that can be computed at a cost on the same order as a gradient evaluation. In this paper we exploit this property and study stochastic HF for classification, using small gradient and curvature mini-batches whose sizes are independent of the dataset size. We modify Martens' HF for this setting and integrate dropout, a method for preventing co-adaptation of feature detectors, to guard against overfitting. On classification tasks, stochastic HF achieves accelerated training and results competitive with dropout SGD, without the need to tune learning rates.
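The sketch below is an illustrative (not the authors') rendering of the conjugate gradient inner loop the abstract refers to: it approximately solves the damped curvature system for an update direction, given a user-supplied curvature-vector product `hvp` (e.g. a Gauss-Newton product evaluated on a small curvature mini-batch) and a gradient `grad` from a small gradient mini-batch. The function names, damping value, and iteration budget are assumptions for illustration only.

```python
import numpy as np

def conjugate_gradient(hvp, grad, damping=1e-2, max_iters=50, tol=1e-10):
    """Approximately solve (B + damping*I) d = -grad for an update direction d.

    hvp(v) must return the curvature-vector product B @ v; in a stochastic
    HF setting it would be evaluated on a small, fixed-size curvature
    mini-batch, while grad comes from a small gradient mini-batch.
    """
    d = np.zeros_like(grad)                 # initial guess for the update
    r = -grad - (hvp(d) + damping * d)      # residual of the linear system
    p = r.copy()                            # initial search direction
    rs_old = r @ r
    for _ in range(max_iters):
        Ap = hvp(p) + damping * p           # damped curvature-vector product
        alpha = rs_old / (p @ Ap)           # step length along p
        d += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:                    # residual small enough: stop early
            break
        p = r + (rs_new / rs_old) * p       # new conjugate direction
        rs_old = rs_new
    return d

# Hypothetical usage: theta is the flattened parameter vector.
# theta += conjugate_gradient(hvp, grad)
```

Because each CG iteration needs only one curvature-vector product, the per-update cost stays on the order of a handful of gradient-like passes over the small mini-batches, which is the property the paper exploits.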
