Resilient Backpropagation (Rprop) for Batch-learning in TensorFlow

12 Feb 2018 (modified: 06 Sep 2018) · ICLR 2018 Workshop Submission · Readers: Everyone
  • Keywords: TensorFlow, Resilient Backpropagation, Rprop, gradient-based optimization, learning algorithm
  • TL;DR: For batch learning in TensorFlow, Rprop is currently the method of choice; the days of fiddling with learning rates are over.
  • Abstract: The resilient backpropagation (Rprop) algorithms are fast and accurate batch learning methods for neural networks. We describe their implementation in the popular machine learning framework TensorFlow. We present the first empirical evaluation of Rprop for training recurrent neural networks with gated recurrent units. In our experiments, Rprop with default hyperparameters outperformed vanilla steepest descent as well as the optimization algorithms RMSprop and Adam even if their hyperparameters were tuned.
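The abstract does not spell out the update rule, but the defining idea of Rprop is that each weight keeps its own step size, adapted from the *sign* of successive gradients rather than their magnitude, which is why no global learning rate needs tuning. A minimal NumPy sketch of one variant (iRprop-, without weight-backtracking) is given below; the function name, parameter defaults, and array-based interface are illustrative assumptions, not the paper's TensorFlow implementation:

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One iRprop- step (illustrative sketch, not the paper's code).

    Each weight has its own step size: it grows by eta_plus while the
    gradient sign is stable and shrinks by eta_minus when it flips.
    Only the sign of the gradient determines the update direction.
    """
    same_sign = grad * prev_grad
    # Gradient kept its sign: accelerate, capped at step_max.
    step = np.where(same_sign > 0, np.minimum(step * eta_plus, step_max), step)
    # Gradient flipped sign (we overshot a minimum): brake, floored at step_min.
    step = np.where(same_sign < 0, np.maximum(step * eta_minus, step_min), step)
    # iRprop-: zero the gradient where the sign flipped, skipping that update.
    grad = np.where(same_sign < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step
```

For example, iterating this rule on a simple quadratic loss drives the weight toward the minimum: the per-weight step grows geometrically at first, then halves on each overshoot, homing in without any hand-set learning rate.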