Abstract: The resilient backpropagation (Rprop) algorithms are fast and accurate batch learning methods for neural networks. We describe their implementation in the popular machine learning framework TensorFlow. We present the first empirical evaluation of Rprop for training recurrent neural networks with gated recurrent units. In our experiments, Rprop with default hyperparameters outperformed vanilla steepest descent as well as the optimization algorithms RMSprop and Adam, even when their hyperparameters were tuned.
TL;DR: For batch learning in TensorFlow, Rprop is at the moment the method of choice and the times of fiddling around with learning rates etc. are over.
Keywords: TensorFlow, Resilient Backpropagation, Rprop, gradient-based optimization, learning algorithm
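For readers unfamiliar with the method, the core of Rprop is a sign-based update with per-parameter adaptive step sizes. Below is a minimal NumPy sketch of the classic Rprop- variant with its commonly cited default hyperparameters; it is an illustration of the general algorithm, not the paper's TensorFlow implementation.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One Rprop- step. All arguments are NumPy arrays of the same shape.

    Illustrative sketch only; hyperparameter defaults are the commonly
    cited Rprop values, not necessarily those used in the paper.
    """
    sign_change = grad * prev_grad
    # Grow each step size where the gradient sign is stable,
    # shrink it where the sign flipped (we overshot a minimum).
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    # Rprop- zeroes the stored gradient after a sign flip
    # so the next update skips the direction test.
    grad = np.where(sign_change < 0, 0.0, grad)
    # Update uses only the gradient's sign; magnitudes never enter.
    w = w - np.sign(grad) * step
    return w, grad, step
```

Because only the sign of the gradient is used, the method is insensitive to gradient magnitudes, which is why no global learning rate needs tuning.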