Keywords: continuous reinforcement learning, deep Q-learning, optimal control problems, normalized advantage functions
Abstract: One of the most effective continuous deep reinforcement learning algorithms is normalized advantage functions (NAF). The main idea of NAF is to approximate the Q-function by functions that are quadratic with respect to the action variable. This approximation makes the algorithm applicable to continuous reinforcement learning problems, but it also raises the question of which classes of problems admit it. The present paper describes one such class: reinforcement learning problems obtained by the time-discretization of certain optimal control problems. Building on the idea of NAF, we introduce a new family of quadratic functions and prove that it has suitable approximation properties. Taking these properties into account, we propose several ways to improve NAF. Experimental results confirm the efficiency of our improvements.
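For context, the standard NAF parameterization (Gu et al., 2016) writes the Q-function as

Q(s, a) = V(s) + A(s, a),  A(s, a) = -\tfrac{1}{2}\,(a - \mu(s))^\top P(s)\,(a - \mu(s)),  P(s) = L(s)\,L(s)^\top,

where L(s) is lower triangular with a positive diagonal, so P(s) is positive definite and Q(s, ·) is a concave quadratic in the action, maximized at a = \mu(s). The Python sketch below illustrates only this standard parameterization, not the new family of quadratic functions proposed in the paper; the class name NAFHead and the hidden-layer size are hypothetical.

import torch
import torch.nn as nn

class NAFHead(nn.Module):
    """Quadratic Q-head in the spirit of NAF (illustrative sketch only)."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.action_dim = action_dim
        self.base = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.v = nn.Linear(hidden, 1)                                   # V(s)
        self.mu = nn.Linear(hidden, action_dim)                         # mu(s)
        self.l = nn.Linear(hidden, action_dim * (action_dim + 1) // 2)  # entries of L(s)

    def forward(self, state, action):
        h = self.base(state)
        v, mu = self.v(h), self.mu(h)
        # Assemble lower-triangular L(s); exponentiating the diagonal keeps
        # P(s) = L(s) L(s)^T positive definite.
        L = torch.zeros(state.shape[0], self.action_dim, self.action_dim,
                        device=state.device)
        rows, cols = torch.tril_indices(self.action_dim, self.action_dim)
        L[:, rows, cols] = self.l(h)
        idx = torch.arange(self.action_dim)
        L[:, idx, idx] = L[:, idx, idx].exp()
        P = L @ L.transpose(1, 2)
        # Quadratic advantage A(s, a) = -1/2 (a - mu)^T P (a - mu) <= 0.
        d = (action - mu).unsqueeze(-1)
        adv = -0.5 * (d.transpose(1, 2) @ P @ d).squeeze(-1)
        return v + adv  # Q(s, a), maximized at a = mu(s)

Because A(s, a) <= 0 with equality at a = \mu(s), the greedy action is available in closed form, which is what makes the quadratic restriction attractive for continuous control.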
One-sentence Summary: We propose several modifications of the NAF algorithm for continuous reinforcement learning problems arising from optimal control problems.
Supplementary Material: zip