Representing Entropy: A short proof of the equivalence between soft Q-learning and policy gradients

15 Feb 2018 (modified: 15 Feb 2018) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven to be equivalent when a softmax relaxation is applied on the Q-learning side and an entropic regularization on the policy-gradient side. We relate this result to the well-known convex duality between Shannon entropy and the softmax function, a duality also known as the Donsker-Varadhan formula. This yields a short proof of the equivalence. We then interpret this duality further and use ideas from convex analysis to prove a new policy inequality relative to soft Q-learning.
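For reference, one standard statement of the duality the abstract invokes (the notation below is illustrative and not taken from the paper): for a temperature $\tau > 0$ and an action-value function $Q$ over a finite action set $\mathcal{A}$, the log-sum-exp (softmax) operator is the convex conjugate of negative Shannon entropy,

$$
\tau \log \sum_{a \in \mathcal{A}} \exp\!\big(Q(a)/\tau\big)
\;=\;
\max_{\pi \in \Delta(\mathcal{A})} \Big[\, \mathbb{E}_{a \sim \pi}\big[Q(a)\big] + \tau\, \mathcal{H}(\pi) \,\Big],
\qquad
\pi^{*}(a) \;\propto\; \exp\!\big(Q(a)/\tau\big),
$$

where $\mathcal{H}(\pi) = -\sum_{a} \pi(a) \log \pi(a)$ and the maximizer $\pi^{*}$ is the softmax (Boltzmann) policy. This identity is what links entropy-regularized policy optimization to soft Q-values, which is the equivalence the paper's short proof rests on.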
TL;DR: A short proof of the equivalence of soft Q-learning and policy gradients.
Keywords: soft Q-learning, policy gradients, entropy, Legendre transformation, duality, convex analysis, Donsker-Varadhan
