Global optimality of softmax policy gradient with single hidden layer neural networks in the mean-field regime

Published: 12 Jan 2021, Last Modified: 05 May 2023 (ICLR 2021 Poster)
Keywords: policy gradient, entropy regularization, mean-field dynamics, neural networks
Abstract: We study policy optimization for infinite-horizon discounted Markov Decision Processes with a softmax policy and nonlinear function approximation trained with policy gradient algorithms. We concentrate on the training dynamics in the mean-field regime, which models, for example, the behavior of wide single-hidden-layer neural networks when exploration is encouraged through entropy regularization. The dynamics of these models are established as a Wasserstein gradient flow of distributions in parameter space. We further prove global optimality of the fixed points of these dynamics under mild conditions on their initialization.
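Illustration (a schematic sketch, not the paper's exact formulation; the notation below, including the single-unit feature map $\varphi$, the regularization weight $\lambda$, and the sign conventions, is assumed): in the mean-field regime the policy is parameterized by a distribution $\rho$ over the hidden-unit parameters,
\[
\pi_\rho(a \mid s) = \frac{\exp\big(f_\rho(s,a)\big)}{\sum_{a'} \exp\big(f_\rho(s,a')\big)},
\qquad
f_\rho(s,a) = \int \varphi(\theta; s, a)\, \mathrm{d}\rho(\theta),
\]
where $\varphi(\theta; s, a)$ denotes the output of one hidden unit with parameters $\theta$. The entropy-regularized objective is
\[
V_\lambda(\rho) = \mathbb{E}\Big[\sum_{t \ge 0} \gamma^{t}\big(r(s_t, a_t) - \lambda \log \pi_\rho(a_t \mid s_t)\big)\Big],
\]
and the training dynamics are modeled as the Wasserstein gradient flow ascending $V_\lambda$,
\[
\partial_t \rho_t = -\nabla_\theta \cdot \Big(\rho_t \, \nabla_\theta \frac{\delta V_\lambda}{\delta \rho}[\rho_t](\theta)\Big),
\]
whose fixed points are the objects proved to be globally optimal under mild conditions on the initialization $\rho_0$.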
One-sentence Summary: We show that softmax policy gradient algorithms with single hidden layer neural networks in the mean-field regime can be expressed as a gradient flow in Wasserstein space, and we prove that all fixed points of these dynamics are global optimizers.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics