Asymmetric DQN for Partially Observable Reinforcement Learning

Published: 20 May 2022, Last Modified: 05 May 2023
Venue: UAI 2022 Poster
Keywords: Reinforcement Learning, Partial Observability, Offline Training, Privileged Information, Asymmetry, DQN
Abstract: Offline training in simulated partially observable environments allows reinforcement learning methods to exploit privileged state information through a mechanism known as asymmetry. Such privileged information, if used appropriately, has the potential to greatly improve optimal convergence properties. However, current research in asymmetric reinforcement learning is largely heuristic, offers few connections to underlying theory or theoretical guarantees, and is validated primarily through empirical evaluation. In this work, we develop the theory of \emph{asymmetric policy iteration}, an exact model-based dynamic programming solution method, and then apply relaxations which eventually result in \emph{asymmetric DQN}, a model-free deep reinforcement learning algorithm. Our theoretical findings are complemented and validated by empirical experiments in environments that exhibit significant partial observability and require both information-gathering strategies and memorization.
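
The abstract names asymmetric DQN but does not spell out its update rule. Purely as an illustrative sketch, assuming the asymmetry takes the form of a training-time critic that conditions on privileged simulator state while the deployed Q-network sees only the agent's observation history, a DQN-style target computation might look as follows. All names here (HistoryQ, AsymmetricCritic, asymmetric_td_targets) are hypothetical and not taken from the paper.

```python
# Hedged sketch of asymmetric value-based learning: the acting Q-network uses
# only the agent-observable history, while an auxiliary critic used to form
# bootstrap targets also receives privileged simulator state available offline.
# This is an assumption-based illustration, not the paper's exact algorithm.
import torch
import torch.nn as nn


class HistoryQ(nn.Module):
    """Q(h, a): acts from an encoding of the agent-observable history only."""
    def __init__(self, hist_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hist_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, h):
        return self.net(h)


class AsymmetricCritic(nn.Module):
    """Training-time critic with access to privileged state s and history h."""
    def __init__(self, state_dim, hist_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + hist_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, s, h):
        return self.net(torch.cat([s, h], dim=-1))


def asymmetric_td_targets(critic, reward, next_state, next_hist, done, gamma=0.99):
    # Bootstrap from the privileged critic; the deployed policy never sees next_state.
    with torch.no_grad():
        next_q = critic(next_state, next_hist).max(dim=-1).values
        return reward + gamma * (1.0 - done) * next_q


# Usage on a dummy batch of transitions.
q_net = HistoryQ(hist_dim=8, n_actions=4)
critic = AsymmetricCritic(state_dim=6, hist_dim=8, n_actions=4)
batch = 32
h, s2, h2 = torch.randn(batch, 8), torch.randn(batch, 6), torch.randn(batch, 8)
a = torch.randint(0, 4, (batch,))
r, d = torch.randn(batch), torch.zeros(batch)
targets = asymmetric_td_targets(critic, r, s2, h2, d)
q_sa = q_net(h).gather(1, a.unsqueeze(1)).squeeze(1)
loss = nn.functional.mse_loss(q_sa, targets)
loss.backward()
```

In this sketch the privileged critic is used only to construct bootstrap targets, so nothing evaluated at deployment time depends on the simulator state; only HistoryQ is needed to act.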