Model-based RL with Optimistic Posterior Sampling: Structural Conditions and Sample Complexity

Published: 31 Oct 2022, Last Modified: 11 Oct 2022
NeurIPS 2022 Accept
Keywords: Reinforcement Learning, Model-based RL, Sample Complexity
TL;DR: We develop a general framework for model-based RL with optimistic posterior sampling, and a decoupling condition to bound the worst-case sample complexity of this algorithm.
Abstract: We propose a general framework for designing posterior sampling methods for model-based RL. We show that the proposed algorithms can be analyzed by reducing regret to Hellinger distance in conditional probability estimation. We further show that optimistic posterior sampling can control this Hellinger distance when model error is measured via data likelihood. This technique allows us to design and analyze unified posterior sampling algorithms with state-of-the-art sample complexity guarantees for many model-based RL settings. We illustrate our general result in a range of special cases, demonstrating the versatility of our framework.
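To make the idea concrete, here is a minimal, hypothetical sketch of optimistic posterior sampling on a toy two-armed Bernoulli bandit. The model class `MODELS`, the true means `TRUE`, and the optimism temperature `ETA` are illustrative assumptions, not the paper's construction; the sketch only shows the general recipe of weighting each candidate model by its data likelihood times an optimism bonus tied to the model's optimal value, then acting greedily under the sampled model.

```python
import math
import random

random.seed(0)

# Hypothetical model class: each model is a pair of mean rewards for two arms.
MODELS = [(0.9, 0.1), (0.1, 0.9)]
TRUE = (0.9, 0.1)   # ground-truth arm means (assumed for the demo)
ETA = 1.0           # optimism temperature (assumed)

def log_likelihood(model, data):
    """Log-likelihood of observed (arm, reward) pairs under a candidate model."""
    ll = 0.0
    for arm, r in data:
        p = model[arm]
        ll += math.log(p if r == 1 else 1.0 - p)
    return ll

def optimistic_posterior_sample(data):
    """Sample a model with weight proportional to likelihood * exp(ETA * V*(model))."""
    # Work in log space and subtract the max to avoid underflow.
    logs = [log_likelihood(m, data) + ETA * max(m) for m in MODELS]
    mx = max(logs)
    weights = [math.exp(l - mx) for l in logs]
    u, acc = random.random() * sum(weights), 0.0
    for m, w in zip(MODELS, weights):
        acc += w
        if u <= acc:
            return m
    return MODELS[-1]

def run(T=200):
    """Interact for T rounds; return the average observed reward."""
    data, total = [], 0
    for _ in range(T):
        model = optimistic_posterior_sample(data)
        arm = max(range(2), key=lambda a: model[a])   # act greedily under the sample
        r = 1 if random.random() < TRUE[arm] else 0
        data.append((arm, r))
        total += r
    return total / T

print(run())
```

Because each pull of the suboptimal arm sharply reduces the wrong model's likelihood, the sampled posterior concentrates on the true model within a few rounds, and the average reward approaches the optimal arm's mean of 0.9.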
Supplementary Material: pdf