On the Theory of Reinforcement Learning with Once-per-Episode Feedback

Published: 09 Nov 2021, Last Modified: 05 May 2023
Venue: NeurIPS 2021 Poster
Keywords: theoretical reinforcement learning, regret analysis, statistical learning theory, non-Markovian rewards, optimistic algorithms
TL;DR: We study a theory of reinforcement learning in which the learner receives feedback only once at the end of an episode, and provide a statistically and computationally efficient algorithm that achieves sublinear regret.
Abstract: We study a theory of reinforcement learning (RL) in which the learner receives binary feedback only once at the end of an episode. While this is an extreme test case for theory, it is also arguably more representative of real-world applications than the traditional requirement in RL practice that the learner receive feedback at every time step. Indeed, in many real-world applications of reinforcement learning, such as self-driving cars and robotics, it is easier to evaluate whether a learner's complete trajectory was ``good'' or ``bad'' than to provide a reward signal at each step. To show that learning is possible in this more challenging setting, we study the case where trajectory labels are generated by an unknown parametric model, and provide a statistically and computationally efficient algorithm that achieves sublinear regret.
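As a rough illustration of the feedback protocol described in the abstract, the sketch below wraps an episodic environment so that per-step rewards are hidden from the learner and a single binary label is revealed only when the episode ends. The logistic labeling model over accumulated trajectory features, the feature map `phi`, and the class name are illustrative assumptions; the abstract only specifies that labels come from an unknown parametric model.

```python
import numpy as np

class OncePerEpisodeFeedback:
    """Wraps an episodic environment so the learner sees no per-step reward,
    only a single binary label when the episode terminates.

    Assumption (illustration only): the label is Bernoulli with success
    probability sigmoid(<theta*, sum of per-step features>); the paper only
    states that labels are generated by an unknown parametric model.
    """

    def __init__(self, env, phi, theta_star, rng=None):
        self.env = env                # underlying episodic environment
        self.phi = phi                # feature map: (state, action) -> np.ndarray of shape (d,)
        self.theta_star = theta_star  # unknown true parameter, shape (d,)
        self.rng = rng or np.random.default_rng()
        self._feature_sum = None
        self._state = None

    def reset(self):
        self._feature_sum = np.zeros_like(self.theta_star, dtype=float)
        self._state = self.env.reset()
        return self._state

    def step(self, action):
        # Features are computed on the state in which the action was taken.
        self._feature_sum += self.phi(self._state, action)
        next_state, _reward, done, info = self.env.step(action)  # per-step reward is never revealed
        self._state = next_state
        label = None
        if done:
            # Binary trajectory label drawn once, at the end of the episode.
            p_good = 1.0 / (1.0 + np.exp(-self.theta_star @ self._feature_sum))
            label = int(self.rng.random() < p_good)
        return next_state, label, done, info
```

Under this kind of assumed model, a learner would collect (trajectory-feature, label) pairs across episodes, fit the unknown parameter by logistic regression, and plan optimistically within a confidence set around the estimate; the specifics of the paper's optimistic algorithm and its regret analysis are given in the full text.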
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf