## Optimism in Reinforcement Learning with Generalized Linear Function Approximation

28 Sept 2020 (edited 18 Feb 2021) · ICLR 2021 Poster
• Keywords: reinforcement learning, optimism, exploration, function approximation, theory, regret analysis, provable sample efficiency
• Abstract: We design a new provably efficient algorithm for episodic reinforcement learning with generalized linear function approximation. We analyze the algorithm under a new expressivity assumption that we call "optimistic closure," which is strictly weaker than assumptions from prior analyses for the linear setting. With optimistic closure, we prove that our algorithm enjoys a regret bound of $\widetilde{O}\left(H\sqrt{d^3 T}\right)$ where $H$ is the horizon, $d$ is the dimensionality of the state-action features and $T$ is the number of episodes. This is the first statistically and computationally efficient algorithm for reinforcement learning with generalized linear functions.
• One-sentence Summary: A provably efficient (statistically and computationally) algorithm for reinforcement learning with generalized linear function approximation and no explicit dynamics assumptions.
• Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
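To make the setting concrete, a minimal sketch of an optimistic value estimate with generalized linear function approximation is below. This is an illustration only, not the paper's algorithm: the function name `optimistic_q`, the logistic link, and the parameters `beta` and `Sigma_inv` are all assumed for exposition. The pattern, a GLM point estimate plus an elliptical confidence bonus on the features, is the standard form of optimism in this line of work.

```python
import numpy as np


def optimistic_q(phi, theta, Sigma_inv, beta):
    """Optimistic action-value estimate under a GLM model (illustrative sketch).

    phi       : (d,) state-action feature vector
    theta     : (d,) estimated GLM parameter
    Sigma_inv : (d, d) inverse feature covariance from observed data
    beta      : confidence-width scalar (set by the theoretical analysis)
    """
    # GLM point estimate: link function applied to the linear predictor.
    # A logistic link is assumed here purely for illustration.
    mean = 1.0 / (1.0 + np.exp(-(theta @ phi)))
    # Elliptical exploration bonus: large for rarely-visited feature directions.
    bonus = beta * np.sqrt(phi @ Sigma_inv @ phi)
    return mean + bonus
```

Acting greedily with respect to such optimistic estimates drives exploration toward under-sampled directions of the feature space, which is the mechanism behind the regret bound stated in the abstract.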