Contextual Markov Decision Processes using Generalized Linear Models

Published: 28 May 2019, Last Modified: 05 May 2023, RL4RealLife 2019
Keywords: Reinforcement Learning, Markov Decision Processes, Online Learning, Regret Analysis, Contextual Learning
Abstract: We consider the recently proposed reinforcement learning (RL) framework of Contextual Markov Decision Processes (CMDPs), where the agent has a sequence of episodic interactions with tabular environments chosen from a possibly infinite set. The parameters of these environments depend on a context vector that is available to the agent at the start of each episode. In this paper, we propose a no-regret online RL algorithm for the setting where the MDP parameters are obtained from the context using generalized linear models (GLMs). The proposed algorithm, GL-ORL, relies on efficient online updates and is also memory efficient. Our analysis of the algorithm gives new results for the logit link case and improves previous bounds in the linear case. Our work is theoretical, and while we primarily focus on regret bounds, we also aim to highlight the ubiquitous sequential decision-making problem of learning generalizable policies for a population of individuals.
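To make the setup concrete, below is a minimal sketch (in Python/NumPy, not taken from the paper) of how a context vector can parameterize a tabular MDP's transition probabilities through a GLM with a logit (softmax) link. The weight tensor `W`, the function names, and all shapes are illustrative assumptions; the sketch omits the confidence sets and efficient online updates that GL-ORL uses to obtain its regret guarantees.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def contextual_transitions(context, W):
    """Map a context to per-(state, action) next-state distributions.

    context: (d,) context vector observed at the start of an episode.
    W:       (S, A, S, d) weight tensor (hypothetical parameterization);
             W[s, a, s_next] holds the GLM weights for the logit of
             transitioning s -> s_next under action a.
    Returns: (S, A, S) array P, where P[s, a] is a probability
             distribution over next states.
    """
    logits = W @ context  # contracts the trailing d-axis -> shape (S, A, S)
    return softmax(logits)

# Example: 3 states, 2 actions, 4-dimensional context.
rng = np.random.default_rng(0)
S, A, d = 3, 2, 4
W = rng.normal(size=(S, A, S, d))
context = rng.normal(size=d)
P = contextual_transitions(context, W)
assert np.allclose(P.sum(axis=-1), 1.0)  # each row is a valid distribution
```

Under this kind of parameterization, observing the context at the start of an episode pins down the episode's MDP, so a learner that estimates the GLM weights online can generalize across the population of environments rather than learning each one from scratch.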