Efficient Bayesian Inverse Reinforcement Learning via Conditional Kernel Density Estimation

Published: 29 Jan 2022, Last Modified: 05 May 2023, AABI 2022 Poster
Keywords: inverse reinforcement learning, conditional kernel density estimation, Bayesian inference
TL;DR: We introduce kernel density Bayesian inverse reinforcement learning (KD-BIRL), an efficient framework for recovering posterior estimates of a reinforcement learning agent's reward function.
Abstract: Inverse reinforcement learning (IRL) methods attempt to recover the reward function of an agent by observing its behavior. Given the large amount of uncertainty in the underlying reward function, it is often useful to model this function probabilistically, rather than estimate a single reward function. However, existing Bayesian approaches to IRL use a Q-value function to approximate the likelihood, leading to a computationally intractable and inflexible framework. Here, we introduce kernel density Bayesian IRL (KD-BIRL), a method that uses kernel density estimation to approximate the likelihood, or the probability of the observed states and actions given a reward function. This approximation allows for efficient posterior inference of the reward function given a sequence of agent observations. Empirically, using both linear and nonlinear reward functions in a Gridworld environment, we demonstrate that the KD-BIRL posterior centers around the true reward function.
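To make the likelihood approximation concrete, below is a minimal sketch of how a conditional kernel density estimate of p(state, action | reward) could feed a grid-based reward posterior. The 1-D state-action encoding, Gaussian kernels, bandwidths h_x and h_r, and the discretized reward grid are illustrative assumptions, not the paper's implementation.

```python
# Sketch: conditional kernel density estimation of the IRL likelihood
# p(x | r), where x encodes a (state, action) pair and r is a reward
# parameter, followed by a grid-based posterior over candidate rewards.
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def conditional_kde_likelihood(x, r, train_x, train_r, h_x=0.5, h_r=0.5):
    """Estimate p(x | r) from training demonstrations.

    train_x: (state, action) encodings of training demonstrations.
    train_r: reward parameters of the policies that generated them.
    h_x, h_r: kernel bandwidths (hypothetical values).
    """
    w_r = gaussian_kernel((r - train_r) / h_r)   # weight training points by reward proximity
    k_x = gaussian_kernel((x - train_x) / h_x)   # kernel in observation space
    return np.sum(w_r * k_x) / (np.sum(w_r) * h_x + 1e-12)

def reward_posterior(demos, train_x, train_r, reward_grid, prior=None):
    """Normalized posterior over a grid of candidate reward parameters."""
    if prior is None:
        prior = np.ones_like(reward_grid)
    log_post = np.log(prior)
    for x in demos:
        lik = np.array([conditional_kde_likelihood(x, r, train_x, train_r)
                        for r in reward_grid])
        log_post += np.log(lik + 1e-300)         # accumulate log-likelihood per demonstration
    post = np.exp(log_post - log_post.max())     # stabilize before normalizing
    return post / post.sum()
```

In this toy setup, the posterior should concentrate on reward values whose training demonstrations most resemble the observed agent behavior, mirroring the paper's claim that the KD-BIRL posterior centers around the true reward function.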