AMBER: An Entropy Maximizing Environment Design Algorithm for Inverse Reinforcement Learning

Published: 17 Jun 2024 · Last Modified: 02 Jul 2024 · ICML 2024 Workshop MHFAIA Poster · CC BY 4.0
Keywords: Inverse Reinforcement Learning, Active Learning
Abstract: In Inverse Reinforcement Learning (IRL), we learn a human's underlying reward function from observations of their behavior. Recent work shows that we can learn the reward function more accurately by observing the human in multiple related environments, but efficiently finding informative environments remains an open question. We present $\texttt{AMBER}$, an information-theoretic algorithm that generates highly informative environments. Through theoretical and empirical analysis, we show that $\texttt{AMBER}$ efficiently finds informative environments and improves reward learning.
Submission Number: 46
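The abstract describes an information-theoretic criterion for selecting environments that are informative about the human's reward function. The sketch below is not AMBER itself (the paper's algorithm is not reproduced here); it is a minimal illustrative example of the general idea, under the assumption of a discrete Bayesian posterior over a handful of reward hypotheses: score each candidate environment by the expected entropy of the reward posterior after observing a demonstration in it, and pick the environment that minimizes that expected entropy (i.e., maximizes expected information gain). All names, sizes, and the random likelihood tables are hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_posterior_entropy(prior, lik):
    """Expected entropy of the reward posterior after one demonstration.

    prior[r]  = current belief in reward hypothesis r
    lik[r, d] = P(demonstration d | reward hypothesis r) in this environment
    Returns E_d[ H(P(r | d)) ] under the prior predictive P(d).
    """
    joint = prior[:, None] * lik          # P(r, d)
    p_d = joint.sum(axis=0)               # prior predictive P(d)
    epe = 0.0
    for d in range(lik.shape[1]):
        if p_d[d] > 0:
            posterior = joint[:, d] / p_d[d]
            epe += p_d[d] * entropy(posterior)
    return epe

rng = np.random.default_rng(0)
prior = np.array([0.5, 0.3, 0.2])         # belief over 3 reward hypotheses

# Hypothetical candidate environments: each induces a likelihood table
# over 4 possible demonstrations for each reward hypothesis.
candidate_envs = [rng.dirichlet(np.ones(4), size=3) for _ in range(5)]

# Choose the environment whose expected demonstration shrinks the
# posterior entropy the most (maximum expected information gain).
scores = [expected_posterior_entropy(prior, lik) for lik in candidate_envs]
best = int(np.argmin(scores))
print("chosen environment index:", best)
```

Two sanity checks on the criterion: an environment whose likelihood table is identical across reward hypotheses is uninformative, so its expected posterior entropy equals the prior entropy; an environment where each hypothesis produces a distinct demonstration drives the expected posterior entropy to zero.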