Bayesian Methods for Constraint Inference in Reinforcement Learning

Published: 28 Nov 2022, Last Modified: 28 Feb 2023
Accepted by TMLR
Abstract: Learning constraints from demonstrations provides a natural and efficient way to improve the safety of AI systems; however, prior work only considers learning a single point estimate of the constraints. By contrast, we consider the problem of inferring constraints from demonstrations using a Bayesian perspective. We propose Bayesian Inverse Constraint Reinforcement Learning (BICRL), a novel approach that infers a posterior probability distribution over constraints from demonstrated trajectories. The main advantages of BICRL, compared to prior constraint inference algorithms, are (1) the freedom to infer constraints from partial trajectories and even from disjoint state-action pairs, (2) the ability to infer constraints from suboptimal demonstrations and in stochastic environments, and (3) the opportunity to use the posterior distribution over constraints to implement active learning and robust policy optimization techniques. We show that BICRL outperforms pre-existing constraint learning approaches, leading to more accurate constraint inference and consequently safer policies. We further propose Hierarchical BICRL, which infers constraints locally in sub-spaces of the entire domain and then composes global constraint estimates, leading to accurate and computationally efficient constraint estimation.
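The core idea of the abstract can be illustrated with a toy Bayesian update over per-state constraint indicators: states a (possibly suboptimal) demonstrator persistently avoids accumulate posterior mass as constrained. This is only a hedged sketch of the Bayesian perspective described above, not the paper's BICRL algorithm (which performs MCMC inference with a Boltzmann demonstrator model); the visit probabilities `eps`, `p_free`, the episode counts, and the independence assumptions are all illustrative.

```python
import numpy as np

K = 20  # number of demonstration episodes (illustrative)
# Episodes in which the demonstrator entered each of 5 states;
# state 2 is almost never visited, hinting that it is constrained.
k = np.array([16, 14, 1, 17, 15])

eps = 0.05    # visit prob. if a state IS constrained (suboptimal slips)
p_free = 0.8  # visit prob. if a state is NOT constrained
prior = 0.5   # independent prior P(constrained) for each state

# Per-state binomial log-likelihoods (the binomial coefficients are
# identical under both hypotheses and cancel in the posterior ratio).
log_lik_c = k * np.log(eps) + (K - k) * np.log(1 - eps)
log_lik_u = k * np.log(p_free) + (K - k) * np.log(1 - p_free)

# Posterior log-odds of "constrained" via Bayes' rule, per state.
log_odds = np.log(prior) + log_lik_c - (np.log(1 - prior) + log_lik_u)
posterior = 1.0 / (1.0 + np.exp(-log_odds))
print(np.round(posterior, 3))  # state 2 is flagged as constrained
```

Because each state carries a full posterior probability rather than a point estimate, downstream uses mentioned in the abstract follow naturally: active learning can query states with posterior near 0.5, and robust policy optimization can avoid any state whose posterior exceeds a safety threshold.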
Submission Length: Long submission (more than 12 pages of main content)
Video: https://drive.google.com/file/d/1_DMCjrmn5FygavUND1e71qJeNEK_Utch/view?usp=sharing
Code: https://github.com/gitcal/BICRL.git
Assigned Action Editor: ~Matthew_Walter1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 196