Keywords: imitation learning, robust, uncertainty
TL;DR: This paper presents a method for using uncertainty quantification to improve agent performance.
Abstract: Machine-learning paradigms such as imitation learning and reinforcement learning can generate highly performant agents in a variety of complex environments. However, commonly used methods require large quantities of data and/or a known reward function. Without extensive data, agents may find themselves outside of their training distribution, which could lead to unsafe behavior. This paper presents a method called Continuous Mean-Zero Disagreement-Regularized Imitation Learning (CMZ-DRIL) that employs a novel reward structure to improve the performance of imitation-learning agents that have access to only a handful of expert demonstrations.
CMZ-DRIL uses reinforcement learning to minimize uncertainty among an ensemble of agents trained to model the expert demonstrations.
Rather than relying on environment-specific rewards, the method creates a continuous, mean-zero reward function from the action disagreement of the agent ensemble, encouraging the agent to return to states of higher certainty, that is, states that resemble the training distribution. As demonstrated in a waypoint-navigation environment and in two MuJoCo environments, CMZ-DRIL can generate more performant agents than previous approaches on several key metrics.
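To make the reward structure concrete, below is a minimal sketch of a disagreement-based, mean-zero reward consistent with the abstract's description. The variance-of-actions disagreement measure, the `scale` parameter, and batch-wise centering are illustrative assumptions; the abstract does not specify the exact formula used by CMZ-DRIL.

```python
import numpy as np

def disagreement_reward(ensemble_actions, scale=1.0):
    """Continuous, mean-zero reward from ensemble action disagreement (illustrative sketch).

    ensemble_actions: array of shape (num_models, batch, action_dim) holding the
    action each ensemble member proposes for every state in a batch.
    Returns one reward per state.
    """
    # Disagreement: per-state variance of proposed actions, averaged over action dimensions.
    variance = ensemble_actions.var(axis=0).mean(axis=-1)  # shape: (batch,)
    # Continuous penalty that grows with disagreement, so higher certainty yields higher reward.
    raw_reward = -scale * variance
    # Center over the batch so the reward signal is mean-zero (an assumed choice of centering).
    return raw_reward - raw_reward.mean()

# Usage: 5 ensemble members, 8 sampled states, 3-dimensional actions.
rng = np.random.default_rng(0)
actions = rng.normal(size=(5, 8, 3))
print(disagreement_reward(actions))
```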
Submission Number: 24