Abstract: Reinforcement learning provides a powerful and general framework for decision
making and control, but its application in practice is often hindered by the need
for extensive feature and reward engineering. Deep reinforcement learning methods
can remove the need for explicit engineering of policy or value features, but
still require a manually specified reward function. Inverse reinforcement learning
holds the promise of automatic reward acquisition, but has proven exceptionally
difficult to apply to large, high-dimensional problems with unknown dynamics. In
this work, we propose AIRL, a practical and scalable inverse reinforcement learning
algorithm based on an adversarial reward learning formulation that is competitive
with direct imitation learning algorithms. Additionally, we show that AIRL is
able to recover portable reward functions that are robust to changes in dynamics,
enabling us to learn policies even under significant variation from the environment
seen during training.
TL;DR: We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions that can transfer to new, unseen environments.
Keywords: inverse reinforcement learning, deep reinforcement learning
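As a rough illustration of the adversarial reward learning formulation, the sketch below trains a discriminator of the form D(s,a) = exp(f(s,a)) / (exp(f(s,a)) + pi(a|s)) to distinguish expert transitions from policy samples, and uses log D - log(1 - D) = f(s,a) - log pi(a|s) as the learned reward for the policy update. This is a minimal sketch rather than the full AIRL algorithm (which additionally uses a shaping term to recover dynamics-robust rewards); names such as f_net, obs_dim, and the batch format are illustrative assumptions in a PyTorch-style setup, not the authors' code.

# Minimal sketch of adversarial reward learning (assumed PyTorch-style setup;
# f_net, obs_dim, act_dim, and the batch format are illustrative placeholders).
import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim = 11, 3  # example dimensions

# f_theta(s, a): learned reward estimator (illustrative architecture).
f_net = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(f_net.parameters(), lr=1e-3)

def discriminator_logit(obs, act, log_pi):
    """Logit of D(s,a) = exp(f(s,a)) / (exp(f(s,a)) + pi(a|s)), i.e. f(s,a) - log pi(a|s)."""
    f = f_net(torch.cat([obs, act], dim=-1)).squeeze(-1)
    return f - log_pi

def discriminator_update(expert_batch, policy_batch):
    """Logistic regression: expert transitions labeled 1, policy samples labeled 0."""
    logit_exp = discriminator_logit(*expert_batch)
    logit_pol = discriminator_logit(*policy_batch)
    loss = (F.binary_cross_entropy_with_logits(logit_exp, torch.ones_like(logit_exp))
            + F.binary_cross_entropy_with_logits(logit_pol, torch.zeros_like(logit_pol)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def learned_reward(obs, act, log_pi):
    """Reward handed to the policy optimizer: log D - log(1 - D) = f(s,a) - log pi(a|s)."""
    with torch.no_grad():
        return discriminator_logit(obs, act, log_pi)

# Example call with random stand-in data (batches of 32 transitions with log pi(a|s)).
expert = (torch.randn(32, obs_dim), torch.randn(32, act_dim), torch.randn(32))
policy = (torch.randn(32, obs_dim), torch.randn(32, act_dim), torch.randn(32))
discriminator_update(expert, policy)

In the full algorithm the policy and discriminator are updated alternately, and restricting f to a state-only reward plus a shaping term is what yields the reward functions robust to changes in dynamics described in the abstract.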