Learning Robust Rewards with Adversarial Inverse Reinforcement Learning
Justin Fu, Katie Luo, Sergey Levine
Feb 15, 2018 (modified: Feb 22, 2018) · ICLR 2018 Conference Blind Submission
Abstract: Reinforcement learning provides a powerful and general framework for decision
making and control, but its application in practice is often hindered by the need
for extensive feature and reward engineering. Deep reinforcement learning methods
can remove the need for explicit engineering of policy or value features, but
still require a manually specified reward function. Inverse reinforcement learning
holds the promise of automatic reward acquisition, but has proven exceptionally
difficult to apply to large, high-dimensional problems with unknown dynamics. In
this work, we propose AIRL, a practical and scalable inverse reinforcement learning
algorithm based on an adversarial reward learning formulation that is competitive
with direct imitation learning algorithms. Additionally, we show that AIRL is
able to recover portable reward functions that are robust to changes in dynamics,
enabling us to learn policies even under significant variation in the environment
seen during training.
TL;DR: We propose an adversarial inverse reinforcement learning algorithm capable of learning reward functions which can transfer to new, unseen environments.
Keywords: inverse reinforcement learning, deep reinforcement learning
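To make the "adversarial reward learning formulation" in the abstract concrete, below is a minimal sketch of the quantities AIRL's discriminator computes, following the formulation in the paper: the discriminator is restricted to the form D(s, a, s') = exp(f(s, a, s')) / (exp(f(s, a, s')) + pi(a|s)), where f(s, a, s') = g(s, a) + gamma * h(s') - h(s) splits into a reward term g and a potential-based shaping term h, and the policy is trained using log D - log(1 - D) as its reward. The function and argument names here (airl_discriminator_logit, g_sa, and so on) are illustrative stand-ins, not the authors' released code, and scalar inputs stand in for network outputs.

```python
# A minimal sketch of the AIRL discriminator quantities, assuming scalar
# stand-ins for network outputs. Names are illustrative, not from the
# authors' implementation.

def airl_discriminator_logit(g_sa, h_s, h_next_s, log_pi_a_s, gamma=0.99):
    """Logit of the AIRL discriminator, i.e. log D - log(1 - D).

    The discriminator is restricted to the form
        D(s, a, s') = exp(f(s, a, s')) / (exp(f(s, a, s')) + pi(a|s)),
    with f(s, a, s') = g(s, a) + gamma * h(s') - h(s), so that g can be
    recovered as a reward disentangled from the shaping potential h.
    """
    f = g_sa + gamma * h_next_s - h_s   # shaped reward estimate f(s, a, s')
    return f - log_pi_a_s               # log D - log(1 - D) simplifies to this


def airl_policy_reward(g_sa, h_s, h_next_s, log_pi_a_s, gamma=0.99):
    """Reward handed to the policy optimizer: log D - log(1 - D)."""
    return airl_discriminator_logit(g_sa, h_s, h_next_s, log_pi_a_s, gamma)


# Toy usage: f = 1.2 + 0.99 * 0.7 - 0.5 = 1.393, so the logit is 2.393.
print(airl_discriminator_logit(g_sa=1.2, h_s=0.5, h_next_s=0.7, log_pi_a_s=-1.0))
```

Because the shaping term gamma * h(s') - h(s) telescopes along trajectories, it does not change the optimal policy; this is what lets the learned g serve as the portable, dynamics-robust reward the abstract describes.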