Auto-Encoding Inverse Reinforcement Learning

Published: 28 Jan 2022, Last Modified: 13 Feb 2023, ICLR 2022 Submitted
Keywords: Adversarial Imitation Learning, Inverse Reinforcement Learning, Auto-Encoding
Abstract: Reinforcement learning (RL) provides a powerful framework for decision-making, but its application in practice often requires a carefully designed reward function. Inverse Reinforcement Learning (IRL) has shed light on automatic reward acquisition, but it is still difficult to apply IRL to solve real-world tasks. In this work, we propose Auto-Encoding Inverse Reinforcement Learning (AEIRL), a robust and scalable IRL framework that belongs to the adversarial imitation learning class. To recover reward functions from expert demonstrations, AEIRL uses the reconstruction error of an auto-encoder as the learning signal, which provides more information for optimizing policies than the binary logistic loss. Subsequently, we use the derived objective functions to train the reward function and the RL agent. Experiments show that AEIRL outperforms state-of-the-art methods in the MuJoCo environments. More importantly, in more realistic settings, AEIRL shows much better robustness when the expert demonstrations are noisy. Specifically, our method achieves a $16\%$ relative improvement over the best baseline (FAIRL) on clean expert data and a $38\%$ relative improvement over the best baseline (PWIL) on noisy expert data, both measured by overall averaged scaled rewards.
One-sentence Summary: A new adversarial imitation learning method based on an encoding-decoding process.
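To make the core idea concrete, below is a minimal sketch of how an auto-encoder's reconstruction error can stand in for a discriminator's binary logistic loss in adversarial imitation learning. It is not the paper's exact architecture or objective: the network sizes, the EBGAN-style margin loss, and all names (`AutoEncoder`, `ae_loss`, `reward`, `margin`) are illustrative assumptions.

```python
# Minimal sketch (PyTorch). Architecture and losses are assumptions,
# not AEIRL's published formulation.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Auto-encoder over concatenated (state, action) pairs."""
    def __init__(self, sa_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(sa_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, sa_dim))

    def recon_error(self, sa: torch.Tensor) -> torch.Tensor:
        # Per-sample mean squared reconstruction error.
        return ((self.decoder(self.encoder(sa)) - sa) ** 2).mean(dim=-1)

def ae_loss(ae: AutoEncoder, expert_sa: torch.Tensor,
            policy_sa: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # EBGAN-style objective (assumption): reconstruct expert pairs well,
    # push reconstruction error on policy pairs above a margin.
    expert_err = ae.recon_error(expert_sa).mean()
    policy_err = ae.recon_error(policy_sa)
    return expert_err + torch.clamp(margin - policy_err, min=0.0).mean()

def reward(ae: AutoEncoder, sa: torch.Tensor) -> torch.Tensor:
    # Real-valued learning signal for the RL agent: low reconstruction
    # error under the expert-fitted auto-encoder maps to high reward.
    with torch.no_grad():
        return -ae.recon_error(sa)
```

Compared with a binary discriminator, whose logistic loss saturates once policy and expert samples are easily separated, the reconstruction error is a continuous, per-sample quantity, which is one plausible reading of the abstract's claim that it "provides more information for optimizing policies."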