Keywords: Inverse reinforcement learning, Reinforcement learning, Imitation learning, Robots, Reward learning, Robot learning
TL;DR: We construct interpretable and compact reward models from observational data. The learned reward model can be used directly with standard reinforcement learning frameworks to solve complex tasks.
Abstract: In complex real-world tasks such as robotic manipulation and autonomous driving, collecting expert demonstrations is often more straightforward than specifying precise learning objectives and task descriptions. Learning from expert data can be achieved through behavioral cloning or by learning a reward function, i.e., inverse reinforcement learning. The latter allows training on additional data outside the training distribution, guided by the inferred reward function. We propose a novel approach that constructs compact and interpretable reward models from automatically selected state features. The inferred rewards have an explicit form and enable the learning of policies that closely match expert behavior by training standard reinforcement learning algorithms from scratch. We validate our method's performance in various robotic environments with continuous and high-dimensional state spaces.
Supplementary Material: zip
Website: https://sites.google.com/view/transparent-reward
Code: https://github.com/baimukashev/reward-learning
Publication Agreement: pdf
Student Paper: yes
Spotlight Video: mp4
Submission Number: 696