Keywords: Inverse Reinforcement Learning, Imitation Learning, Successor Features
Abstract: In inverse reinforcement learning (IRL), an agent seeks to replicate expert demonstrations through interactions with the environment.
Traditionally, IRL is treated as an adversarial game in which an adversary searches over reward models and a learner maximizes the candidate reward through repeated RL procedures.
This game-solving approach is both computationally expensive and difficult to stabilize.
In this work, we propose a novel approach to IRL by _direct policy search_:
by exploiting a linear factorization of the return as the inner product of successor features and a reward vector, we design an IRL algorithm via policy gradient descent on the gap between the learner's and the expert's features.
Our non-adversarial method does not require learning an explicit reward function and can be solved seamlessly with existing RL algorithms.
Remarkably, our approach works in state-only settings without expert action labels, a setting that behavior cloning (BC) cannot handle.
Empirical results demonstrate that our method learns from as little as a single expert demonstration and achieves improved performance on various control tasks.
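The linear factorization at the heart of the abstract can be illustrated with a minimal numerical sketch. Assuming the reward is linear in state features, r(s) = phi(s)·w, the discounted return of a trajectory equals the inner product of its successor features (the discounted sum of features) with the reward vector. All names below are illustrative, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.9          # discount factor
T, d = 5, 3          # trajectory length, feature dimension
phi = rng.normal(size=(T, d))   # state features along a trajectory
w = rng.normal(size=d)          # (unknown) reward weight vector

# Successor features: discounted sum of features along the trajectory.
psi = sum(gamma**t * phi[t] for t in range(T))

# Return computed two ways: directly from per-step rewards,
# and via the factorization <psi, w>.
ret_direct = sum(gamma**t * (phi[t] @ w) for t in range(T))
ret_factored = psi @ w
assert np.isclose(ret_direct, ret_factored)

# The IRL objective sketched in the abstract then minimizes a gap
# between learner and expert successor features, e.g.
# ||psi_learner - psi_expert||^2, without learning an explicit reward.
```

Because the gap depends only on feature expectations, it can be driven down by standard policy-gradient updates, which is what lets the method avoid the inner adversarial reward search.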
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11360