Understanding Inverse Reinforcement Learning under Overparameterization: Non-Asymptotic Analysis and Global Optimality
Abstract: The goal of the inverse reinforcement learning (IRL) task is to identify the underlying reward function and the corresponding optimal policy from a set of expert demonstrations.
While the theoretical guarantees of most IRL algorithms rely on a linear reward structure, we aim to extend the theoretical understanding of IRL to scenarios where the reward function is parameterized by neural networks.
Meanwhile, conventional IRL algorithms usually adopt a nested structure, leading to computational inefficiency, especially in high-dimensional settings.
To address this problem, we propose the first two-timescale single-loop IRL algorithm under a neural-network-parameterized
reward and provide a non-asymptotic convergence analysis under overparameterization.
Although prior optimality results for linear rewards do not apply, we show that our algorithm can identify the globally optimal reward
and policy under certain neural network structures.
This is the first IRL algorithm with a non-asymptotic convergence guarantee that
provably achieves global optimality in neural network settings.
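A minimal sketch of what a two-timescale single-loop IRL update with a neural-network reward can look like is given below. This is illustrative only and not the authors' algorithm: the tabular MDP, soft value iteration for the fast policy step, the occupancy-matching reward gradient, and all names and hyperparameters are assumptions made for the example.

```python
# Illustrative sketch (not the paper's algorithm): two-timescale single-loop IRL
# with a small MLP reward on a random tabular MDP. Fast timescale: policy update;
# slow timescale: reward update.
import torch
import torch.nn as nn

S, A, gamma = 20, 4, 0.95
P = torch.softmax(torch.randn(S, A, S), dim=-1)        # random transition kernel
expert_sa = torch.randint(0, S * A, (512,))            # placeholder expert (s, a) data

reward_net = nn.Sequential(nn.Linear(S + A, 64), nn.ReLU(), nn.Linear(64, 1))

def reward_table():
    """Evaluate the neural reward on every (s, a) pair using one-hot features."""
    s_feat = torch.eye(S).repeat_interleave(A, dim=0)
    a_feat = torch.eye(A).repeat(S, 1)
    return reward_net(torch.cat([s_feat, a_feat], dim=1)).view(S, A)

def soft_policy(r, iters=50):
    """Fast timescale: approximate soft value iteration under reward r."""
    V = torch.zeros(S)
    for _ in range(iters):
        Q = r + gamma * P @ V
        V = torch.logsumexp(Q, dim=1)
    return torch.softmax(Q, dim=1)                      # soft-optimal policy

def occupancy(pi, iters=200):
    """State-action visitation frequencies of policy pi (uniform start)."""
    d = torch.full((S,), 1.0 / S)
    occ = torch.zeros(S, A)
    for _ in range(iters):
        occ += d.unsqueeze(1) * pi
        d = torch.einsum('s,sa,sab->b', d, pi, P)
    return occ / occ.sum()

opt = torch.optim.SGD(reward_net.parameters(), lr=1e-3)  # slow timescale
for k in range(200):
    r = reward_table()
    pi = soft_policy(r.detach())                         # single fast policy step per loop
    # Slow reward step: maximum-entropy-style surrogate that pushes the
    # learner's expected reward down and the expert's up.
    learner = (occupancy(pi) * r).sum()
    expert = r.view(-1)[expert_sa].mean()
    loss = learner - expert
    opt.zero_grad(); loss.backward(); opt.step()
```

The single-loop structure is visible in the training loop: each iteration performs one (cheap) policy step and one reward gradient step, rather than solving the inner RL problem to convergence before every reward update.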
Submission Number: 1017