Keywords: Inverse Reinforcement Learning
Abstract: This paper studies the problem in which a learner aims to learn the expert's reward function from interaction with the expert, as well as how to interact with the expert. We formulate the problem as a stochastic bi-level optimization problem and develop a double-loop algorithm, "general-sum interactive inverse reinforcement learning" (GSIIRL). In GSIIRL, the learner learns the expert's reward function in the inner loop and then learns how to interact with the expert in the outer loop. We theoretically prove the convergence of our algorithm and validate it through simulations.
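The abstract describes a double-loop structure: an inner loop that estimates the expert's reward parameters and an outer loop that updates how the learner interacts with the expert. The sketch below illustrates that generic bi-level pattern only; the objective functions, function names, and hyperparameters are illustrative stand-ins, not the paper's actual GSIIRL losses.

```python
import numpy as np

# Hypothetical sketch of a double-loop stochastic bi-level optimization,
# mirroring the structure described in the abstract. The quadratic
# objectives below are toy stand-ins, NOT the paper's GSIIRL losses.

def inner_loss_grad(theta, phi):
    # Gradient of a toy inner objective: fit reward parameters theta
    # given the current interaction parameters phi.
    return 2.0 * (theta - phi)

def outer_loss_grad(phi, theta):
    # Gradient of a toy outer objective: choose interaction parameters
    # phi given the current reward estimate theta.
    return 2.0 * (phi - 0.5 * theta) + 0.1 * phi

def double_loop_sketch(n_outer=200, n_inner=50, lr_in=0.1, lr_out=0.05,
                       dim=3, seed=0):
    rng = np.random.default_rng(seed)
    phi = rng.normal(size=dim)   # learner's interaction parameters
    theta = np.zeros(dim)        # estimate of the expert's reward parameters
    for _ in range(n_outer):
        # Inner loop: learn the expert's reward under the current interaction.
        for _ in range(n_inner):
            theta = theta - lr_in * inner_loss_grad(theta, phi)
        # Outer loop: improve how the learner interacts with the expert.
        phi = phi - lr_out * outer_loss_grad(phi, theta)
    return theta, phi
```

With these toy quadratics the inner solution tracks `phi` and the outer iterate contracts toward a fixed point, which is the qualitative behavior a convergence proof for such a scheme would establish.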
Supplementary Material: zip
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7900