Toward Computationally Efficient Inverse Reinforcement Learning via Reward Shaping

Published: 19 Mar 2024, Last Modified: 04 Apr 2024 · Tiny Papers @ ICLR 2024 · CC BY 4.0
Keywords: Inverse Reinforcement Learning, Reward Shaping, Computational Efficiency
TL;DR: We propose the usage of potential-based reward shaping to reduce the computational burden of IRL.
Abstract: Inverse reinforcement learning (IRL) is computationally challenging, with common approaches requiring the solution of multiple reinforcement learning (RL) sub-problems. This work motivates the use of potential-based reward shaping to reduce the computational burden of each RL sub-problem. It serves as a proof-of-concept, and we hope it inspires future developments toward computationally efficient IRL.
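As a generic illustration of the potential-based reward shaping the abstract refers to (the standard formulation of Ng et al., 1999, not necessarily the paper's exact construction), the shaped reward adds a term F(s, a, s') = γΦ(s') − Φ(s) to the environment reward; the potential function Φ and the example values below are hypothetical:

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99):
    """Augment an environment reward with a potential-based shaping term.

    phi_s and phi_s_next are the potentials Phi(s) and Phi(s') of the
    current and next state. Shaping of this form provably preserves the
    optimal policy while often speeding up the underlying RL solve.
    """
    return reward + gamma * phi_s_next - phi_s

# Hypothetical example: potential = negative distance to goal, so moving
# one step closer (Phi: -3 -> -2) yields a positive shaping bonus.
print(shaped_reward(0.0, phi_s=-3.0, phi_s_next=-2.0, gamma=1.0))  # -> 1.0
```

Because the shaping term telescopes along trajectories, any potential Φ can be used without changing which policies are optimal, which is what makes it attractive for cheapening each RL sub-problem inside an IRL loop.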
Supplementary Material: zip
Submission Number: 165