Keywords: Reinforcement Learning, Reward Shaping, Linear Temporal Logic (LTL)
TL;DR: This paper addresses the challenge of designing effective reward functions for RL with LTL task specifications under environmental uncertainty.
Abstract: There is a surge of interest in using formal languages such as Linear Temporal Logic (LTL) to precisely and succinctly specify complex tasks and derive reward functions for Reinforcement Learning (RL). However, existing methods often assign sparse rewards (e.g., giving a reward of 1 only if a task is completed and 0 otherwise). By providing feedback solely upon task completion, these methods fail to encourage successful subtask completion. This is particularly problematic in environments with inherent uncertainty, where task completion may be unreliable despite progress on intermediate goals. To address this limitation, we propose a suite of reward functions that incentivize an RL agent to complete a task specified by an LTL formula as much as possible, and develop an adaptive reward shaping approach that dynamically updates reward functions during the learning process. Experimental results on a range of benchmark RL environments demonstrate that the proposed approach generally outperforms baselines, achieving earlier convergence to a better policy with higher expected return and task completion rate.
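To make the contrast in the abstract concrete, the sketch below illustrates the difference between a sparse LTL-derived reward and a simple progress-based shaped reward over the states of a task automaton. This is only an illustrative toy, not the paper's reward functions or adaptive shaping scheme; the automaton, function names, and the linear progress measure are all assumptions introduced here.

```python
# Illustrative sketch only: sparse vs. progress-based reward over the
# automaton of a sequential LTL task phi = F(a & F b).
# The toy automaton and the linear progress measure are assumptions,
# not the reward functions proposed in the paper.

# Deterministic automaton: state 0 --a--> state 1 --b--> state 2 (accepting).
TRANSITIONS = {
    (0, "a"): 1,
    (1, "b"): 2,
}
ACCEPTING = {2}
NUM_STATES = 3


def step_automaton(q, label):
    """Advance the automaton on the label observed in the environment."""
    return TRANSITIONS.get((q, label), q)


def sparse_reward(q_next):
    """Sparse reward: 1 only when the whole LTL task is satisfied, else 0."""
    return 1.0 if q_next in ACCEPTING else 0.0


def progress_reward(q, q_next):
    """Shaped reward: credit each automaton transition toward acceptance,
    so the agent is also rewarded for completing subtasks."""
    progress = lambda s: s / (NUM_STATES - 1)  # fraction of subtasks done
    return progress(q_next) - progress(q)


if __name__ == "__main__":
    # Trace: the agent observes 'a', then an irrelevant label, then 'b'.
    q = 0
    for label in ["a", "c", "b"]:
        q_next = step_automaton(q, label)
        print(label, sparse_reward(q_next), progress_reward(q, q_next))
        q = q_next
```

In an uncertain environment where the final step may fail, the sparse reward gives no learning signal for reaching automaton state 1, whereas the progress-based reward still credits the completed subtask, which is the gap the paper's adaptive reward shaping approach targets.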
Latex Source Code: zip
Code Link: https://github.com/safe-autonomy-lab/AdaptiveRewardRL
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission517/Authors, auai.org/UAI/2025/Conference/Submission517/Reproducibility_Reviewers
Submission Number: 517