LTL-Constrained Policy Optimization with Cycle Experience Replay

TMLR Paper 3426 Authors

02 Oct 2024 (modified: 05 Mar 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: Linear Temporal Logic (LTL) offers a precise means for constraining the behavior of reinforcement learning agents. However, in many settings where both a satisfaction condition and an optimality condition are present, LTL alone is insufficient to capture both. Instead, LTL-constrained policy optimization is needed, where the goal is to optimize a scalar reward under LTL constraints. This constrained optimization problem proves difficult in deep reinforcement learning (DRL) settings, where learned policies often ignore the LTL constraint because LTL satisfaction yields only a sparse signal. To alleviate this sparsity, we introduce Cycle Experience Replay (CyclER), a novel reward-shaping technique that exploits the underlying structure of the LTL constraint to guide a policy towards satisfaction by encouraging partial behaviors that comply with the constraint. We provide a theoretical guarantee that optimizing CyclER yields policies that satisfy the LTL constraint with near-optimal probability. We evaluate CyclER in three continuous control domains. Our experimental results show that optimizing CyclER in tandem with the existing scalar reward outperforms existing reward-shaping methods at finding performant LTL-satisfying policies.
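To make the core idea concrete, below is a minimal, illustrative sketch of automaton-based reward shaping in the spirit described by the abstract: the LTL constraint is compiled into a Büchi automaton, the automaton state is tracked alongside the environment, and a dense shaped reward is emitted for partial progress toward closing an accepting cycle rather than only for full satisfaction. This is not the paper's actual CyclER algorithm; the `BuchiAutomaton` class, the partial-progress bonus of 0.1, and the reset scheme are simplifying assumptions for illustration.

```python
# Illustrative sketch only: a hand-built Buchi automaton for the constraint
# "visit the goal infinitely often" (GF goal), with a shaping reward that
# pays out for partial progress around an accepting cycle. The paper's
# CyclER method is more general; names and constants here are assumptions.

from dataclasses import dataclass


@dataclass
class BuchiAutomaton:
    """Buchi automaton for an LTL constraint (hypothetical helper class)."""
    states: set
    initial: str
    accepting: set
    # transitions[(q, label)] -> next automaton state, where a label is the
    # frozenset of atomic propositions true in the current environment state.
    transitions: dict

    def step(self, q, label):
        # Fall back to a self-loop if no transition is defined for this label.
        return self.transitions.get((q, frozenset(label)), q)


def cycle_shaping_reward(automaton, q, q_next, progress):
    """Dense reward for partial progress toward closing an accepting cycle.

    Reaching an accepting state yields the full bonus and resets the
    progress counter; any other automaton-state change earns a small
    partial bonus (a simplification of a cycle-based shaping scheme).
    """
    if q_next in automaton.accepting:
        return 1.0, 0                  # cycle closed: full bonus, reset
    if q_next != q:
        return 0.1, progress + 1       # partial progress along the cycle
    return 0.0, progress               # no automaton progress: no bonus


# --- toy usage: constraint "visit the goal infinitely often" ---
aut = BuchiAutomaton(
    states={"q0", "q1"},
    initial="q0",
    accepting={"q1"},
    transitions={
        ("q0", frozenset({"goal"})): "q1",
        ("q0", frozenset()): "q0",
        ("q1", frozenset({"goal"})): "q1",
        ("q1", frozenset()): "q0",
    },
)

q, progress = aut.initial, 0
trajectory_labels = [set(), {"goal"}, set(), {"goal"}]  # labels per env step
for label in trajectory_labels:
    q_next = aut.step(q, label)
    r_shaped, progress = cycle_shaping_reward(aut, q, q_next, progress)
    # In training, r_shaped would be combined with the task's scalar reward.
    print(f"{q} --{sorted(label)}--> {q_next}: shaped reward {r_shaped}")
    q = q_next
```

In this toy trace, the agent receives a small bonus whenever the automaton advances and the full bonus each time it revisits the accepting state, so the otherwise sparse satisfaction signal is spread across the trajectory; the abstract's claim is that a principled version of this shaping preserves near-optimal satisfaction probability.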
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
[12/17/24] Updated manuscript to address changes requested by reviewers srmk and AdC7.
[12/25/24] Updated manuscript and abstract to address further changes requested by reviewer AdC7.
[1/1/25] Updated manuscript to address further changes requested by reviewer AdC7.
[1/6/25] Updated manuscript to fix formatting errors in the previous version.
[1/31/25] Updated manuscript to address changes requested by reviewer tqsv.
[2/3/25] Updated manuscript to address further changes requested by reviewer tqsv.
Assigned Action Editor: ~Matteo_Papini1
Submission Number: 3426