Reinforcement Learning with General LTL Objectives is Intractable

Published: 19 Jan 2022, Last Modified: 05 May 2023 · CLeaR-Workshop Poster
Keywords: reinforcement learning, linear temporal logic, probably approximately correct
TL;DR: We prove that reinforcement-learning algorithms cannot learn a near-optimal policy with high probability for any infinite-horizon LTL objective; equivalently, such learning is possible only for finite-horizon-decidable LTL objectives.
Abstract: In recent years, researchers have made significant progress in devising reinforcement-learning algorithms for optimizing linear temporal logic (LTL) objectives and LTL-like objectives. Despite these advancements, there are fundamental limitations to how well this problem can be solved that previous studies have alluded to but, to our knowledge, have not examined in depth. In this paper, we theoretically examine the hardness of learning with general LTL objectives. We formalize the problem under the probably approximately correct learning in Markov decision processes (PAC-MDP) framework, a standard framework for measuring sample complexity in reinforcement learning. In this formalization, we prove that the optimal policy for any LTL formula is PAC-MDP-learnable only if the formula belongs to the most limited class in the LTL hierarchy, which consists solely of finite-horizon-decidable properties. Practically, our result implies that, for non-finite-horizon-decidable LTL objectives, it is impossible for a reinforcement-learning algorithm to obtain a PAC-MDP guarantee on the performance of its learned policy after finitely many interactions with an unconstrained environment.
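For orientation, here is a rough sketch of the PAC-MDP guarantee referenced in the abstract, written in its standard discounted-reward form (the paper's formalization for LTL objectives differs in the performance measure but follows the same template): an algorithm $\mathcal{A}$ is PAC-MDP if, for every $\epsilon > 0$ and $\delta \in (0,1)$, with probability at least $1-\delta$ the number of timesteps on which its current policy is more than $\epsilon$ worse than optimal is polynomially bounded, i.e.
\[
\Pr\Big[\,\big|\{\, t : V^{\mathcal{A}_t}(s_t) < V^{*}(s_t) - \epsilon \,\}\big| \le \mathrm{poly}\big(|S|, |A|, \tfrac{1}{\epsilon}, \tfrac{1}{\delta}, \tfrac{1}{1-\gamma}\big)\,\Big] \ge 1 - \delta .
\]
To illustrate the learnable class (these example formulas are ours, not drawn from the paper): a bounded property such as "$p$ holds within the first $k$ steps" is finite-horizon-decidable, since its satisfaction is determined by the length-$k$ prefix of every trajectory; by contrast, "eventually $p$" ($\Diamond p$) and "$p$ infinitely often" ($\Box\Diamond p$) are not decidable from any finite prefix, so the paper's result places them outside the PAC-MDP-learnable class.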
Supplementary Material: pdf