Keywords: Reinforcement Learning, Linear Function Approximation, Computational Complexity, NP-Hardness
TL;DR: We propose a novel reinforcement learning problem under a more moderate and realistic assumption, partial $q^{\pi}$-realizability, and show that learning remains computationally hard under greedy and softmax policy sets.
Abstract: This paper investigates the computational complexity of reinforcement learning within a novel linear function approximation regime, termed partial $q^{\pi}$-realizability. In this framework, the objective is to learn an $\epsilon$-optimal policy with respect to a predefined policy set $\Pi$, under the assumption that all value functions corresponding to policies in $\Pi$ are linearly realizable. This framework adopts assumptions that are weaker than those in the $q^{\pi}$-realizability setting yet stronger than those in the $q^*$-realizability setting. As a result, it provides a more practical model for reinforcement learning scenarios where function approximation naturally arises. We prove that learning an $\epsilon$-optimal policy in this newly defined setting is computationally hard.
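(For concreteness, the realizability condition can be sketched as follows; the feature map $\phi$, dimension $d$, and weight notation are illustrative assumptions rather than the paper's exact definitions: for every $\pi \in \Pi$ there exists $\theta_{\pi} \in \mathbb{R}^{d}$ such that $q^{\pi}(s,a) = \langle \phi(s,a), \theta_{\pi} \rangle$ for all state-action pairs $(s,a)$. In contrast, $q^{\pi}$-realizability requires this for every policy, while $q^*$-realizability requires it only for the optimal one.)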
More specifically, we establish NP-hardness under a parameterized greedy policy set (i.e., argmax) and, further, show that, unless NP = RP, an exponential lower bound (exponential in the feature-vector dimension) holds when the policy set contains softmax policies, under the Randomized Exponential Time Hypothesis. Our hardness results mirror those obtained in the $q^*$-realizability setting and suggest that the computational difficulty persists even when the policy class $\Pi$ is expanded beyond the optimal policy, reinforcing the robustness of the hardness of partial $q^{\pi}$-realizability under two important policy sets. To establish our negative results, our primary technical contribution is a pair of reductions from two complexity problems, $\delta$-Max-3SAT and $\delta$-Max-3SAT($b$), to instances of our problem settings: GLinear-$\kappa$-RL (under the greedy policy set) and SLinear-$\kappa$-RL (under the softmax policy set), respectively. Our findings indicate that positive computational results are generally unattainable in the context of partial $q^{\pi}$-realizability, in sharp contrast to the $q^{\pi}$-realizability setting under a generative access model.
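(As an illustrative sketch of the two policy sets, with the feature map $\phi$, parameter $\theta$, and temperature $\lambda$ being assumed notation rather than the paper's exact parameterization: the greedy set consists of policies $\pi_{\theta}(s) \in \arg\max_{a} \langle \phi(s,a), \theta \rangle$, and the softmax set of policies $\pi_{\theta,\lambda}(a \mid s) = \exp\big(\lambda \langle \phi(s,a), \theta \rangle\big) / \sum_{a'} \exp\big(\lambda \langle \phi(s,a'), \theta \rangle\big)$.)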
Primary Area: Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
Submission Number: 22837