Track: Research Track
Keywords: Reinforcement Learning, Function Approximation, Optimization
Abstract: We study reinforcement learning (RL) in the agnostic policy learning setting, where the goal is to find a policy whose performance is competitive with the best policy in a given class of interest $\Pi$---crucially, without assuming that $\Pi$ contains the optimal policy.
We propose a general policy learning framework that reduces this problem to first-order optimization in a non-Euclidean space, yielding new algorithms and shedding light on the convergence properties of existing ones.
Specifically, under the assumption that $\Pi$ is convex and satisfies a variational gradient dominance (VGD) condition---an assumption known to be strictly weaker than more standard completeness and coverability conditions---we obtain sample complexity upper bounds for three policy learning algorithms: \emph{(i)} Steepest Descent Policy Optimization, derived from a constrained steepest descent method for non-convex optimization; \emph{(ii)} the classical Conservative Policy Iteration algorithm \citep{kakade2002approximately} reinterpreted through the lens of the Frank-Wolfe method, which leads to improved convergence results; and \emph{(iii)} an on-policy instantiation of the well-studied Policy Mirror Descent algorithm. Finally, we empirically evaluate the VGD condition across several standard environments, demonstrating the practical relevance of our key assumption.
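To make the terminology in the abstract concrete, the display below sketches the agnostic objective and one commonly used form of a variational gradient dominance (VGD) condition. The return notation $J(\pi)$, the constant $c \ge 1$, and the slack $\varepsilon_\Pi$ are illustrative assumptions; the paper's precise definitions may differ.

% Agnostic policy learning: compete with the best policy in \Pi,
% without assuming \Pi contains an optimal policy.
% J(\pi) denotes the expected return of policy \pi (illustrative notation).
\[
  \text{find } \hat{\pi} \in \Pi
  \quad\text{such that}\quad
  \max_{\pi^\star \in \Pi} J(\pi^\star) - J(\hat{\pi}) \;\le\; \epsilon .
\]

% A VGD-style condition (one common form; the paper's exact statement may differ):
% the best first-order improvement direction within \Pi dominates the remaining
% suboptimality, up to a multiplicative constant c and an additive slack \varepsilon_\Pi.
\[
  \max_{\pi' \in \Pi} \big\langle \nabla J(\pi),\, \pi' - \pi \big\rangle
  \;\ge\; \frac{1}{c}\Big( \max_{\pi^\star \in \Pi} J(\pi^\star) - J(\pi) \Big) - \varepsilon_\Pi
  \qquad \text{for all } \pi \in \Pi .
\]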
Submission Number: 12