Sample complexity of variance-reduced policy gradient: weaker assumptions and lower bounds

Published: 01 Jan 2024, Last Modified: 26 Jan 2025 · Mach. Learn. 2024 · CC BY-SA 4.0
Abstract: Several variance-reduced versions of REINFORCE based on importance sampling achieve an improved \(O(\epsilon ^{-3})\) sample complexity for finding an \(\epsilon\)-stationary point, but only under an unrealistic assumption on the variance of the importance weights. In this paper, we propose the Defensive Policy Gradient (DEF-PG) algorithm, based on defensive importance sampling, which achieves the same \(O(\epsilon ^{-3})\) sample complexity without any assumption on the variance of the importance weights. We also show that this rate is not improvable, by establishing a matching \(\Omega (\epsilon ^{-3})\) lower bound, and that the \(O(\epsilon ^{-4})\) sample complexity of REINFORCE is in fact optimal under weaker assumptions on the policy class. Numerical simulations show promising results for the proposed technique compared to similar algorithms based on vanilla importance sampling.
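To illustrate why defensive importance sampling removes the need for assumptions on the weight variance, here is a minimal generic sketch (not the paper's DEF-PG algorithm; the mixture coefficient `alpha` and the Gaussian target/behavior distributions are illustrative assumptions): sampling from a defensive mixture \(\alpha p + (1-\alpha) q\) caps every importance weight at \(1/\alpha\), whereas vanilla weights \(p/q\) can be arbitrarily large.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5  # hypothetical defensive mixture coefficient

def logpdf(x, mu):
    # Log-density of N(mu, 1).
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi)

n = 100_000
# Defensive sampling: draw each sample from the mixture alpha*p + (1-alpha)*q,
# where p = N(0,1) is the target and q = N(2,1) is the behavior distribution.
from_p = rng.random(n) < alpha
x = np.where(from_p, rng.normal(0.0, 1.0, n), rng.normal(2.0, 1.0, n))

p = np.exp(logpdf(x, 0.0))
q = np.exp(logpdf(x, 2.0))

w_vanilla = p / q                          # unbounded in the tails
w_def = p / (alpha * p + (1 - alpha) * q)  # provably <= 1/alpha

# The defensive weights still give an unbiased estimate of E_p[f]:
# E_mix[w_def * f] = integral of (p / mix) * f * mix = E_p[f].
est = np.mean(w_def * x)  # estimates E_p[x] = 0

print(f"max vanilla weight:   {w_vanilla.max():.1f}")
print(f"max defensive weight: {w_def.max():.3f}  (bound: {1/alpha})")
print(f"estimate of E_p[x]:   {est:.3f}")
```

Because the weights are bounded by construction, their variance is automatically finite, which is what allows the \(O(\epsilon^{-3})\) analysis to go through without further assumptions.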