Dual Approximation Policy Optimization

Published: 19 Jun 2024, Last Modified: 26 Jul 2024, ARLET 2024 Poster, CC BY 4.0
Keywords: reinforcement learning, policy mirror descent, general function approximation, duality
TL;DR: We propose a novel policy optimization framework that not only enjoys fast convergence under general function approximation, but also incorporates popular practical methods as special cases.
Abstract: We propose Dual Approximation Policy Optimization (DAPO), a framework that incorporates general function approximation into policy mirror descent methods. In contrast to the popular approach of measuring function approximation errors with the $L_2$-norm, DAPO uses the dual Bregman divergence induced by the mirror map for the policy projection. This duality framework has both theoretical and practical implications: not only does it achieve fast linear convergence with general function approximation, but it also recovers several well-known practical methods as special cases, immediately equipping them with strong convergence guarantees.
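As a rough illustration of the duality described in the abstract (the notation $h$, $D_h$, $\eta$, $f_\theta$, $\rho$ is ours, and the paper's exact formulation may differ), the policy mirror descent update with mirror map $h$ and Bregman divergence $D_h$ is
$$\pi_{k+1}(\cdot\mid s) \in \arg\min_{p \in \Delta(\mathcal{A})} \Big\{ -\eta\,\langle Q^{\pi_k}(s,\cdot),\, p \rangle + D_h\big(p,\ \pi_k(\cdot\mid s)\big) \Big\},$$
whose exact solution satisfies $\nabla h(\pi_{k+1}(\cdot\mid s)) = \nabla h(\pi_k(\cdot\mid s)) + \eta\, Q^{\pi_k}(s,\cdot)$ in the dual space, up to normalization. Whereas the conventional approach would measure the error of an approximation $f_\theta$ to this dual target in the $L_2$-norm, a dual-Bregman criterion of the kind the abstract describes would instead use the divergence induced by the conjugate $h^*$:
$$\min_{\theta}\ \mathbb{E}_{s\sim\rho}\, D_{h^*}\Big( f_\theta(s,\cdot),\ \nabla h(\pi_k(\cdot\mid s)) + \eta\, Q^{\pi_k}(s,\cdot) \Big).$$
For example, with the negative-entropy mirror map, $D_{h^*}$ is the Bregman divergence of log-sum-exp, i.e. a KL divergence between the induced softmax policies, which suggests how softmax-policy methods could arise as special cases.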
Submission Number: 77