Policy Optimization via Adv2: Adversarial Learning on Advantage Functions

TMLR Paper 3924 Authors

09 Jan 2025 (modified: 09 May 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: We revisit the reduction of learning in adversarial Markov decision processes (MDPs) to adversarial learning based on $Q$-values; this reduction has been considered in a number of recent articles as one building block for policy optimization. First, we consider and extend this reduction in an ideal setting where an oracle provides value functions: it may involve any adversarial learning strategy (not just exponential weights) and it may be based equally well on $Q$-values or on advantage functions. We then present two extensions: on the one hand, convergence of the last iterate for a vast class of adversarial learning strategies (again, not just exponential weights) satisfying a property called monotonicity of weights; on the other hand, stronger regret criteria for learning in MDPs, inherited from the stronger regret criteria of adversarial learning known as strongly adaptive regret and tracking regret. Third, we demonstrate how adversarial learning, also referred to as aggregation of experts, relates to the orchestration of expert policies (also known as imitation learning): via yet another simple reduction, we obtain stronger performance guarantees in this setting than the existing ones. Finally, we discuss the impact of the reduction of learning in adversarial MDPs to adversarial learning in the practical scenario where transition kernels are unknown and value functions must be learned. In particular, we review the literature and note that many strategies for policy optimization feature a policy-improvement step based on exponential weights with estimated $Q$-values. Our main message is that this step may be replaced by the application of any adversarial learning strategy to estimated $Q$-values or to estimated advantage functions. We leave the empirical evaluation of these twists for future research.
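
To make the abstract's main message concrete, here is a minimal sketch (not the paper's implementation) of the policy-improvement step it describes: in each state, the policy over actions is updated by feeding estimated advantages to an adversarial-learning ("experts") strategy, and the exponential-weights update can be swapped for another strategy. All names below (update_policy, exp_weights, poly_weights, eta) are illustrative assumptions, not an API from the paper.

```python
import numpy as np

def exp_weights(weights, advantages, eta=0.1):
    """Exponential-weights update: w_a <- w_a * exp(eta * A(s, a))."""
    new_w = weights * np.exp(eta * advantages)
    return new_w / new_w.sum()

def poly_weights(weights, advantages, eta=0.1):
    """Polynomial-weights update: w_a <- w_a * (1 + eta * A(s, a)),
    one example of an alternative adversarial learning strategy.
    Assumes eta * A(s, a) > -1; clipping keeps weights positive."""
    new_w = np.clip(weights * (1.0 + eta * advantages), 1e-12, None)
    return new_w / new_w.sum()

def update_policy(policy, advantage_estimates, learner=exp_weights):
    """One policy-improvement step: apply the chosen adversarial
    learner independently in every state, on estimated A(s, .)."""
    return np.stack([
        learner(policy[s], advantage_estimates[s])
        for s in range(policy.shape[0])
    ])

# Usage: 3 states, 2 actions; swapping `learner` swaps the strategy.
rng = np.random.default_rng(0)
pi = np.full((3, 2), 0.5)            # uniform initial policy
A_hat = rng.normal(size=(3, 2))      # estimated advantage functions
pi = update_policy(pi, A_hat, learner=poly_weights)
```

The point of the sketch is the substitution in `update_policy`: the per-state update is a plug-in component, so any no-regret adversarial learning strategy applied to estimated $Q$-values or advantage functions can take the place of exponential weights.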
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: Lihong Li
Submission Number: 3924