Your Policy Regularizer is Secretly an Adversary

Published: 10 Jul 2022, Last Modified: 30 Jun 2023
Accepted by TMLR
Authors that are also TMLR Expert Reviewers: ~Tim_Genewein1
Abstract: Policy regularization methods such as maximum entropy regularization are widely used in reinforcement learning to improve the robustness of a learned policy. In this paper, we unify and extend recent work showing that this robustness arises from hedging against worst-case perturbations of the reward function, which are chosen from a limited set by an implicit adversary. Using convex duality, we characterize the robust set of adversarial reward perturbations under KL- and $\alpha$-divergence regularization, which includes Shannon and Tsallis entropy regularization as special cases. Importantly, generalization guarantees can be given within this robust set. We provide a detailed discussion of the worst-case reward perturbations, and present intuitive empirical examples to illustrate this robustness and its relationship with generalization. Finally, we discuss how our analysis complements previous results on adversarial reward robustness and path consistency optimality conditions.
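For concreteness, a minimal single-step (bandit) sketch of the equivalence described in the abstract, for KL regularization only; the notation here ($\beta$ for the regularization strength, $\pi_0$ for the reference policy) is our own and the paper treats the sequential and $\alpha$-divergence cases more generally:

\[
  \max_{\pi}\; \mathbb{E}_{a\sim\pi}\big[r(a)\big] - \tfrac{1}{\beta}\, D_{\mathrm{KL}}\!\big(\pi \,\|\, \pi_0\big)
  \;=\; \max_{\pi}\; \min_{r' \in \mathcal{R}_r} \; \mathbb{E}_{a\sim\pi}\big[r'(a)\big],
  \qquad
  \mathcal{R}_r = \Big\{ r' \;:\; \mathbb{E}_{a\sim\pi_0}\big[\exp\big(\beta\,(r(a)-r'(a))\big)\big] \le 1 \Big\},
\]

where the inner minimization is attained at the worst-case perturbed reward $r'_{*}(a) = r(a) - \tfrac{1}{\beta}\log\tfrac{\pi(a)}{\pi_0(a)}$, so that the adversary's penalty exactly reproduces the pointwise KL regularizer.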
Certifications: Expert Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: All changes to the manuscript are marked in green in the revised PDF file. We have:
* Incorporated the concrete suggestions by 1MHa.
* Moved Table 2 to the beginning of the paper.
* Simplified Fig. 3.
* Updated the abstract to make it clear that the equivalence result is not novel.
After additional discussion, we have improved notation, added a notation paragraph, and included a description of the proof of Prop. 1.
Assigned Action Editor: ~Gergely_Neu1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 34