Robust Model-Based Reinforcement Learning with an Adversarial Auxiliary Model

Published: 07 Aug 2024, Last Modified: 07 Aug 2024 · RLSW 2024 Poster · CC BY 4.0
Confirmation: Yes
Keywords: robustness, rmdp, model-based, reinforcement learning
TL;DR: This paper introduces a novel reinforcement learning approach that enhances policy robustness by incorporating an adversarially updated, pessimistic model to estimate the worst-case MDP within a Kullback-Leibler uncertainty set.
Abstract: Reinforcement learning has demonstrated impressive performance in challenging problems such as robotics, board games, and classical arcade games. However, its real-world applications can be hindered by a lack of robustness and safety in the learned policies. More specifically, an RL agent trained in a certain Markov decision process (MDP) often struggles to perform well in nearly identical MDPs. To address this issue, we employ the framework of Robust MDPs (RMDPs) in a model-based setting and introduce a novel learned transition model. Our method incorporates an auxiliary pessimistic model, updated adversarially, to estimate the worst-case MDP within a Kullback-Leibler uncertainty set. Unlike several existing works, our approach does not impose additional conditions on the training environment, such as the need for a parametric simulator. To test the effectiveness of the proposed pessimistic model in enhancing policy robustness, we integrate it into a practical RL algorithm called Robust Model-Based Policy Optimization (RMBPO). Our experimental results indicate a notable improvement in policy robustness on high-dimensional MuJoCo control tasks, with the auxiliary model enhancing the performance of the learned policy in distorted MDPs. We further examine the learned deviation between the proposed auxiliary world model and the nominal model to study how pessimism is achieved. By learning a pessimistic world model and demonstrating its role in improving policy robustness, our research contributes towards making (model-based) RL more robust.
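
A minimal sketch of the idea described in the abstract (illustrative PyTorch code, not the authors' RMBPO implementation): an auxiliary Gaussian dynamics model is updated adversarially to predict low-value next states, while a KL penalty keeps it within an uncertainty set around the nominal model. The names `PESSIMISM_COEF`, `KL_COEF`, `GaussianDynamics`, and `value_fn` are hypothetical placeholders introduced here for illustration; the actual objective and uncertainty-set formulation are defined in the paper.

```python
# Hypothetical sketch of an adversarially updated pessimistic dynamics model.
# Not the authors' RMBPO code; all names and coefficients are illustrative.
import torch
import torch.nn as nn

PESSIMISM_COEF = 1.0   # weight on the adversarial (value-minimizing) term
KL_COEF = 10.0         # weight on the KL penalty toward the nominal model


class GaussianDynamics(nn.Module):
    """Diagonal-Gaussian next-state model p(s' | s, a)."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, obs_dim)
        self.log_std = nn.Linear(hidden, obs_dim)

    def forward(self, obs, act):
        h = self.net(torch.cat([obs, act], dim=-1))
        return self.mean(h), self.log_std(h).clamp(-5.0, 2.0)


def diag_gaussian_kl(mu_p, log_std_p, mu_q, log_std_q):
    """KL( N(mu_p, std_p) || N(mu_q, std_q) ) for diagonal Gaussians."""
    var_p, var_q = (2 * log_std_p).exp(), (2 * log_std_q).exp()
    return (log_std_q - log_std_p
            + (var_p + (mu_p - mu_q) ** 2) / (2 * var_q) - 0.5).sum(-1)


def pessimistic_model_loss(pess_model, nominal_model, value_fn, obs, act):
    """Adversarial objective: predict low-value next states while staying
    KL-close to the nominal (maximum-likelihood) dynamics model."""
    mu_p, log_std_p = pess_model(obs, act)
    with torch.no_grad():  # the nominal model is not updated by this loss
        mu_q, log_std_q = nominal_model(obs, act)

    # Reparameterized sample from the pessimistic model so gradients flow.
    next_obs = mu_p + log_std_p.exp() * torch.randn_like(mu_p)

    adversarial_term = value_fn(next_obs).mean()  # push toward low-value outcomes
    kl_term = diag_gaussian_kl(mu_p, log_std_p, mu_q, log_std_q).mean()
    return PESSIMISM_COEF * adversarial_term + KL_COEF * kl_term
```

In a full RMBPO-style loop, the nominal model would presumably be fit by maximum likelihood on real transitions, the loss above minimized by the pessimistic model's optimizer, and the policy trained against the pessimistic model rather than the nominal one; those details are given in the paper, not in this sketch.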
Submission Number: 9