Abstract: The performance of reinforcement learning (RL) in real-world applications can be hindered by a lack of robustness and safety in the learned policies. More specifically, an RL agent trained in a certain Markov decision process (MDP) often struggles to perform well in MDPs that deviate slightly from it. To address this issue, we employ the framework of Robust MDPs (RMDPs) in a model-based setting and introduce a second learned transition model. Our method specifically incorporates an auxiliary pessimistic model, updated adversarially, to estimate the worst-case MDP within a Kullback-Leibler uncertainty set. In contrast to several existing works, our method does not impose any additional conditions on the training environment, such as the need for a parametric simulator. To test the effectiveness of the proposed pessimistic model in enhancing policy robustness, we integrate it into a practical RL algorithm, called Robust Model-Based Policy Optimization (RMBPO). Our experimental results indicate a notable improvement in policy robustness on high-dimensional control tasks, with the auxiliary model enhancing the performance of the learned policy in distorted MDPs while maintaining the data efficiency of the base algorithm. We also compare our methodology against various other robust RL approaches. We further examine how pessimism is achieved by exploring the learned deviation between the proposed auxiliary world model and the nominal model. By introducing a pessimistic world model and demonstrating its role in improving policy robustness, our research presents a general methodology for robust reinforcement learning in a model-based setting.
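As a rough illustration of the adversarial update described in the abstract, below is a minimal, hedged JAX (Distrax) sketch of a pessimistic objective for the auxiliary model: it pushes sampled next states toward low predicted value while penalizing the KL divergence to a frozen nominal transition model. The names (`apply_fn`, `value_fn`, `kl_coef`), the soft KL penalty, and the diagonal-Gaussian parameterization are assumptions for illustration, not the paper's exact implementation.

```python
# Hedged sketch (not the authors' code): a soft-penalty version of the adversarial
# update for the auxiliary pessimistic model. The paper constrains the model to a
# KL uncertainty set; here that is approximated with a KL penalty toward a frozen
# nominal model.
import jax
import jax.numpy as jnp
import distrax


def pessimistic_loss(aux_params, nominal_params, apply_fn, value_fn, s, a, kl_coef, rng):
    """Push the auxiliary model toward low-value next states while staying close
    (in KL) to the nominal learned transition model."""
    mu_aux, log_std_aux = apply_fn(aux_params, s, a)      # auxiliary (pessimistic) model
    mu_nom, log_std_nom = apply_fn(nominal_params, s, a)  # nominal model (held fixed)

    p_aux = distrax.MultivariateNormalDiag(mu_aux, jnp.exp(log_std_aux))
    p_nom = distrax.MultivariateNormalDiag(mu_nom, jnp.exp(log_std_nom))

    # Reparameterized sample of next states from the auxiliary model.
    s_next = p_aux.sample(seed=rng)

    # Adversarial objective: minimize predicted value, penalize deviation from nominal.
    value_term = jnp.mean(value_fn(s_next))
    kl_term = jnp.mean(p_aux.kl_divergence(p_nom))
    return value_term + kl_coef * kl_term
```

A hard KL constraint or a trust-region step could replace the soft penalty; the sketch only conveys the general adversarial direction of the update.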
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=atbGktz250
Changes Since Last Submission: As requested by the editor and the reviewers, we have focused on enhancing the empirical results. More specifically, we were asked to compare with a larger number of robust RL algorithms. We have expanded this significantly from a single robust RL algorithm (RNAC) to five algorithms (RNAC, EWOK, QARL, RARL and MixedNE-LD). Furthermore, we have increased the number of test environments from three to five (adding the DeepMind Control Suite tasks Walker Walk and Walker Run). We believe that this addresses the main reason for rejection. These results can be found in Figure 2, Figure 3 and Figure 4 in the main body, and in Appendix A (A1 and A3 are new).
In addition to the empirical results, we have focused on removing speculative claims. We have removed all claims that we are optimizing the real uncertainty set and clarify that we use the KL divergence towards the approximate model instead of the environment as a practical step without strong theoretical guarantees (Sec 3.1). We clarify that we rely on empirical results to demonstrate that our approach improves robustness (Sec 3.1).
Finally, as specifically requested by reviewer TzfP, we have worked on fixing notational errors in the paper. Notably, we have clarified throughout the paper where we mean an uncertainty set $\mathcal{P}^{s,a}$ for a single state-action pair, and where we mean the global uncertainty set $\mathcal{P}$, defined as the Cartesian product of the individual, independent marginal uncertainty sets $\mathcal{P}^{s,a}$.
We have added code snippets that show how the auxiliary model is created and trained in JAX (Flax + Distrax) (Appendix G).
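For readers without access to Appendix G, the following is a minimal, self-contained sketch of what such a Flax + Distrax setup could look like: a diagonal-Gaussian auxiliary transition model and one adversarial optax update step. The architecture, placeholder dimensions, optimizer settings, stand-in critic, and the reuse of `pessimistic_loss` from the sketch above are all assumptions, not the snippets from the appendix.

```python
# Hedged sketch (assumptions, not a copy of Appendix G): a Flax module for the
# auxiliary Gaussian transition model and one adversarial optax update using the
# `pessimistic_loss` sketched earlier.
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax


class AuxiliaryModel(nn.Module):
    """Diagonal-Gaussian next-state model p(s' | s, a), predicting mean and log-std."""
    state_dim: int
    hidden: int = 256

    @nn.compact
    def __call__(self, s, a):
        x = jnp.concatenate([s, a], axis=-1)
        x = nn.relu(nn.Dense(self.hidden)(x))
        x = nn.relu(nn.Dense(self.hidden)(x))
        mu = nn.Dense(self.state_dim)(x)
        log_std = jnp.clip(nn.Dense(self.state_dim)(x), -10.0, 2.0)
        return mu, log_std


def value_fn(s_next):
    # Stand-in for the learned critic V(s'); only here so the sketch is self-contained.
    return -jnp.sum(s_next ** 2, axis=-1)


state_dim, action_dim = 17, 6  # placeholder sizes
model = AuxiliaryModel(state_dim=state_dim)
rng = jax.random.PRNGKey(0)
params = model.init(rng, jnp.zeros((1, state_dim)), jnp.zeros((1, action_dim)))
optimizer = optax.adam(3e-4)
opt_state = optimizer.init(params)


@jax.jit
def update_aux(params, opt_state, nominal_params, s, a, rng):
    """One adversarial gradient step on the auxiliary (pessimistic) model.
    `nominal_params` is assumed to parameterize a nominal model of the same architecture."""
    def loss_fn(p):
        return pessimistic_loss(p, nominal_params, model.apply, value_fn, s, a,
                                kl_coef=1.0, rng=rng)
    loss, grads = jax.value_and_grad(loss_fn)(params)
    updates, opt_state = optimizer.update(grads, opt_state, params)
    return optax.apply_updates(params, updates), opt_state, loss
```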
Assigned Action Editor: ~Marcello_Restelli1
Submission Number: 5795