Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization

Published: 30 Aug 2023, Last Modified: 16 Oct 2023, CoRL 2023 Poster
Keywords: Reinforcement Learning, Robustness, Continuous Control, Robotics
Abstract: Reinforcement learning (RL) is recognized as lacking generalization and robustness under environmental perturbations, which excessively restricts its application to real-world robotics. Prior work has shown that adding regularization to the value function is equivalent to learning a robust policy under uncertain transitions. Although this regularization-robustness transformation is appealing for its simplicity and efficiency, such results are still missing for continuous control tasks. In this paper, we propose a new regularizer named the $\textbf{U}$ncertainty $\textbf{S}$et $\textbf{R}$egularizer (USR), which formulates the uncertainty set in the parametric space of the transition function. To handle unknown uncertainty sets, we further propose a novel adversarial approach that generates them based on the value function. We evaluate USR on the Real-world Reinforcement Learning (RWRL) benchmark and the Unitree A1 robot, demonstrating improved robustness in perturbed testing environments and sim-to-real scenarios.
Student First Author: yes
Supplementary Material: zip
Code: github.com/mikezhang95/rrl_usr
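To make the abstract's core idea more concrete, here is a rough, hypothetical sketch of value-aware uncertainty-set regularization: perturbing the parameters of a learned transition model inside a small set and penalizing the value target by the worst case over that set. This is an illustrative interpretation only, not the authors' implementation (see the linked repository); the `Dynamics` module, the `robust_value_target` helper, the L2 ball, the radius `eps`, and the first-order worst-case approximation are all assumptions introduced here.

```python
# Hypothetical sketch of uncertainty-set regularization on the parameters of a
# learned transition model (not the authors' code from github.com/mikezhang95/rrl_usr).
import torch
import torch.nn as nn

class Dynamics(nn.Module):
    """Toy learned transition model s' = f_theta(s, a)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def robust_value_target(value_fn, dynamics, s, a, r, gamma=0.99, eps=0.01):
    """Bellman target penalized by a first-order worst case over an assumed
    L2 uncertainty set of radius `eps` around the dynamics parameters.

    For an L2 ball, the value-minimizing parameter shift is -eps * g / ||g||,
    where g is the gradient of V(s') w.r.t. the transition parameters; to
    first order this lowers the target by eps * ||g|| (batch-aggregated here
    for simplicity). `eps` and this approximation are illustrative choices.
    """
    s_next = dynamics(s, a)                  # nominal next state under f_theta
    v_next = value_fn(s_next).squeeze(-1)    # V(s') per sample

    # Gradient of the (summed) next-state value w.r.t. the transition parameters.
    grads = torch.autograd.grad(v_next.sum(), list(dynamics.parameters()))
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads))

    penalty = eps * grad_norm                # worst-case value drop over the set
    target = (r + gamma * (v_next - penalty)).detach()
    return target

# Toy usage (shapes only): a batch of 32 transitions with 6-D states, 2-D actions.
state_dim, action_dim = 6, 2
value_fn = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
dynamics = Dynamics(state_dim, action_dim)
s = torch.randn(32, state_dim)
a = torch.randn(32, action_dim)
r = torch.randn(32)
target = robust_value_target(value_fn, dynamics, s, a, r)  # shape: (32,)
```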