Accelerating Residual Reinforcement Learning with Uncertainty Estimation

Published: 01 Jun 2025, Last Modified: 23 Jun 2025
Venue: OOD Workshop @ RSS 2025
License: CC BY 4.0
Keywords: Residual Reinforcement Learning, Online Reinforcement Learning
Abstract: Residual Reinforcement Learning (RL) is a popular approach for adapting pretrained policies by learning a lightweight residual policy that provides corrective actions. While Residual RL is more sample-efficient than finetuning the entire base policy, existing methods struggle with sparse rewards and are designed for deterministic base policies. We propose two improvements to Residual RL that further enhance its sample efficiency and make it suitable for stochastic base policies. First, we leverage uncertainty estimates of the base policy to focus exploration on regions in which the base policy is not confident. Second, we propose a simple modification to off-policy residual learning that allows the residual policy to observe base actions and thus better handle stochastic base policies. We evaluate our method with both Gaussian and diffusion-based stochastic base policies on tasks from Robosuite and D4RL, and compare against state-of-the-art finetuning methods, demo-augmented RL methods, and other residual RL methods. Our algorithm significantly outperforms existing baselines in a variety of difficult manipulation environments.
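The abstract describes two mechanisms: gating residual exploration by the base policy's uncertainty, and conditioning the residual policy on the sampled base action. The sketch below illustrates one plausible way to realize both ideas; it is not the authors' implementation, and all names (ResidualActor, base_policy_ensemble, unc_threshold, scale) and the ensemble-spread uncertainty estimate are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): uncertainty-gated residual
# actions on top of a stochastic base policy.
import torch
import torch.nn as nn


class ResidualActor(nn.Module):
    """Residual policy that also observes the executed base action."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded corrective action
        )

    def forward(self, obs: torch.Tensor, base_action: torch.Tensor) -> torch.Tensor:
        # Concatenating the base action lets the residual correct a stochastic base policy.
        return self.net(torch.cat([obs, base_action], dim=-1))


def act(obs, base_policy_ensemble, residual_actor, unc_threshold=0.1, scale=0.5):
    """Combine base and residual actions, gating exploration by base-policy uncertainty.

    base_policy_ensemble: callable returning sampled base actions of shape
    (n_members, act_dim); their spread serves as a simple uncertainty proxy
    (the paper may use a different estimator).
    """
    base_samples = base_policy_ensemble(obs)        # (n_members, act_dim)
    base_action = base_samples.mean(dim=0)          # base action to execute
    uncertainty = base_samples.std(dim=0).mean()    # scalar uncertainty proxy

    residual = residual_actor(obs, base_action)
    # Apply (and explore with) the residual only where the base policy is uncertain.
    gate = (uncertainty > unc_threshold).float()
    return base_action + gate * scale * residual
```

As a design note, gating by uncertainty keeps the combined policy close to the base policy in regions it already handles well, which is one way to obtain the sample-efficiency benefit the abstract claims.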
Supplementary Material: zip
Submission Number: 25