- Keywords: Deployment-Constrained Reinforcement Learning, Deep Reinforcement Learning, Model-based Reinforcement Learning
- Abstract: In many contemporary applications such as healthcare, finance, robotics, and recommendation systems, continuously deploying new policies for data collection and online learning is either costly or impractical. We consider a setting that lies between pure offline reinforcement learning (RL) and pure online RL, called deployment-constrained RL, in which the number of policy deployments for data sampling is limited. To solve this challenging task, we propose a novel algorithmic learning framework called Model-based Uncertainty Regularized batch Optimization (MURO). Our framework discovers novel, high-quality samples at each deployment, enabling efficient data collection. During each offline training session, we bootstrap the policy update by quantifying the uncertainty in the collected data. In high-support (low-uncertainty) regions, we encourage aggressive policy updates; in low-support (high-uncertainty) regions, where the policy bootstraps into out-of-distribution states, we downweight the update by our estimated uncertainty. Experimental results show that MURO achieves state-of-the-art performance in the deployment-constrained RL setting.
- One-sentence Summary: We propose MURO (Model-based Uncertainty Regularized batch Optimization), a novel framework for deployment-constrained reinforcement learning.
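The uncertainty-weighted update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes uncertainty is estimated via disagreement among an ensemble of learned dynamics models, and all function names (`ensemble_uncertainty`, `uncertainty_weighted_update`) and the `beta` temperature are illustrative choices.

```python
import numpy as np

def ensemble_uncertainty(models, state, action):
    """Proxy for epistemic uncertainty: disagreement (std) among
    an ensemble of dynamics models' next-state predictions.
    High disagreement signals a low-support, out-of-distribution region."""
    preds = np.stack([m(state, action) for m in models])
    return preds.std(axis=0).mean()

def uncertainty_weighted_update(loss, uncertainty, beta=1.0):
    """Scale the policy-update loss by an uncertainty-dependent weight:
    near 1 in high-support (low-uncertainty) regions, allowing aggressive
    updates, and shrinking toward 0 as uncertainty grows."""
    weight = 1.0 / (1.0 + beta * uncertainty)
    return weight * loss
```

In this sketch, the weight `1 / (1 + beta * uncertainty)` is one simple monotone choice; any decreasing function of the uncertainty estimate would realize the same downweighting idea.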