Iterative Batch Reinforcement Learning via Safe Diversified Model-based Policy Search

Published: 22 Oct 2024, Last Modified: 06 Nov 2024
CoRL 2024 Workshop SAFE-ROL Poster
License: CC BY 4.0
Keywords: reinforcement learning, offline reinforcement learning, diversity, safety, industrial control
TL;DR: In iterative offline RL, combining safety and diversity is beneficial.
Abstract: Batch reinforcement learning enables policy learning without direct interaction with the environment during training, relying exclusively on previously collected sets of interactions. The approach is therefore well-suited for high-risk, cost-intensive applications such as industrial control. Learned policies are commonly restricted to act similarly to the behavior observed in the batch. In a real-world scenario, learned policies are deployed on the industrial system, inevitably leading to the collection of new data that can subsequently be added to the existing dataset. Learning and deployment can thus take place multiple times over the lifespan of a system. In this work, we propose to exploit this iterative nature of applying offline reinforcement learning to guide learned policies towards efficient and informative data collection during deployment, leading to continuous improvement of the learned policies while remaining within the support of the collected data. We present an algorithmic methodology for iterative batch reinforcement learning based on ensemble model-based policy search, augmented with a safety and, importantly, a diversity criterion.
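The abstract describes ensemble model-based policy search with a safety and a diversity criterion. The following is a minimal toy sketch of that idea, not the paper's actual algorithm: all names, the 1-D dynamics, the reward, and the specific penalty/bonus forms (ensemble disagreement as an out-of-support proxy for safety, distance to batch states as a diversity bonus) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D system: next state = s + a + noise (stand-in for the real plant).
def true_step(s, a):
    return s + a + 0.01 * rng.standard_normal()

# Collect a toy batch of (state, action, next_state) transitions.
batch, s = [], 0.0
for _ in range(200):
    a = rng.uniform(-1.0, 1.0)
    s2 = true_step(s, a)
    batch.append((s, a, s2))
    s = s2
data = np.array(batch)

def fit_ensemble(data, n_models=5):
    """Fit an ensemble of linear dynamics models on bootstrap resamples."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(data), len(data))
        X = np.column_stack([data[idx, 0], data[idx, 1], np.ones(len(idx))])
        y = data[idx, 2]
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        models.append(w)
    return models

def ensemble_predict(models, s, a):
    """Mean prediction and disagreement (a crude epistemic-uncertainty proxy)."""
    preds = [w[0] * s + w[1] * a + w[2] for w in models]
    return float(np.mean(preds)), float(np.std(preds))

def score_policy(models, data, theta, horizon=10, lam_safe=10.0, lam_div=1.0):
    """Roll a constant-action policy through the ensemble and score it with
    reward + diversity bonus - safety penalty (all forms are assumptions)."""
    s, total = 0.0, 0.0
    for _ in range(horizon):
        mu, sigma = ensemble_predict(models, s, theta)
        reward = -abs(mu - 1.0)                    # hypothetical task: reach s = 1
        novelty = float(np.min(np.abs(data[:, 0] - mu)))  # distance to batch states
        total += reward + lam_div * novelty - lam_safe * max(0.0, sigma - 0.05)
        s = mu
    return total

# Simple policy search over candidate constant actions.
models = fit_ensemble(data)
candidates = np.linspace(-1.0, 1.0, 21)
best = max(candidates, key=lambda th: score_policy(models, data, th))
```

In an iterative setting, the selected policy would be deployed, the newly observed transitions appended to `data`, and the ensemble refit, so that each round both improves the policy and enlarges the supported region the safety term keeps rollouts inside.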
Submission Number: 21