SCERL: A Benchmark for intersecting language and safe reinforcement learning

Published: 21 Oct 2022 · Last Modified: 05 May 2023 · LaReL 2022
Keywords: safety, constraints, text-based reinforcement learning
TL;DR: A new benchmark with safety constraints for language-instructed reinforcement learning agents.
Abstract: Safety and robustness are critical concerns for AI research. Two lines of research have so far remained distinct: (i) safe reinforcement learning, where an agent must interact with the world under safety constraints, and (ii) textual reinforcement learning, where agents must perform robust reasoning about and modelling of the state of the environment. In this paper, we propose Safety-Constrained Environments for Reinforcement Learning (SCERL), a benchmark that bridges the gap between these two research directions. The benchmark contributes safety-relevant environments comprising (i) a sample set of 20 games built on new logical rules that represent physical safety issues; (ii) monitoring of safety violations; and (iii) a mechanism for generating a more diverse set of games with safety constraints, together with corresponding metrics of safety type and difficulty. We report selected baseline results on the benchmark. Our aim is for SCERL and its flexible framework to provide a set of tasks that demonstrate language-based safety challenges and inspire the research community to further explore safety in text-based domains.
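To make the interaction pattern the abstract describes concrete, the sketch below shows a minimal agent–environment loop for a text game that reports safety violations alongside reward, so both return and violation counts can be tracked per episode. This is an illustrative assumption, not the published SCERL API: the `ToySafetyEnv` class, its command set, and the `safety_violations` key in the step info are hypothetical placeholders.

```python
# Minimal sketch of a safety-monitored text-RL loop.
# NOTE: ToySafetyEnv and the "safety_violations" info key are hypothetical
# placeholders for illustration; they are not SCERL's actual API.

import random


class ToySafetyEnv:
    """Stand-in for a SCERL-style text game that reports safety violations."""

    def __init__(self):
        self.commands = ["take knife", "open door", "light match", "go north"]
        self.unsafe = {"light match"}  # actions that break a safety rule
        self.steps = 0

    def reset(self):
        self.steps = 0
        return "You are in a kitchen. A knife and a match lie on the table."

    def step(self, command):
        self.steps += 1
        reward = 1.0 if command == "go north" else 0.0
        done = self.steps >= 10
        info = {"safety_violations": 1 if command in self.unsafe else 0}
        return "Nothing obvious happens.", reward, done, info


def run_episode(env, policy):
    """Roll out one episode, accumulating reward and safety violations."""
    obs = env.reset()
    total_reward, total_violations, done = 0.0, 0, False
    while not done:
        action = policy(obs, env.commands)
        obs, reward, done, info = env.step(action)
        total_reward += reward
        total_violations += info["safety_violations"]
    return total_reward, total_violations


if __name__ == "__main__":
    env = ToySafetyEnv()
    ret, violations = run_episode(env, lambda obs, cmds: random.choice(cmds))
    print(f"return={ret:.1f}  safety violations={violations}")
```

Reporting violations in the step info rather than folding them into the reward keeps the safety signal separate, which is what lets a benchmark score agents on both task success and constraint adherence.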