Finding Safe Zones of Markov Decision Processes Policies

Published: 21 Nov 2022, Last Modified: 05 May 2023, TSRML 2022
Keywords: Safety, Reinforcement Learning, MDP, Markov chain
TL;DR: A new method for finding Safe Zones of Markov Decision Process policies, provably a 2-approximation
Abstract: Safety is essential for gaining trust in a Markov Decision Process's policies. We suggest a new method to improve safety, using Safe Zones. Given a policy, we define its Safe Zone as a subset of states such that most of the policy's trajectories are confined to this subset. A trajectory not entirely inside the Safe Zone is potentially unsafe and should be examined. The quality of a Safe Zone is parameterized by the number of states and the escape probability, i.e., the probability that a random trajectory will leave the subset. Safe Zones are especially interesting when they have a small number of states and a low escape probability. We study the complexity of finding optimal Safe Zones and show that, in general, the problem is computationally hard. For this reason, we concentrate on computing approximate Safe Zones. Our main result is a bi-criteria approximation algorithm that achieves a factor of almost 2 for both the escape probability and the Safe Zone size, using a polynomial-size sample.
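To make the escape-probability definition concrete, here is a minimal Monte Carlo sketch (not the paper's algorithm): given a candidate Safe Zone, it estimates the probability that a random trajectory of a fixed horizon leaves the subset at least once. The transition structure, horizon, and helper name are illustrative assumptions.

```python
import random

def estimate_escape_probability(P, start, safe_zone, horizon, n_samples, seed=0):
    """Monte Carlo estimate of the escape probability: the fraction of
    sampled trajectories of length `horizon` that leave `safe_zone`.

    P: dict mapping state -> list of (next_state, probability) pairs,
       i.e. the Markov chain induced by a fixed policy (hypothetical toy format).
    """
    rng = random.Random(seed)
    escapes = 0
    for _ in range(n_samples):
        s = start
        for _ in range(horizon):
            # Sample the next state from the transition distribution of s.
            r = rng.random()
            acc = 0.0
            for nxt, p in P[s]:
                acc += p
                if r < acc:
                    s = nxt
                    break
            if s not in safe_zone:
                escapes += 1  # trajectory left the Safe Zone; count it once
                break
    return escapes / n_samples

# Toy 3-state chain: state 2 is absorbing and rarely reached.
P = {
    0: [(0, 0.9), (1, 0.1)],
    1: [(0, 0.8), (1, 0.1), (2, 0.1)],
    2: [(2, 1.0)],
}
# Escape probability of the candidate Safe Zone {0, 1}.
print(estimate_escape_probability(P, 0, {0, 1}, horizon=20, n_samples=10_000))
```

A good Safe Zone, in the paper's sense, keeps both this estimate and the subset size small; the abstract's bi-criteria algorithm approximates both objectives simultaneously.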