Counterexample-Guided Repair of Reinforcement Learning Systems Using Safety Critics

Published: 01 Jan 2024, Last Modified: 18 Dec 2024, CoRR 2024, CC BY-SA 4.0
Abstract: Naively trained Deep Reinforcement Learning agents may fail to satisfy vital safety constraints. To avoid costly retraining, it is desirable to repair a previously trained reinforcement learning agent so that it no longer exhibits unsafe behaviour. We devise a counterexample-guided repair algorithm for reinforcement learning systems that leverages safety critics. The algorithm jointly repairs a reinforcement learning agent and a safety critic using gradient-based constrained optimisation.
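
The abstract only outlines the approach, so the following is a minimal, hypothetical sketch of what a counterexample-guided repair loop with a safety critic could look like. All names here (Policy, SafetyCritic, find_counterexamples, repair) and the use of a simple penalty method to handle the constraint are assumptions for illustration, not the paper's actual interfaces or algorithm.

```python
# Hypothetical sketch of counterexample-guided repair with a safety critic.
# The constrained problem is handled with a simple penalty method as a
# stand-in for the gradient-based constrained optimisation named in the
# abstract; a real system would also fit the critic to observed safety
# outcomes so it cannot trivially declare every action safe.
import torch
import torch.nn as nn


class Policy(nn.Module):
    def __init__(self, obs_dim=4, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim)
        )

    def forward(self, obs):
        return self.net(obs)


class SafetyCritic(nn.Module):
    """Predicts a safety value for a state-action pair (>= 0 means 'safe' here)."""

    def __init__(self, obs_dim=4, act_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1)
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def find_counterexamples(policy, n=32, obs_dim=4):
    """Placeholder for a verifier/falsifier returning states on which the
    policy behaves unsafely. Here we just sample random states; a real
    system would use formal verification or adversarial search."""
    return torch.randn(n, obs_dim)


def repair(policy, critic, rounds=5, penalty=10.0, lr=1e-3):
    params = list(policy.parameters()) + list(critic.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(rounds):
        cex = find_counterexamples(policy)
        if cex.numel() == 0:
            break  # no counterexamples left: repair succeeded
        for _ in range(100):
            act = policy(cex)
            safety = critic(cex, act)
            # Constraint: the safety critic must judge the repaired actions
            # safe on all counterexamples; violations incur a penalty.
            violation = torch.relu(-safety).mean()
            # Objective: stay close to the original behaviour (a small-action
            # regulariser serves as a stand-in for a behaviour-cloning term).
            objective = act.pow(2).mean()
            loss = objective + penalty * violation
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy, critic


if __name__ == "__main__":
    policy, critic = repair(Policy(), SafetyCritic())
```

The loop alternates between searching for counterexamples and jointly updating the agent and the safety critic until no further counterexamples are found, which mirrors the counterexample-guided structure described in the abstract.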