Keywords: Constrained MDP, Constraint Inference, Cost Inference, Credit Assignment
TL;DR: We learn decomposed per-step safety scores from trajectories with binary safety labels using our constraint model, and use the learned safety constraint model to enable safe RL.
Abstract: In safe reinforcement learning (RL), auxiliary safety costs are used to align the agent with safe decision making. In practice, safety constraints, including cost functions and budgets, are unknown or hard to specify, since specifying them requires anticipating all possible unsafe behaviors. We therefore address a general setting where the true safety definition is unknown and must be learned from sparsely labeled data. Our key contributions are: $\textit{first}$, we design a safety model that performs $\textit{credit assignment}$ to estimate each decision step's impact on overall safety, using a dataset of diverse trajectories and their corresponding $\textit{binary}$ safety labels (i.e., whether each trajectory is safe or unsafe). $\textit{Second}$, we present the architecture of our safety model and demonstrate its ability to learn a separate safety score for each timestep. $\textit{Third}$, we reformulate the safe RL problem using the proposed safety model and derive an effective algorithm to optimize a safe yet rewarding policy. $\textit{Finally}$, our empirical results corroborate our findings, showing that the approach is effective in satisfying the unknown safety definition and scales to various continuous control tasks.
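The submission does not include the model's code here, but the credit-assignment idea the abstract describes can be sketched minimally: a network assigns a score to every (state, action) step, the per-step scores are aggregated into a trajectory-level prediction, and the model is trained against the binary safety label so that gradients distribute credit across timesteps. The sketch below is an illustrative assumption, not the authors' architecture; in particular the sum-then-sigmoid aggregation and all names (`StepSafetyModel`, `trajectory_loss`) are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code): a per-step safety scorer
# trained only from trajectory-level binary labels.
import torch
import torch.nn as nn

class StepSafetyModel(nn.Module):
    """Assigns a scalar safety score to each (state, action) step."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        # obs: (T, obs_dim), act: (T, act_dim) -> per-step scores of shape (T,)
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def trajectory_loss(model, obs, act, label):
    """BCE between aggregated per-step scores and the binary trajectory label.

    Summing the per-step scores into one logit means the gradient of the
    trajectory-level loss spreads across timesteps -- one simple form of
    the credit assignment the abstract describes.
    """
    step_scores = model(obs, act)   # (T,)
    traj_logit = step_scores.sum()  # trajectory-level logit (assumed aggregator)
    return nn.functional.binary_cross_entropy_with_logits(
        traj_logit, torch.tensor(float(label)))

# Usage on one labeled trajectory (here with random stand-in data):
model = StepSafetyModel(obs_dim=8, act_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
obs, act = torch.randn(100, 8), torch.randn(100, 2)
loss = trajectory_loss(model, obs, act, label=1)  # 1 = unsafe (convention assumed)
opt.zero_grad(); loss.backward(); opt.step()
```

Under this convention, the learned per-step scores can then serve as a cost function inside a constrained RL objective; the aggregation choice (sum vs. max, for example) determines how blame concentrates on individual steps.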
Submission Type: Long Paper (9 Pages)
Archival Option: This is a non-archival submission
Presentation Venue Preference: ICLR 2025
Submission Number: 75