Keywords: Reinforcement Learning, Inverse Constrained Reinforcement Learning, Healthcare
Abstract: Reinforcement Learning (RL) applied in healthcare can lead to unsafe medical decisions and treatments, such as excessive dosages or abrupt changes, often because agents overlook common-sense constraints. Consequently, Constrained Reinforcement Learning (CRL) is a natural choice for safe decision making. However, specifying the exact cost function is inherently difficult in healthcare. Recent Inverse Constrained Reinforcement Learning (ICRL) is a promising approach that infers constraints from expert demonstrations. However, ICRL algorithms model Markovian decisions in an interactive environment, which does not align with the practical requirements of a decision-making system in healthcare, where decisions rely on historical treatments recorded in an offline dataset. To tackle these issues, we propose the Constraint Transformer (CT). Specifically, 1) we utilize a causal attention mechanism to incorporate historical decisions and observations into constraint modeling, and employ a non-Markovian layer that weights constraints to capture critical states; 2) we use a generative world model to perform exploratory data augmentation, enabling offline RL methods to generate unsafe decision sequences. In multiple medical scenarios, empirical results demonstrate that CT can capture unsafe states and achieve policies that approximate lower mortality rates, reducing the probability of unsafe behaviors.
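The abstract's first component, causal attention over a treatment history feeding a non-Markovian constraint layer, can be illustrated with a minimal sketch. This is an assumption-laden toy in plain numpy, not the authors' architecture: `constraint_weights`, the sigmoid output head, and the single-head attention are hypothetical simplifications showing how step `t`'s constraint weight can depend on the whole history up to `t` but never on future steps.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_attention(H, Wq, Wk, Wv):
    """Single-head causal self-attention over a history H (T x d) of
    state-action embeddings: step t attends only to steps <= t."""
    T, d = H.shape
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(d)
    future = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly-future positions
    scores[future] = -np.inf                            # masked out before softmax
    return softmax(scores, axis=-1) @ V

def constraint_weights(H, Wq, Wk, Wv, w_out):
    """Hypothetical non-Markovian constraint layer: map the attended
    history to a per-step weight in (0, 1) via a sigmoid head, so
    critical states can receive larger constraint weight."""
    A = causal_attention(H, Wq, Wk, Wv)
    logits = A @ w_out
    return 1.0 / (1.0 + np.exp(-logits))

# Toy history of T=5 steps with d=8 embedding dims, random parameters.
rng = np.random.default_rng(0)
T, d = 5, 8
H = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
w_out = rng.normal(size=d)
w = constraint_weights(H, Wq, Wk, Wv, w_out)
print(w.shape)  # one constraint weight per history step
```

A quick sanity check of the causal mask: perturbing only the final history step leaves the constraint weights of all earlier steps unchanged, which is the non-Markovian-but-causal property the abstract describes.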
Supplementary Material: zip
Primary Area: Machine learning for healthcare
Submission Number: 6983