Constrained meta-reinforcement learning for adaptable safety guarantee with differentiable convex programming
Abstract: Despite remarkable achievements in artificial intelligence, the
deployability of learning-enabled systems in high-stakes real-world environments still faces persistent challenges. For example, in safety-critical domains like autonomous driving,
robotic manipulation, and healthcare, it is crucial not only
to achieve high performance but also to comply with given
constraints. Furthermore, adaptability becomes paramount in
non-stationary domains, where environmental parameters are
subject to change. While safety and adaptability are recognized as key qualities for the new generation of AI, current
approaches have not demonstrated effective adaptable performance in constrained settings. Hence, this paper breaks
new ground by studying the unique challenge of ensuring
safety in non-stationary environments, solving constrained
problems through the lens of meta-learning
(learning-to-learn).
(learning-to-learn). While unconstrained meta-learning already encounters complexities in end-to-end differentiation
of the loss due to the bi-level nature, its constrained counterpart introduces an additional layer of diffculty, since the
constraints imposed on task-level updates complicate the differentiation process. To address the issue, we frst employ
successive convex-constrained policy updates across multiple tasks with differentiable convex programming, which allows meta-learning in constrained scenarios by enabling endto-end differentiation. This approach empowers the agent
to rapidly adapt to new tasks under non-stationarity while
ensuring compliance with safety constraints. We also provide a theoretical analysis demonstrating guaranteed monotonic improvement for our approach, justifying our algorithmic designs. Extensive simulations across diverse environments provide empirical validation, with significant improvement over established benchmarks.
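The core technical idea stated in the abstract, differentiating a meta-objective through a convex-constrained task-level policy update, can be illustrated with a minimal sketch. The snippet below is an illustrative assumption rather than the authors' implementation: it poses a linearized, trust-region-style constrained update as a small quadratic program, wraps it with cvxpylayers so it is differentiable, and backpropagates a toy meta-objective through the update. The dimension `dim`, step size `lr`, budget value, and quadratic surrogates are hypothetical placeholders.

```python
# Minimal sketch (not the paper's released code): a task-level policy update
# posed as a convex program and made differentiable with cvxpylayers, so a
# meta-objective can backpropagate through the constrained update.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

dim, lr = 4, 0.1  # hypothetical parameter dimension and step size

# Convex subproblem: maximize the linearized reward improvement, penalize the
# step size, and keep the linearized constraint cost within the remaining budget.
step = cp.Variable(dim)
g = cp.Parameter(dim)       # gradient of the reward surrogate at theta
b = cp.Parameter(dim)       # gradient of the constraint-cost surrogate at theta
budget = cp.Parameter(1)    # remaining constraint budget at theta
prob = cp.Problem(
    cp.Minimize(-g @ step + cp.sum_squares(step) / (2 * lr)),
    [b @ step <= cp.sum(budget)],
)
update_layer = CvxpyLayer(prob, parameters=[g, b, budget], variables=[step])

# End-to-end differentiation through one constrained task-level update.
theta = torch.zeros(dim, requires_grad=True)             # meta-parameters
g_val = -(theta - torch.ones(dim))                       # toy reward gradient
b_val = theta + 0.5                                      # toy constraint gradient
(d_step,) = update_layer(g_val, b_val, torch.tensor([0.2]))
theta_task = theta + d_step                              # adapted task parameters
meta_loss = ((theta_task - torch.ones(dim)) ** 2).sum()  # toy meta-objective
meta_loss.backward()                                     # gradients reach theta
print(theta.grad)
```

In a full constrained meta-RL pipeline, `g_val`, `b_val`, and the budget would come from per-task policy-gradient and constraint-cost estimates rather than from these toy quadratics, but the differentiation path from the meta-objective through the convex update to the meta-parameters is the same.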