Safe Reinforcement Learning with ADRC Lagrangian Method

ICLR 2026 Conference Submission 15371 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: AI Safety, Safe Reinforcement Learning, Trustworthy
TL;DR: We introduce ADRC Lagrangian methods, which reduce oscillations and improve robustness compared to existing safe reinforcement learning methods.
Abstract: Safe reinforcement learning (Safe RL) seeks to maximize rewards while satisfying safety constraints, a problem typically addressed with Lagrangian-based methods. However, existing approaches, including classical and PID Lagrangian methods, suffer from oscillations and frequent safety violations due to parameter sensitivity and inherent phase lag. To address these limitations, we propose ADRC-Lagrangian methods, which leverage Active Disturbance Rejection Control (ADRC) for enhanced robustness and reduced oscillations. Our unified framework encompasses classical and PID Lagrangian methods as special cases while significantly improving safety performance. Extensive experiments demonstrate that our approach reduces safety violations by up to 74%, constraint violation magnitudes by 89%, and average costs by 67%, establishing its effectiveness for Safe RL in complex environments.
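The abstract does not spell out the update rule, but a first-order linear ADRC loop for adapting the Lagrange multiplier can be sketched as below. All names and defaults here (omega_o, kp, b0, the bandwidth parameterization beta1 = 2*omega_o, beta2 = omega_o**2) are illustrative assumptions, not the authors' formulation.

```python
class ADRCLagrangeMultiplier:
    """Minimal sketch of a linear-ADRC update for a Lagrange multiplier.

    Assumed model: the constraint violation e_t = J_c - d behaves like a
    first-order plant driven by the multiplier rate u, e_dot = f + b0*u.
    An extended state observer (ESO) estimates the violation (z1) and a
    lumped "total disturbance" (z2); the control law cancels z2 and
    regulates z1 to zero. Gains use the common bandwidth parameterization.
    """

    def __init__(self, cost_limit, omega_o=0.5, kp=0.1, b0=-1.0, dt=1.0):
        self.d = cost_limit
        self.beta1 = 2.0 * omega_o   # ESO gain on the estimation error
        self.beta2 = omega_o ** 2    # ESO gain on the disturbance state
        self.kp = kp                 # proportional gain of the control law
        self.b0 = b0                 # plant input gain; negative because
                                     # raising lambda suppresses the cost
        self.dt = dt                 # one step per policy update
        self.z1 = 0.0                # estimated constraint violation
        self.z2 = 0.0                # estimated total disturbance
        self.u = 0.0                 # previous control (multiplier rate)
        self.lam = 0.0               # Lagrange multiplier, projected >= 0

    def update(self, episode_cost):
        e = episode_cost - self.d    # measured constraint violation
        err = e - self.z1            # ESO estimation error
        # Extended state observer: track violation and lumped disturbance.
        self.z1 += self.dt * (self.z2 + self.b0 * self.u + self.beta1 * err)
        self.z2 += self.dt * (self.beta2 * err)
        # Control law: cancel the estimated disturbance, drive z1 -> 0.
        self.u = (self.kp * (0.0 - self.z1) - self.z2) / self.b0
        # Integrate the control into the multiplier and project onto >= 0.
        self.lam = max(0.0, self.lam + self.dt * self.u)
        return self.lam
```

In a training loop, `lam = multiplier.update(mean_episode_cost)` would be called once per policy update, and the policy objective would use reward minus lam times cost, as in standard Lagrangian Safe RL. Note that freezing the disturbance state (beta2 = 0) collapses the loop toward an integral-style update, loosely echoing the paper's claim that classical and PID Lagrangian methods arise as special cases of the framework.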
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15371