Non-Linear $H_\infty$ Robustness Guarantees for Neural Network Policies

Published: 17 Jun 2024 · Last Modified: 28 Jul 2024 · FoRLaC Poster · CC BY 4.0
Abstract: Robust control methods guarantee system stability under disturbances but often fall short in performance when applied to non-linear systems. Neural-network-based control methods trained with deep reinforcement learning (RL) have achieved state-of-the-art performance on challenging non-linear tasks but typically lack robustness guarantees. Prior work proposed a method to enforce robust control guarantees within neural network policies, improving average-case performance over robust control methods and worst-case stability over deep RL methods. However, that method assumes linear time-invariant dynamics; it therefore restricts the allowable actions, reduces the flexibility of neural network policies in handling non-linear dynamics, and may fail to stabilize non-linear systems. This paper generalizes the prior work by proposing a framework for enforcing \emph{non-linear} \(H_{\infty}\) robustness guarantees for neural network policies when the system dynamics can be approximated by a polynomial function. This generalization aims to improve both policy robustness and average-case performance in non-linear systems. Additionally, our framework allows tuning the \emph{\(\mathcal{L}_2\)-gain} parameter of the \(H_{\infty}\) controller to trade off robustness against average performance in neural network policies, an essential feature for real-world deployments. While experimental validation of our framework is ongoing, the theoretical foundations presented here aim to facilitate the application of robust control principles to a wider range of non-linear systems, potentially improving both the robustness and the average performance of neural network policies in safety-critical applications.
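For context, the non-linear \(H_{\infty}\) guarantee referenced above is conventionally stated as an \(\mathcal{L}_2\)-gain bound certified by a storage function. The display below is a standard illustrative sketch of that condition, not taken from the submission itself; the notation \(f\), \(g\), \(h\), \(V\), and \(\gamma\) is introduced here for illustration only. For a closed-loop system \(\dot{x} = f(x) + g(x)\,w\) with disturbance \(w\) and performance output \(z = h(x)\), the \(\mathcal{L}_2\)-gain from \(w\) to \(z\) is at most \(\gamma\) whenever a storage function \(V(x) \ge 0\) with \(V(0) = 0\) satisfies the Hamilton--Jacobi inequality
\[
  \nabla V(x)^{\top} f(x)
  + \frac{1}{2\gamma^{2}}\,\nabla V(x)^{\top} g(x)\,g(x)^{\top}\,\nabla V(x)
  + \frac{1}{2}\, h(x)^{\top} h(x) \;\le\; 0,
\]
which implies, for trajectories starting at \(x(0) = 0\),
\[
  \int_{0}^{T} \lVert z(t) \rVert^{2}\, dt
  \;\le\; \gamma^{2} \int_{0}^{T} \lVert w(t) \rVert^{2}\, dt
  \qquad \text{for all } T \ge 0.
\]
A smaller \(\gamma\) demands stronger disturbance attenuation (greater robustness) at the cost of conservatism, which is the trade-off exposed by the tunable \(\mathcal{L}_2\)-gain parameter described in the abstract.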
Format: Long format (up to 8 pages + refs, appendix)
Publication Status: No
Submission Number: 81