Non-Linear $H_\infty$ Robustness Guarantees for Neural Network Policies

Published: 17 Jun 2024, Last Modified: 17 Jun 2024 · FoRLaC Poster · CC BY 4.0
Abstract: Robust control methods ensure system stability under disturbances but often fall short in performance when applied to non-linear systems. Neural-network-based controllers trained with deep reinforcement learning (RL) have achieved state-of-the-art performance on many challenging non-linear tasks but typically lack robustness guarantees. Prior work proposed a method to enforce robust control guarantees within neural network policies, improving average-case performance over existing robust control methods and worst-case stability over deep RL methods. However, that method assumed linear time-invariant dynamics, which restricts the allowable actions and limits the flexibility of neural network policies on non-linear dynamics. This paper presents a novel approach to enforce non-linear \(H_{\infty}\) robustness guarantees for neural network policies, together with a tunable robustness parameter that allows trading off robustness against average performance, an essential feature for real-world deployment. Although experimental validation of our approach is still ongoing, we believe the theoretical foundations presented here advance the deployment of robust neural network policies in practical applications by offering a comprehensive solution for enhancing both performance and robustness in non-linear dynamic systems.
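As background intuition for how an \(H_{\infty}\)-style guarantee can be enforced at the action level with a tunable gain bound, the sketch below projects a policy's action onto a half-space implied by a dissipation inequality for assumed control-affine dynamics. The dynamics `dynamics_f`/`dynamics_g`, the storage matrix `P`, and the toy policy output are hypothetical placeholders for illustration only; this is not the construction proposed in the paper.

```python
# Minimal, self-contained sketch (NumPy only): enforce an H-infinity-style
# dissipation inequality with tunable gain bound gamma by projecting the
# policy's action onto the feasible half-space. All symbols below (f, g, P,
# the toy policy output) are illustrative assumptions, not the paper's method.
import numpy as np

def dynamics_f(x):
    """Drift term of assumed control-affine dynamics x' = f(x) + g(x) u + w."""
    return np.array([x[1], -np.sin(x[0])])          # toy pendulum-like drift

def dynamics_g(x):
    """Input matrix of the assumed control-affine dynamics."""
    return np.array([[0.0], [1.0]])

def robust_projection(x, u_nn, P, gamma):
    """Project u_nn onto {u : dissipation inequality holds for worst-case w}.

    With storage function V(x) = x^T P x and output z = x, requiring
        Vdot <= gamma^2 ||w||^2 - ||x||^2   for all disturbances w
    reduces (after maximizing over w) to a single half-space constraint
    a^T u <= b, so the projection has a closed form. Feasibility of the
    constraint must be certified separately in any real construction.
    """
    Px = P @ x
    a = 2.0 * dynamics_g(x).T @ Px                   # gradient of Vdot w.r.t. u
    b = -(2.0 * Px @ dynamics_f(x)
          + (1.0 / gamma**2) * Px @ Px               # worst-case disturbance term
          + x @ x)                                   # output penalty ||z||^2
    violation = a @ u_nn - b
    if violation <= 0.0:
        return u_nn                                  # already robust: keep RL action
    return u_nn - (violation / (a @ a)) * a          # minimal correction onto the set

# Toy usage: a stand-in policy output, corrected for two gamma values.
rng = np.random.default_rng(0)
x = np.array([0.8, -0.3])
u_nn = rng.normal(size=1)                            # placeholder for a policy network output
for gamma in (2.0, 10.0):
    print(gamma, robust_projection(x, u_nn, np.eye(2), gamma))
```

In this sketch, a smaller gamma tightens the assumed disturbance-attenuation bound and typically forces a larger correction to the RL action, which mirrors the robustness/average-performance trade-off that the tunable parameter in the abstract describes.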
Format: Long format (up to 8 pages + refs, appendix)
Publication Status: No
Submission Number: 81