Approximation with Random Shallow ReLU Networks with Applications to Model Reference Adaptive Control

Published: 01 Jan 2024 · Last Modified: 15 Apr 2025 · CDC 2024 · CC BY-SA 4.0
Abstract: Neural networks are regularly employed in adaptive control of nonlinear systems and related methods of reinforcement learning. A common architecture uses a neural network with a single hidden layer (i.e., a shallow network), in which the hidden-layer weights and biases are fixed in advance and only the output layer is trained. While classical results show that there exist neural networks of this type that can approximate arbitrary continuous functions, they are non-constructive, and the networks used in practice have no approximation guarantees. Thus, the approximation properties required for control with neural networks are assumed, rather than proved. In this paper, we aim to fill this gap by showing that for sufficiently smooth functions, ReLU networks with randomly generated weights and biases achieve $L_{\infty}$ error of $O\left(m^{-1/2}\right)$ with high probability, where $m$ is the number of neurons. We show how this result can be used to construct approximators of required accuracy in a model reference adaptive control application.
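As a rough illustration of the architecture the abstract describes, the sketch below fits a smooth one-dimensional function with a shallow ReLU network whose hidden weights and biases are drawn at random and frozen, training only the output layer by least squares. The weight and bias distributions, the target function, and all names here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 512  # number of hidden neurons

# Target: a smooth function on [-1, 1] (illustrative choice)
f = lambda x: np.sin(3 * x) * np.exp(-x**2)
x_train = rng.uniform(-1, 1, size=200)
y_train = f(x_train)

# Random, fixed hidden layer (assumed sampling distributions)
W = rng.normal(size=m)     # input weights
b = rng.uniform(-1, 1, m)  # biases

def features(x):
    # ReLU activations of the frozen hidden layer: phi_j(x) = max(0, W_j * x + b_j)
    return np.maximum(0.0, np.outer(x, W) + b)

# Train only the output layer via least squares
c, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

# Estimate the sup-norm error on a dense grid
x_test = np.linspace(-1, 1, 1000)
err = np.max(np.abs(features(x_test) @ c - f(x_test)))
print(f"L_inf error with m={m}: {err:.4f}")
```

Rerunning with larger `m` should show the error shrinking, consistent with the $O(m^{-1/2})$ rate claimed in the abstract.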