DEEP NEURAL NETWORKS WITH RELU-SINE-EXPONENTIAL ACTIVATIONS BREAK CURSE OF DIMENSIONALITY ON HÖLDER CLASS

06 Apr 2021 (modified: 12 May 2023) · OpenReview Archive Direct Upload · Readers: Everyone
Abstract: In this paper, we construct neural networks with ReLU, sine and $2^x$ as activation functions. For a general continuous function $f$ defined on $[0,1]^d$ with continuity modulus $\omega_f(\cdot)$, we construct ReLU-sine-$2^x$ networks that enjoy an approximation rate $\mathcal{O}\big(\omega_f(\sqrt{d})\cdot 2^{-M}+\omega_f\big(\tfrac{\sqrt{d}}{N}\big)\big)$, where $M,N\in\mathbb{N}^{+}$ denote the hyperparameters related to the widths of the networks. As a consequence, we can construct a ReLU-sine-$2^x$ network with depth $5$ and width $\max\big\{\big\lceil 2d^{3/2}\big(\tfrac{3\mu}{\epsilon}\big)^{1/\alpha}\big\rceil,\, 2\big\lceil\log_2\tfrac{3\mu d^{\alpha/2}}{2\epsilon}\big\rceil+2\big\}$ that approximates any $f\in\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ within a given tolerance $\epsilon>0$, measured in the $L^p$ norm with $p\in[1,\infty)$, where $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$ denotes the class of Hölder continuous functions on $[0,1]^d$ with order $\alpha\in(0,1]$ and constant $\mu>0$. Therefore, ReLU-sine-$2^x$ networks overcome the curse of dimensionality on $\mathcal{H}_{\mu}^{\alpha}([0,1]^d)$. In addition to their super expressive power, functions implemented by ReLU-sine-$2^x$ networks are (generalized) differentiable, enabling us to train them with SGD.
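To make the quantities in the abstract concrete, here is a minimal, hypothetical PyTorch sketch: it is not the paper's construction. `width_bound` evaluates the width expression stated above; `ReLUSineExp` is an assumed layer that splits hidden units evenly among the three activations (the abstract does not specify the exact wiring); the final loop trains a depth-5 network with plain SGD on a toy Hölder target $f(x)=\|x\|_2^{1/2}$ (order $\alpha=1/2$), illustrating the claim that the (generalized) differentiability of ReLU, $\sin$ and $2^x$ makes SGD applicable.

```python
import math
import torch
import torch.nn as nn

def width_bound(d, mu, alpha, eps):
    # Width from the abstract's bound:
    # max{ ceil(2 d^{3/2} (3 mu/eps)^{1/alpha}), 2 ceil(log2(3 mu d^{alpha/2} / (2 eps))) + 2 }
    w1 = math.ceil(2.0 * d ** 1.5 * (3.0 * mu / eps) ** (1.0 / alpha))
    w2 = 2 * math.ceil(math.log2(3.0 * mu * d ** (alpha / 2.0) / (2.0 * eps))) + 2
    return max(w1, w2)

class ReLUSineExp(nn.Module):
    """Assumed layout: split each hidden layer's units evenly among
    ReLU, sine and 2^x activations (for illustration only)."""
    def forward(self, x):
        a, b, c = x.chunk(3, dim=-1)
        return torch.cat([torch.relu(a), torch.sin(b), torch.exp2(c)], dim=-1)

d, width, depth = 2, 48, 5  # toy sizes, much smaller than the bound above
layers = [nn.Linear(d, width), ReLUSineExp()]
for _ in range(depth - 2):
    layers += [nn.Linear(width, width), ReLUSineExp()]
layers.append(nn.Linear(width, 1))
net = nn.Sequential(*layers)

# Toy Hölder-continuous target f(x) = ||x||^(1/2), i.e. alpha = 1/2.
x = torch.rand(1024, d)
y = x.norm(dim=-1, keepdim=True).sqrt()

# All three activations admit (generalized) gradients, so plain SGD applies.
opt = torch.optim.SGD(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.4f}")
print(f"width bound for d=2, mu=1, alpha=0.5, eps=0.1: {width_bound(2, 1.0, 0.5, 0.1)}")
```

Note how the two terms of the width bound play out: for fixed $\alpha$ and $\mu$, the first term grows only polynomially in $d$ and $1/\epsilon$, which is the sense in which the construction avoids an exponential dependence on the dimension.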