LipARELU: ARELU Networks aided by Lipschitz Acceleration

IJCNN 2021 (modified: 20 May 2022)
Abstract: We present LipARELU, a novel framework for training L-hidden-layer neural networks that is equipped to handle large, adaptive learning rates. The framework is based on smoothness assumptions on the proposed activation function, ARELU, whose generalization and approximation abilities are discussed in detail. The framework assumes weaker conditions on the loss functions used to train the network. Using the fact that the inverse of the Lipschitz constant of the loss function is an ideal learning rate, we compute Lipschitz Adaptive Learning Rates (LALR) for tractable functions such as the Quadratic Loss (QL) and for robust functions such as the Mean Absolute Error (MAE). Theoretical and experimental validation testifies to the strength of our approach on several datasets in regression and classification tasks, where its performance is comparable to current state-of-the-art (SOTA) methods.
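The abstract does not spell out the derivation, but the LALR rule is easy to illustrate in the simplest setting: for a linear model with quadratic loss L(w) = (1/2m)||Xw - y||², the gradient is Lipschitz with constant K = λ_max(XᵀX)/m, so the prescription sets the learning rate to 1/K. The following is a minimal NumPy sketch under that assumption; the function name and variables are illustrative, not taken from the paper.

```python
import numpy as np

def lipschitz_adaptive_lr(X: np.ndarray) -> float:
    """Return 1/K for the quadratic loss on a linear model.

    Assumption (not from the paper): L(w) = (1/2m) * ||Xw - y||^2,
    whose gradient is Lipschitz with K = lambda_max(X^T X) / m.
    """
    m = X.shape[0]
    # Spectral norm of X squared equals the largest eigenvalue of X^T X.
    K = np.linalg.norm(X, ord=2) ** 2 / m
    return 1.0 / K

# Usage: one gradient-descent step with the adaptive rate.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = rng.standard_normal(100)
w = np.zeros(5)

lr = lipschitz_adaptive_lr(X)
grad = X.T @ (X @ w - y) / len(y)  # gradient of the quadratic loss
w -= lr * grad
```

This only sketches the learning-rate rule for the quadratic-loss case the abstract mentions; extending it to MAE or to the full LipARELU network requires the Lipschitz bounds derived in the paper itself.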