ReLEx: Regularisation for Linear Extrapolation in Neural Networks with Rectified Linear Units

Published: 01 Jan 2020, Last Modified: 20 May 2025 · SGAI Conf. 2020 · CC BY-SA 4.0
Abstract: Despite the great success of neural networks in recent years, they do not provide useful extrapolation. In regression tasks, the popular Rectified Linear Units do enable unbounded linear extrapolation by neural networks, but their extrapolation behaviour varies widely and is largely independent of the training data. Our goal is instead to continue the local linear trend at the margin of the training data. Here we introduce ReLEx, a regularising method composed of a set of loss terms designed to achieve this goal and to reduce the variance of the extrapolation. We present a ReLEx implementation for single-input, single-output, single-hidden-layer feed-forward networks. Our results demonstrate that ReLEx has little cost in terms of standard learning, i.e. interpolation, but enables controlled univariate linear extrapolation with ReLU neural networks.
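As a rough illustration of the setting described in the abstract (not the authors' ReLEx loss terms, which are specified in the paper itself), the sketch below trains a single-input, single-output, single-hidden-layer ReLU network and adds a hypothetical penalty that ties the network's slope just outside the training range to the local linear trend at each margin of the data. The penalty form and the parameters margin_k, offset, and lambda_reg are assumptions chosen for illustration only.

# Minimal sketch of the idea: a one-hidden-layer ReLU regressor whose loss is
# augmented with an (assumed, illustrative) extrapolation regulariser that
# matches the network slope beyond the data range to the marginal linear trend.
import torch
import torch.nn as nn

class OneHiddenReLU(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

def margin_slopes(x, y, k=5):
    # Least-squares slope of the k left-most and k right-most training points.
    def fit(xs, ys):
        xm, ym = xs.mean(), ys.mean()
        return ((xs - xm) * (ys - ym)).sum() / ((xs - xm) ** 2).sum()
    order = torch.argsort(x.squeeze())
    xs, ys = x.squeeze()[order], y.squeeze()[order]
    return fit(xs[:k], ys[:k]), fit(xs[-k:], ys[-k:])

def extrapolation_penalty(model, x, y, k=5, offset=1.0):
    # Hypothetical regulariser: penalise the gap between the network's slope
    # just outside the training range and the local trend at each margin.
    left_trend, right_trend = margin_slopes(x, y, k)
    penalty = 0.0
    for xp, trend in ((x.min() - offset, left_trend), (x.max() + offset, right_trend)):
        xp = xp.view(1, 1).requires_grad_(True)
        yp = model(xp).sum()
        grad, = torch.autograd.grad(yp, xp, create_graph=True)
        penalty = penalty + (grad.squeeze() - trend) ** 2
    return penalty

# Usage sketch on synthetic data
x = torch.linspace(-2, 2, 100).view(-1, 1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)
model = OneHiddenReLU()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lambda_reg = 0.1  # assumed regularisation weight
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y) + lambda_reg * extrapolation_penalty(model, x, y)
    loss.backward()
    opt.step()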