Keywords: neural networks, training data selection, interpolation, mesh optimization
TL;DR: We investigate how to define an optimal set of training datapoints and how this can speed up training in deep neural networks for regression.
Abstract: While large datasets facilitate learning a robust representation of the data manifold, obtaining similar performance from small datasets is clearly computationally advantageous. This work considers deep neural networks for regression and aims to better understand how to select datapoints so as to minimize training time; a particular focus is on gaining insight into the structure and number of datapoints needed to learn a robust function representation, and on how the training time varies for deep and wide architectures.
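The sketch below is only an illustration of the general question raised in the abstract: how much training time a regression network saves when fit on a subset of the data, and at what cost in test error. The synthetic dataset, the uniform random subsampling rule, and the subset size are placeholder assumptions, not the selection strategy or experimental setup of the paper.

```python
# Illustrative sketch only: compares training time and test error for a
# regression MLP fit on the full training set vs. a random subset.
# The subset-selection rule (uniform random sampling) is a placeholder,
# not the selection strategy studied in the paper.
import time
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic 1-D regression task (assumed data, for illustration only).
X = rng.uniform(-3.0, 3.0, size=(5000, 1))
y = np.sin(2.0 * X[:, 0]) + 0.1 * rng.normal(size=5000)
X_test = rng.uniform(-3.0, 3.0, size=(1000, 1))
y_test = np.sin(2.0 * X_test[:, 0])

def fit_and_time(X_train, y_train):
    """Train a small MLP regressor; return (training seconds, test MSE)."""
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
    start = time.perf_counter()
    model.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    return elapsed, mean_squared_error(y_test, model.predict(X_test))

# Baseline: train on the full dataset.
t_full, mse_full = fit_and_time(X, y)

# Train on a random 10% subset (placeholder for a principled selection rule).
idx = rng.choice(len(X), size=len(X) // 10, replace=False)
t_sub, mse_sub = fit_and_time(X[idx], y[idx])

print(f"full set  : {t_full:.2f}s, test MSE {mse_full:.4f}")
print(f"10% subset: {t_sub:.2f}s, test MSE {mse_sub:.4f}")
```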