On minimizing the training set fill distance in machine learning regression

Published: 11 Aug 2024, Last Modified: 11 Aug 2024. Accepted by DMLR.
Abstract: For regression tasks, one often leverages large datasets to train predictive machine learning models. However, using large datasets may not be feasible due to computational limitations or high data labelling costs. Therefore, suitably selecting small training sets from large pools of unlabelled data points is essential to maximize model performance while maintaining efficiency. In this work, we study Farthest Point Sampling (FPS), a data selection approach that aims to minimize the fill distance of the selected set. We derive an upper bound for the maximum expected prediction error, conditional on the locations of the unlabelled data points, that depends linearly on the training set fill distance. For empirical validation, we perform experiments using two regression models on three datasets. We empirically show that selecting a training set so as to minimize the fill distance, thereby minimizing our derived bound, significantly reduces the maximum prediction error of various regression models, outperforming alternative sampling approaches by a large margin. Furthermore, we show that selecting training sets with FPS can also increase model stability in the specific case of Gaussian kernel regression approaches.
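The greedy FPS selection rule described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation (which is available in the linked repository): starting from an arbitrary point, each step adds the unlabelled point farthest from the current selected set, which greedily reduces the fill distance (the largest distance from any pool point to its nearest selected point).

```python
import numpy as np

def farthest_point_sampling(X, k, seed=0):
    """Greedily select k row indices of X, approximately minimizing the
    fill distance of the selected set (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    selected = [int(rng.integers(n))]  # arbitrary starting point
    # distance from every pool point to the current selected set
    dists = np.linalg.norm(X - X[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))    # farthest point from the selected set
        selected.append(nxt)
        # update each point's distance to its nearest selected point
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(selected)

def fill_distance(X, idx):
    """Max over pool points of the distance to the nearest selected point."""
    d = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=-1)
    return d.min(axis=1).max()
```

Each of the `k` iterations costs one pass over the pool, so the overall cost is O(nk) distance evaluations, which is what makes FPS practical for selecting small training sets from large pools.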
Certifications: Reproducibility Certification
Keywords: Fill distance, Farthest Point Sampling, Regression.
Code: https://github.com/Fraunhofer-SCAI/Fill_Distance_Regression
Assigned Action Editor: ~Yue_Zhao13
Submission Number: 20