PEAKS: Selecting Key Training Examples Incrementally via Prediction Error Anchored by Kernel Similarity
TL;DR: We pose the Incremental Data Selection problem, where examples arrive as a continuous stream, and we propose PEAKS, a principled and efficient method tailored for this problem.
Abstract: As deep learning continues to be driven by ever-larger datasets, understanding which examples are most important for generalization has become a critical question. While progress in data selection continues, emerging applications require studying this problem in dynamic contexts. To bridge this gap, we pose the Incremental Data Selection (IDS) problem, where examples arrive as a continuous stream and must be selected without access to the full data source. In this setting, the learner must incrementally build a training dataset of a predefined size while simultaneously learning the underlying task. We find that in IDS, the impact of a new sample on the model state depends fundamentally on both its geometric relationship in the feature space and its prediction error. Leveraging this insight, we propose PEAKS (Prediction Error Anchored by Kernel Similarity), an efficient data selection method tailored for IDS. Our comprehensive evaluations demonstrate that PEAKS consistently outperforms existing selection strategies. Furthermore, on real-world datasets, PEAKS yields increasingly larger performance gains over random selection as the training data size grows. The code is available at https://github.com/BurakGurbuz97/PEAKS.
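To make the core idea concrete, below is a minimal, hypothetical sketch of a PEAKS-style score that combines prediction error with kernel similarity in feature space. The names (`peaks_style_score`, `class_anchors`) and the choice of a cosine kernel against per-class anchor features are illustrative assumptions, not the paper's exact formulation; see the linked repository for the actual implementation.

```python
import torch
import torch.nn.functional as F

def peaks_style_score(features, logits, labels, class_anchors):
    """Score streaming candidates by prediction-error magnitude weighted by
    kernel (here: cosine) similarity to their class anchor in feature space.

    features:      [N, D] penultimate-layer features of candidate examples
    logits:        [N, C] model outputs for the candidates
    labels:        [N]    integer class labels
    class_anchors: [C, D] per-class anchor features (e.g., running class means)
    """
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(labels, num_classes=logits.size(1)).float()
    pred_error = (one_hot - probs).norm(dim=1)            # how much the model struggles
    anchors = class_anchors[labels]                        # anchor for each example's class
    similarity = F.cosine_similarity(features, anchors, dim=1).clamp(min=0.0)
    return pred_error * similarity                         # higher score = more valuable

# Usage sketch: for each incoming batch, keep the highest-scoring examples
# until the predefined training-set budget is filled.
```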
Lay Summary: Modern AI systems require training on massive datasets, consuming enormous computational resources. This trend is unsustainable as datasets continue to grow. We first pose the incremental data selection problem to study how to identify key training examples in a setting that reflects real-world constraints. We then mathematically study the impact of new training data on model performance, finding that an example's value depends on both how much the model struggles with it and how similar it is to other examples in the training dataset. Based on these insights, we propose PEAKS, a principled data selection method that combines prediction error with similarity measures to identify the most valuable training examples. Our experiments show that PEAKS achieves performance similar to training on full datasets while using up to four times less data, making AI training more efficient and sustainable.
Link To Code: https://github.com/BurakGurbuz97/PEAKS
Primary Area: Deep Learning->Algorithms
Keywords: data pruning, coresets, data selection, deep learning
Submission Number: 12259