Data pruning and neural scaling laws: fundamental limitations of score-based algorithms

Published: 09 Nov 2023 · Last Modified: 09 Nov 2023 · Accepted by TMLR
Abstract: Data pruning algorithms are commonly used to reduce the memory and computational cost of the optimization process. Recent empirical results (Guo, B. Zhao, and Bai, 2022) reveal that random data pruning remains a strong baseline and outperforms most existing data pruning methods in the high compression regime, i.e., where a fraction of 30% or less of the data is kept. This regime has recently attracted considerable interest because of the role of data pruning in improving the so-called neural scaling laws; see (Sorscher et al., 2022), where the authors showed that high-quality data pruning algorithms are needed to beat the sample power law. In this work, we focus on score-based data pruning algorithms and show theoretically and empirically why such algorithms fail in the high compression regime. We demonstrate "No Free Lunch" theorems for data pruning and discuss potential solutions to these limitations.
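To make the setup concrete, the following is a minimal, hypothetical sketch (not the authors' code) contrasting random pruning with a generic score-based rule that keeps only the top-scored fraction of the data; the function names, the placeholder scores, and the 30% keep fraction used in the example are illustrative assumptions.

```python
import numpy as np

def random_prune(n_samples, keep_fraction, seed=None):
    """Keep a uniformly random subset of the data (the strong baseline)."""
    rng = np.random.default_rng(seed)
    n_keep = int(np.ceil(keep_fraction * n_samples))
    return rng.choice(n_samples, size=n_keep, replace=False)

def score_based_prune(scores, keep_fraction):
    """Keep the samples with the highest pruning scores.

    `scores` is any per-example importance score (e.g. a margin or loss-based
    score); this sketch is agnostic to how the scores are computed.
    """
    n_keep = int(np.ceil(keep_fraction * len(scores)))
    return np.argsort(scores)[-n_keep:]

# High compression regime: keep 30% or less of the data.
scores = np.random.rand(10_000)                     # placeholder scores
kept_random = random_prune(10_000, keep_fraction=0.3)
kept_scored = score_based_prune(scores, keep_fraction=0.3)
```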
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
1. Clarified the proof of Theorem 1
2. Clarified the case $r=1$ for random pruning
3. Merged the notation paragraphs
4. Added definitions of $r, n, m, w$ in Figure 2
5. Added a discussion of pruning time vs. training time
6. Added more detailed proofs of Corollaries 3 and 4
7. Added a discussion of the proportional limit
8. Added a discussion of the limitations of asymptotic results
Assigned Action Editor: ~Daniel_M_Roy1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1252