Keywords: Sparse adversarial perturbations, coreset selection, sample ranking, adversarial sensitivity scoring, FGSM
Abstract: Efficient training of deep neural networks under data constraints relies on selecting informative subsets, or coresets, that preserve model performance. Traditional methods in libraries such as DeepCore employ heuristics like uncertainty sampling or gradient diversity, but often neglect adversarial vulnerabilities, yielding suboptimal robustness to distribution shifts, corruptions, or manipulations in unreliable data scenarios. To address this, we introduce a unified Adversarial Sensitivity Scoring framework comprising three novel ranking techniques: Inverse Sensitivity and Entropy Fusion (ISEF), Fast Gradient Sign Method with Composite Scoring (FGSM-CS), and Perturbation Sensitivity Scoring (PSS), which harness sparse adversarial perturbations to prioritize samples near decision boundaries. By applying single-step Sparse FGSM attacks, our methods expose sample sensitivities with minimal computational overhead. Evaluated on CIFAR-10 with ResNet-18, our approaches consistently outperform the adversarial baseline DeepFool by up to 15.1\% in extremely sparse data regimes (e.g., at 1\% for PSS$_{\text{bottom}}$) and by 15.1\% in low data regimes (e.g., at 10\% for FGSM-CS$_{\text{bottom}}$), while matching top DeepCore methods such as Random and Forgetting in moderate data regimes. Notably, bottom variants excel in sparse settings by retaining perturbation-resilient samples, while top variants overtake them beyond 20--30\% selection fractions. Realized via efficient single-step gradients, these gains position our framework as a scalable, deployable bridge between coreset selection and adversarial robustness, advancing data-efficient learning.
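To make the scoring idea concrete, the sketch below shows one plausible reading of the abstract's pipeline in PyTorch: a single-step sparse FGSM perturbation (here, sparsified by keeping only the largest-magnitude gradient coordinates; the paper's exact sparsification, composite scores, and function names such as `sensitivity_scores` and `select_coreset` are assumptions, not the authors' implementation) used to score per-sample sensitivity, followed by bottom/top ranking for coreset selection.

```python
# Hypothetical sketch of adversarial-sensitivity coreset ranking.
# Assumptions: PyTorch model with inputs in [0, 1]; eps and k_frac values
# are illustrative; the paper's actual scoring may combine further terms
# (e.g., entropy in ISEF, composite scores in FGSM-CS).
import torch
import torch.nn.functional as F


def sparse_sign(grad, k_frac=0.01):
    """Sign of the gradient, kept only at the top-|grad| fraction of
    coordinates per sample (one plausible 'sparse FGSM' variant)."""
    flat = grad.flatten(1)                                  # (B, D)
    k = max(1, int(k_frac * flat.shape[1]))
    thresh = flat.abs().topk(k, dim=1).values[:, -1:]       # per-sample cutoff
    mask = (flat.abs() >= thresh).float()
    return (flat.sign() * mask).view_as(grad)


def sensitivity_scores(model, images, labels, eps=8 / 255, k_frac=0.01):
    """Score each sample by the loss increase under a single-step
    sparse FGSM perturbation; larger increase suggests the sample
    lies closer to the decision boundary."""
    model.eval()
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels, reduction="sum")
    grad, = torch.autograd.grad(loss, images)
    adv = (images + eps * sparse_sign(grad, k_frac)).clamp(0, 1)
    with torch.no_grad():
        clean = F.cross_entropy(model(images), labels, reduction="none")
        pert = F.cross_entropy(model(adv), labels, reduction="none")
    return (pert - clean).detach()


def select_coreset(scores, fraction=0.1, variant="bottom"):
    """Keep the lowest-scoring ('bottom', perturbation-resilient) or
    highest-scoring ('top', boundary-adjacent) fraction of samples."""
    k = max(1, int(fraction * scores.numel()))
    order = scores.argsort(descending=(variant == "top"))
    return order[:k]
```

Under this reading, the "bottom" variants reported in the abstract would correspond to `variant="bottom"` at small `fraction` values (e.g., 0.01), with `variant="top"` taking over at larger selection fractions.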
Submission Number: 73