Learning to Sample in Stochastic Optimization

Published: 07 May 2025, Last Modified: 13 Jun 2025 · UAI 2025 Poster · CC BY 4.0
Keywords: Randomized algorithms, generalization, PAC-Bayes
TL;DR: We learn adaptive sampling schemes for SGD-type methods by minimising novel PAC-Bayes bounds, yielding enhanced robustness.
Abstract: We consider a PAC-Bayes analysis of stochastic optimization algorithms, and devise a new SGD-type algorithm inspired by our bounds. Our algorithm learns a data-dependent sampling scheme along with the model parameters, which may be seen as assigning a probability to each training point. We demonstrate that learning the sampling scheme increases robustness against misleading training points, as our algorithm learns to avoid bad examples during training. We conduct experiments on both standard and adversarial learning problems over several benchmark datasets, and demonstrate applications including interpretability upon visual inspection and robustness to the ill effects of bad training points. We also extend our analysis to pairwise SGD to demonstrate the generality of our methodology.
Supplementary Material: zip
Latex Source Code: zip
Code Link: https://github.com/git0405/UAI-Learning-to-Sample-in-Stochastic-Optimization
Signed PMLR Licence Agreement: pdf
Submission Number: 144