Robust Stochastic Optimization via Gradient Quantile Clipping

Published: 09 Oct 2024, Last Modified: 09 Oct 2024. Accepted by TMLR. License: CC BY 4.0
Abstract: We introduce a clipping strategy for Stochastic Gradient Descent (SGD) that uses quantiles of the gradient norm as clipping thresholds. We prove that this new strategy provides a robust and efficient optimization algorithm for smooth objectives (convex or non-convex), which tolerates heavy-tailed samples (including infinite variance) and a fraction of outliers in the data stream akin to Huber contamination. Our mathematical analysis leverages the connection between constant step size SGD and Markov chains and handles the bias introduced by clipping in an original way. For strongly convex objectives, we prove that the iterates converge to a concentrated limit distribution and derive high probability bounds on the final estimation error. In the non-convex case, we prove that the limit distribution is localized on a neighborhood with low gradient. We propose an implementation of this algorithm using rolling quantiles, which leads to a highly efficient optimization procedure with strong robustness properties, as confirmed by our numerical experiments.
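The abstract describes SGD with clipping thresholds set by rolling quantiles of past gradient norms. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the window size, quantile level, and step size are assumed hyperparameters, and `grad_fn` is a hypothetical stochastic-gradient oracle.

```python
import numpy as np
from collections import deque

def quantile_clipped_sgd(grad_fn, w0, steps=1000, lr=0.1,
                         q=0.9, window=100, seed=0):
    """Constant step size SGD where the clipping threshold is a
    rolling empirical quantile of recent stochastic gradient norms.

    grad_fn(w, rng) should return a (possibly heavy-tailed) stochastic
    gradient at w. This is a sketch under assumed hyperparameters, not
    the paper's exact procedure.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    norms = deque(maxlen=window)  # rolling buffer of gradient norms
    for _ in range(steps):
        g = grad_fn(w, rng)
        n = np.linalg.norm(g)
        norms.append(n)
        # Clipping threshold = q-quantile of the rolling norm buffer.
        tau = np.quantile(norms, q)
        if n > tau > 0:
            g = g * (tau / n)  # rescale the gradient so its norm is tau
        w = w - lr * g
    return w
```

For example, on a strongly convex quadratic with heavy-tailed (infinite-variance) Student-t noise, the clipped iterates stay concentrated near the minimizer even though raw gradients occasionally have very large norms.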
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera ready version.
Supplementary Material: pdf
Assigned Action Editor: ~Yunwen_Lei1
Submission Number: 2658