SHANG++: Robust Stochastic Acceleration under Multiplicative Noise

ICLR 2026 Conference Submission 15512 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Stochastic Optimization, Accelerated Methods, Multiplicative Noise Scaling, Deep Learning
TL;DR: An accelerated stochastic optimizer that achieves optimal convergence under multiplicative noise scaling with minimal tuning and strong robustness across convex and deep learning tasks.
Abstract: Training under multiplicative noise scaling (MNS) is often destabilized by momentum methods such as Nesterov's acceleration, because the accumulated gradient noise can overwhelm the gradient signal. A new method, SHANG++, is introduced to achieve fast convergence while remaining robust under MNS. With only one-shot hyper-parameter tuning, SHANG++ consistently reaches accuracy within 1% of the noise-free setting across convex problems and deep networks. In experiments, it outperforms existing accelerated methods in both robustness and efficiency while showing minimal sensitivity to its parameters.
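The abstract does not spell out the SHANG++ update rule, so the sketch below makes no attempt to reproduce it. Purely to illustrate the setting, it assumes a common multiplicative-noise model, g = ∇f(x)·(1 + σξ), and runs a plain Nesterov-momentum loop on a small quadratic; the noise form, step size, and momentum coefficient are all assumptions, not taken from the submission.

```python
import numpy as np

# Illustrative sketch only: a generic multiplicative-noise gradient model and a
# plain Nesterov-momentum loop on a quadratic. It does NOT implement SHANG++;
# the noise form g = grad * (1 + sigma * xi) is an assumed instance of
# "multiplicative noise scaling", used to show how momentum interacts with
# noise whose magnitude scales with the gradient itself.

rng = np.random.default_rng(0)

def noisy_grad(x, A, sigma):
    """Gradient of f(x) = 0.5 * x^T A x under the assumed multiplicative noise."""
    g = A @ x
    xi = rng.standard_normal(g.shape)
    return g * (1.0 + sigma * xi)  # noise scales with the gradient magnitude

def nesterov(x0, A, sigma, lr=0.05, beta=0.9, steps=200):
    """Standard Nesterov momentum; larger sigma tends to destabilize it."""
    x, v = x0.astype(float), np.zeros_like(x0, dtype=float)
    for _ in range(steps):
        lookahead = x + beta * v                      # look-ahead point
        v = beta * v - lr * noisy_grad(lookahead, A, sigma)
        x = x + v
    return x

A = np.diag([1.0, 10.0])                              # mildly ill-conditioned quadratic
x0 = np.array([5.0, 5.0])
for sigma in (0.0, 0.5, 1.0):                         # increasing relative noise level
    x_final = nesterov(x0, A, sigma)
    print(f"sigma={sigma:.1f}  final distance from optimum: "
          f"{np.linalg.norm(x_final):.3e}")
```

In this toy setting the noise-free run converges toward the optimum, while larger sigma values tend to leave a noticeably larger final error, which is the kind of momentum-plus-multiplicative-noise failure mode the abstract refers to.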
Supplementary Material: zip
Primary Area: optimization
Submission Number: 15512