Stochastic Re-weighted Gradient Descent via Distributionally Robust Optimization

Published: 01 Oct 2024, Last Modified: 01 Oct 2024. Accepted by TMLR. License: CC BY 4.0
Abstract: We present Re-weighted Gradient Descent (RGD), a novel optimization technique that improves the performance of deep neural networks through dynamic sample re-weighting. Leveraging insights from distributionally robust optimization (DRO) with Kullback-Leibler divergence, RGD assigns importance weights to training samples at each optimization step. It is simple to implement, computationally efficient, and compatible with widely used optimizers such as SGD and Adam. We demonstrate the effectiveness of RGD on a range of learning tasks, including supervised learning, meta-learning, and out-of-domain generalization. Notably, RGD achieves state-of-the-art results on diverse benchmarks, with improvements of +0.7% on DomainBed, +1.44% on tabular classification, +1.94% on GLUE with BERT, and +1.01% on ImageNet-1K with ViT.
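To make the re-weighting mechanism concrete, below is a minimal PyTorch-style sketch of KL-DRO-inspired sample re-weighting, where higher-loss examples receive exponentially larger weights. The temperature `tau`, the softmax normalization, and the helper name `reweighted_loss` are illustrative assumptions for this sketch, not the paper's exact formulation or hyper-parameters.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(per_sample_loss: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """KL-DRO-style re-weighting: weight w_i is proportional to exp(l_i / tau).
    Weights are computed without gradient so that backprop flows only through
    the weighted sum of per-sample losses."""
    with torch.no_grad():
        weights = F.softmax(per_sample_loss / tau, dim=0)  # normalized exp(l_i / tau)
    return (weights * per_sample_loss).sum()

# Usage inside an otherwise standard training step (SGD/Adam left unchanged):
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))

optimizer.zero_grad()
logits = model(x)
per_sample_loss = F.cross_entropy(logits, y, reduction="none")  # one loss per example
loss = reweighted_loss(per_sample_loss, tau=1.0)
loss.backward()
optimizer.step()
```

The only change relative to a standard training loop is replacing the mean reduction of the batch loss with the weighted sum above, which is what makes the approach drop-in compatible with existing optimizers.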
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have updated the camera-ready version to add authors and acknowledgments, incorporate the requested changes regarding confidence intervals, and address the remaining requested changes.
Assigned Action Editor: ~Mathurin_Massias1
Submission Number: 2892