Stochastic Re-weighted Gradient Descent via Distributionally Robust Optimization

TMLR Paper2892 Authors

19 Jun 2024 (modified: 24 Jun 2024) · Under review for TMLR · License: CC BY-SA 4.0
Abstract: We present Re-weighted Gradient Descent (RGD), a novel optimization technique that improves the performance of deep neural networks through dynamic sample re-weighting. Leveraging insights from distributionally robust optimization (DRO) with Kullback-Leibler divergence, our method dynamically assigns importance weights to training data during each optimization step. RGD is simple to implement, computationally efficient, and compatible with widely used optimizers such as SGD and Adam. We demonstrate the effectiveness of RGD on various learning tasks, including supervised learning, meta-learning, and out-of-domain generalization. Notably, RGD achieves state-of-the-art results on diverse benchmarks, with improvements of +0.7% on DomainBed, +1.44% on tabular classification, +1.94% on GLUE with BERT, and +1.01% on ImageNet-1K with ViT.
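To make the idea concrete, below is a minimal, hypothetical sketch of per-sample re-weighting in the spirit the abstract describes: the dual of a KL-constrained DRO objective yields sample weights proportional to exp(loss / temperature), which replace the usual uniform mean over the minibatch. The function name and the `temperature` hyperparameter are illustrative assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, targets, temperature=1.0):
    # Per-sample cross-entropy losses (no reduction yet).
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # Exponential-tilting weights suggested by the KL-DRO dual; detached so
    # gradients do not flow through the weights themselves.
    weights = torch.softmax(per_sample.detach() / temperature, dim=0)
    # Weighted sum replaces the usual uniform minibatch mean.
    return (weights * per_sample).sum()

# Usage with a standard optimizer such as SGD or Adam:
#   loss = reweighted_loss(model(x), y)
#   loss.backward(); optimizer.step()
```

Because the re-weighting only changes how per-sample losses are aggregated, such a scheme drops into an existing training loop without modifying the optimizer, which is consistent with the abstract's claim of compatibility with SGD and Adam.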
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: NA
Assigned Action Editor: ~Mathurin_Massias1
Submission Number: 2892