Empirical Study on Optimizer Selection for Out-of-Distribution Generalization

Published: 16 Jun 2023, Last Modified: 16 Jun 2023, Accepted by TMLR
Abstract: Modern deep learning systems do not generalize well when the test data distribution is slightly different from the training data distribution. Although much promising work has been done to address this fragility, a systematic study of the role of optimizers in out-of-distribution generalization has not been undertaken. In this study, we examine the performance of popular first-order optimizers under different classes of distributional shift, using both empirical risk minimization and invariant risk minimization. We address this question for image and text classification using DomainBed, WILDS, and Backgrounds Challenge as testbeds covering different types of shift---namely correlation and diversity shift. We search over a wide range of hyperparameters and examine classification accuracy (in-distribution and out-of-distribution) for over 20,000 models. We arrive at the following findings, which we expect to be helpful for practitioners: i) adaptive optimizers (e.g., Adam) yield worse out-of-distribution performance than non-adaptive optimizers (e.g., SGD, momentum SGD). In particular, even though there is no significant difference in in-distribution performance, we show a measurable difference in out-of-distribution performance. ii) The relationship between in-distribution and out-of-distribution performance exhibits three types of behavior depending on the dataset---linear returns, increasing returns, and diminishing returns. For example, when training on natural language data with Adam, further improving in-distribution performance does not significantly improve out-of-distribution generalization.
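The comparison described in the abstract can be illustrated with a minimal sketch (this is not the authors' implementation, which is available at the linked repository): the same classifier is trained under plain ERM once with Adam and once with momentum SGD, and in-distribution (ID) and out-of-distribution (OOD) accuracy are compared. The data loaders and model here are hypothetical placeholders standing in for DomainBed/WILDS-style splits and backbones.

```python
# Minimal sketch of the optimizer comparison: train the same model with an adaptive
# optimizer (Adam) and a non-adaptive one (SGD with momentum), then compare ID and
# OOD accuracy. Synthetic data stands in for a real distribution shift.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_id_ood_loaders(n=512, d=32):
    """Placeholder: synthetic ID/OOD splits emulating a distribution shift."""
    x_id, y_id = torch.randn(n, d), torch.randint(0, 2, (n,))
    x_ood = torch.randn(n, d) + 1.5          # shifted inputs emulate an OOD split
    y_ood = torch.randint(0, 2, (n,))
    return (DataLoader(TensorDataset(x_id, y_id), batch_size=64, shuffle=True),
            DataLoader(TensorDataset(x_id, y_id), batch_size=64),
            DataLoader(TensorDataset(x_ood, y_ood), batch_size=64))

def accuracy(model, loader):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def train_and_eval(optimizer_name, epochs=5, lr=1e-3):
    torch.manual_seed(0)
    train_loader, id_loader, ood_loader = make_id_ood_loaders()
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
    if optimizer_name == "adam":
        opt = torch.optim.Adam(model.parameters(), lr=lr)
    else:  # non-adaptive baseline: SGD with momentum
        opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                  # plain ERM training loop
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return accuracy(model, id_loader), accuracy(model, ood_loader)

for name in ["adam", "sgd_momentum"]:
    id_acc, ood_acc = train_and_eval(name)
    print(f"{name}: ID accuracy={id_acc:.3f}, OOD accuracy={ood_acc:.3f}")
```

In the paper's actual experiments this comparison is repeated across a wide hyperparameter sweep and across DomainBed, WILDS, and Backgrounds Challenge datasets; the sketch above only shows the shape of a single run.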
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version - Removing red color for edits - Deanonymizing author list - Adding acknowledgments section - Adding URL to the source code of the experiments
Code: https://github.com/Hiroki11x/Optimizer_Comparison_OOD
Supplementary Material: pdf
Assigned Action Editor: ~Robert_M._Gower1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 924