Distributionally Robust Classification on a Data Budget

Published: 08 Aug 2023, Last Modified: 08 Aug 2023. Accepted by TMLR.
Abstract: Real-world uses of deep learning require predictable model behavior under distribution shifts. Models such as CLIP show emergent natural distributional robustness comparable to humans, but may require hundreds of millions of training samples. Can we train robust learners in a domain where data is limited? To rigorously address this question, we introduce JANuS (Joint Annotations and Names Set), a collection of four new training datasets with images, labels, and corresponding captions, and perform a series of carefully controlled investigations of factors contributing to robustness in image classification; we then compare those results to findings from a large-scale meta-analysis. Using this approach, we show that a standard ResNet-50 trained with the cross-entropy loss on 2.4 million image samples can attain robustness comparable to a CLIP ResNet-50 trained on 400 million samples. To our knowledge, this is the first result showing (near) state-of-the-art distributional robustness on limited data budgets.
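The abstract's headline comparison rests on standard supervised training with the cross-entropy loss. As a minimal, self-contained illustration of that loss (a sketch for context, not the paper's training code, which uses the repository linked below), softmax cross-entropy over a vector of class logits can be computed as:

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target_index):
    # Negative log-probability assigned to the true class.
    probs = softmax(logits)
    return -math.log(probs[target_index])

# Toy example: 3-class logits with class 0 as the ground-truth label.
loss = cross_entropy([2.0, 0.5, -1.0], 0)
```

In practice the paper's setting would average this quantity over minibatches of labeled images and minimize it by stochastic gradient descent; the toy function above only shows the per-sample loss.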
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version; added acknowledgements and made minor style tweaks.
Code: https://www.github.com/penfever/vlhub/
Supplementary Material: zip
Assigned Action Editor: ~Yarin_Gal1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1104