Exploring Dataset-Scale Indicators of Data Quality

NeurIPS 2023 Workshop ATTRIB Submission 19 Authors

Published: 27 Oct 2023, Last Modified: 08 Dec 2023 (ATTRIB Poster)
Keywords: distributional robustness, computer vision, machine learning, deep learning, datasets, dataset design, data-driven machine learning
TL;DR: We conduct detailed ablation studies on the effects of two important dataset-scale indicators of quality: label set design and class balance.
Abstract: Modern computer vision foundation models are trained on massive amounts of data, incurring large economic and environmental costs. Recent research has suggested that improving data quality can significantly reduce the need for data quantity. But what constitutes data quality in computer vision? We posit that the quality of a given dataset can be decomposed into distinct sample-level and dataset-level constituents, and that the former have been more extensively studied than the latter. We ablate the effects of two important dataset-level constituents: label set design and class balance. By monitoring these constituents using key indicators we provide, researchers and practitioners can better anticipate model performance, measured in terms of its accuracy and robustness to distribution shifts.
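The abstract does not spell out the indicators themselves, so the sketch below is purely illustrative rather than the paper's method: one common way to summarize class balance at the dataset level is the normalized Shannon entropy of the label distribution, which is 1.0 for a perfectly balanced dataset and approaches 0.0 as the data collapse onto a single class. The function name `class_balance_index` and the example labels are hypothetical.

```python
from collections import Counter
import math

def class_balance_index(labels):
    """Normalized Shannon entropy of the label distribution.

    Returns 1.0 for a perfectly balanced dataset and approaches 0.0
    as the dataset collapses onto a single class. This is a generic
    balance measure, not necessarily the indicator used in the paper.
    """
    counts = Counter(labels)
    n = sum(counts.values())
    k = len(counts)
    if k <= 1:
        return 0.0
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return entropy / math.log(k)

# Example: a mildly imbalanced three-class dataset.
labels = ["cat"] * 500 + ["dog"] * 300 + ["truck"] * 200
print(f"class balance index: {class_balance_index(labels):.3f}")  # ~0.937
```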
Submission Number: 19