Abstract: Foundation models are trained on large-scale web-crawled datasets, which often contain noise, biases, and irrelevant information. This motivates the use of data selection techniques, which can be divided into model-free variants, relying on heuristic rules and downstream datasets, and model-based variants, e.g., those using influence functions. The former can be expensive to design and risk introducing unwanted dependencies, while the latter are often computationally prohibitive. Instead, we propose an efficient, model-based approach using the Mimic Score, a new data quality metric that leverages the weights of a reference model to assess the usefulness of individual samples for training a new model. It measures the alignment between a sample's gradient and a target direction in weight space induced by the reference model. Building on Mimic Scores, we develop Grad-Mimic, a framework that prioritizes samples for learning, creates effective filters, and automates data selection. Empirically, using Mimic Scores to guide training improves data efficiency and yields consistent performance gains across six image datasets, including enhancements to CLIP models. Moreover, Mimic Score-based filters improve upon existing filtering methods, e.g., removing 4.7 million samples to train better CLIP models, while offering an accurate estimate of training dataset quality.
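As a rough illustration of the idea (a minimal sketch, not the paper's implementation), the Mimic Score of a sample can be read as the alignment between that sample's gradient and the weight-space direction pointing from the current model toward the reference model. All names below (`mimic_score`, `ref_params`, and the use of cosine similarity on the negative gradient) are hypothetical choices for this sketch:

```python
import torch
import torch.nn.functional as F

def mimic_score(model, ref_params, loss_fn, sample, target):
    """Hypothetical sketch: score one sample by how well its (negative)
    gradient aligns with the direction from the current weights toward
    a reference model's weights; higher scores suggest more useful samples."""
    model.zero_grad()
    loss = loss_fn(model(sample), target)
    loss.backward()

    # Flatten the sample's gradient and the weight-space target direction.
    grad = torch.cat([p.grad.flatten() for p in model.parameters()])
    direction = torch.cat(
        [(r - p).detach().flatten()
         for p, r in zip(model.parameters(), ref_params)]
    )

    # A gradient-descent step moves along -grad, so we measure how well
    # that step points toward the reference model (an assumed formulation).
    return F.cosine_similarity(-grad, direction, dim=0)
```

Samples whose scores fall below a chosen threshold could then be down-weighted or filtered out, which is the role the abstract describes for Mimic Score-based filters.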