SemDeDup: Data-efficient learning at web-scale through semantic deduplication

Published: 04 Mar 2023, Last Modified: 16 May 2023. ME-FoMo 2023 Poster.
Keywords: data deduplication, large-scale datasets, LAION, data pruning
TL;DR: Semantic deduplication leverages pre-trained embeddings to enable faster training with less data at web scale.
Abstract: Progress in machine learning has been driven in large part by massive increases in data. However, large web-scale datasets such as LAION are largely uncurated beyond searches for exact duplicates, potentially leaving much redundancy. Here, we introduce SemDeDup, a method which leverages embeddings from pre-trained models to identify and remove "semantic duplicates": data pairs which are semantically similar but not exactly identical. Removing semantic duplicates preserves performance and speeds up learning. Analyzing a subset of LAION, we show that SemDeDup can remove 50% of the data with minimal performance loss, effectively halving training time. Moreover, performance improves out of distribution. Analyzing language models trained on C4, a partially curated dataset, we further show that SemDeDup improves over prior approaches. SemDeDup illustrates how simple ways of leveraging quality embeddings can make models learn faster with less data.
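To make the mechanism in the abstract concrete, below is a minimal, hypothetical sketch of embedding-based semantic deduplication, not the authors' implementation. It assumes embeddings from a pre-trained model are already computed as a 2D array, clusters them with k-means so the duplicate search stays local, and drops all but one member of each pair whose cosine similarity exceeds a threshold. The function name semdedup, the cluster count, and the threshold eps are illustrative choices, not values taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def semdedup(embeddings: np.ndarray, n_clusters: int = 8, eps: float = 0.05) -> np.ndarray:
    """Return indices of examples kept after removing semantic duplicates.

    Within each k-means cluster, pairs with cosine similarity above
    1 - eps are treated as semantic duplicates; the earlier example of
    each pair is kept and the later one is dropped.
    """
    # L2-normalize so that dot products equal cosine similarity.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

    # Cluster first so the pairwise search is per-cluster, not O(n^2) overall.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)

    keep = np.ones(len(X), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sims = X[idx] @ X[idx].T          # pairwise cosine similarities
        upper = np.triu(sims, k=1)        # count each pair once (i < j)
        _, dup_j = np.where(upper > 1 - eps)
        keep[idx[dup_j]] = False          # drop the later member of each pair
    return np.where(keep)[0]

# Usage: random embeddings with 20 planted near-duplicates of item 0.
rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 64))
E[500:520] = E[0] + 0.001 * rng.normal(size=(20, 64))
kept = semdedup(E)
print(f"kept {len(kept)} of {len(E)} examples")
```

At true web scale, the dense per-cluster similarity matrix above would presumably be replaced by an approximate nearest-neighbor index (e.g., faiss), but the keep/drop logic would remain the same.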