Data Distillation: A Survey

Published: 22 Jul 2023, Last Modified: 22 Jul 2023. Accepted by TMLR.
Abstract: The popularity of deep learning has led to the curation of a vast number of massive and multifarious datasets. Despite such models achieving close-to-human performance on individual tasks, training these parameter-hungry models on large datasets poses multi-faceted problems, such as (a) high model-training time; (b) slow research iteration; and (c) poor eco-sustainability. As an alternative, data distillation approaches aim to synthesize terse data summaries, which can serve as effective drop-in replacements for the original dataset in scenarios such as model training, inference, and architecture search. In this survey, we present a formal framework for data distillation, along with a detailed taxonomy of existing approaches. Additionally, we cover data distillation approaches for different data modalities, namely images, graphs, and user-item interactions (recommender systems), while also identifying current challenges and future research directions.
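To make the "terse data summary as a drop-in replacement" idea concrete, below is a minimal sketch of one common formulation covered by such surveys: gradient-matching data distillation, where a small set of synthetic examples is optimized so that gradients computed on it mimic gradients computed on the real data. This is an illustrative toy example, not the paper's reference implementation; all function names (grad_vector, distill) and hyperparameters are hypothetical choices, and the network is an arbitrary small MLP.

    # Hedged sketch of gradient-matching data distillation (toy example).
    # Assumes real_x is a float tensor of shape [N, d] and real_y holds
    # integer class labels; names and hyperparameters are illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def grad_vector(model, x, y):
        """Flattened gradient of the classification loss w.r.t. model parameters."""
        loss = F.cross_entropy(model(x), y)
        grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
        return torch.cat([g.reshape(-1) for g in grads])

    def distill(real_x, real_y, n_per_class=10, steps=200, lr=0.1):
        """Learn a tiny synthetic set whose gradients mimic those of the real data."""
        n_classes = int(real_y.max()) + 1
        d = real_x.shape[1]
        # Synthetic inputs are free parameters; labels are fixed and class-balanced.
        syn_x = torch.randn(n_classes * n_per_class, d, requires_grad=True)
        syn_y = torch.arange(n_classes).repeat_interleave(n_per_class)
        opt = torch.optim.SGD([syn_x], lr=lr)
        for _ in range(steps):
            # Sample a fresh random initialization each step, as in gradient-matching methods.
            model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, n_classes))
            g_real = grad_vector(model, real_x, real_y).detach()
            g_syn = grad_vector(model, syn_x, syn_y)
            # Push the synthetic gradient direction toward the real one.
            loss = 1 - F.cosine_similarity(g_real, g_syn, dim=0)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return syn_x.detach(), syn_y

Usage, under the same assumptions: syn_x, syn_y = distill(real_x, real_y) yields a small synthetic set on which any downstream model can be trained in place of the full dataset; the survey's formal framework and taxonomy cover this and several other optimization objectives.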
Certifications: Survey Certification
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: (1) added author names; (2) added links to images; (3) added acknowledgments.
Assigned Action Editor: ~bo_han2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 767