Taming Diffusion for Dataset Distillation with High Representativeness

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: A new paradigm for diffusion-based dataset distillation with high representativeness.
Abstract: Recent deep learning models demand increasingly large datasets, driving the need for dataset distillation to create compact, cost-efficient datasets while maintaining performance. Owing to the powerful image generation capability of diffusion models, they have been introduced into this field to generate distilled images. In this paper, we systematically investigate issues present in current diffusion-based dataset distillation methods, including inaccurate distribution matching, distribution deviation with random noise, and separate sampling. Building on this analysis, we propose D$^3$HR, a novel diffusion-based framework that generates distilled datasets with high representativeness. Specifically, we adopt DDIM inversion to map the latents of the full dataset from a low-normality latent domain to a high-normality Gaussian domain, preserving information and ensuring structural consistency in order to generate representative latents for the distilled dataset. Furthermore, we propose an efficient sampling scheme to better align the representative latents with the high-normality Gaussian distribution. Our comprehensive experiments demonstrate that D$^3$HR achieves higher accuracy across different model architectures than state-of-the-art dataset distillation baselines. Source code: https://github.com/lin-zhao-resoLve/D3HR.
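To make the pipeline described in the abstract concrete, the minimal sketch below illustrates the two ingredients it names: deterministic DDIM inversion, which maps image latents back to an approximately Gaussian noise domain, and a sampling step that draws representative latents in that domain. The function names (`eps_model`, `ddim_invert`, `sample_representative_latents`) and the per-class Gaussian-fit sampling rule are illustrative assumptions for this sketch, not the paper's exact scheme; the linked repository contains the actual implementation.

```python
import torch


def eps_model(x_t: torch.Tensor, t: int, y: int) -> torch.Tensor:
    # Placeholder for a pretrained class-conditional noise-prediction network
    # (e.g. the U-Net of a latent diffusion model). It returns random noise here
    # only so the sketch runs end to end.
    return torch.randn_like(x_t)


@torch.no_grad()
def ddim_invert(x0, alphas_cumprod, timesteps, y):
    """Deterministic DDIM inversion: run the DDIM update in reverse
    (from clean latent toward noise) to map x0 into the Gaussian domain."""
    x = x0
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):  # increasing noise level
        a_cur, a_next = alphas_cumprod[t_cur], alphas_cumprod[t_next]
        eps = eps_model(x, t_cur, y)
        # Predict the clean latent, then re-noise it to the next (noisier) step.
        x0_pred = (x - (1 - a_cur).sqrt() * eps) / a_cur.sqrt()
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
    return x  # latent lying in an (approximately) Gaussian domain


@torch.no_grad()
def sample_representative_latents(inverted_latents, ipc):
    """Illustrative stand-in for the paper's sampling scheme: fit a diagonal
    Gaussian to one class's inverted latents and draw `ipc` representative samples."""
    mu = inverted_latents.mean(dim=0)
    sigma = inverted_latents.std(dim=0)
    return mu + sigma * torch.randn(ipc, *mu.shape)


# Toy usage: invert 64 latents of one class, then draw 10 representative latents,
# which would subsequently be decoded back into distilled images via forward DDIM.
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
timesteps = list(range(0, 1000, 50))
latents = torch.randn(64, 4, 8, 8)
inverted = torch.stack([ddim_invert(z, alphas_cumprod, timesteps, y=0) for z in latents])
reps = sample_representative_latents(inverted, ipc=10)
```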
Lay Summary: How can training efficiency, in terms of both time and memory, be improved through data reduction? Many researchers have explored this question by generating a small subset to replace the full training dataset. The key challenge lies in ensuring that this generated subset accurately approximates the distribution of the full dataset. Our paper addresses this challenge by leveraging the generative capabilities of large models, inspired by several prior works. We conduct a systematic analysis of existing large-model-based methods and find that the key to improving performance lies in finding a simpler distribution for approximation. Motivated by this insight, we propose an efficient method for constructing a simpler distribution that better approximates the original distribution of the full dataset. Our method achieves the best performance across datasets of four different scales. To facilitate future research, we open-source the generated small datasets and code, aiming to help the community improve training efficiency and develop more effective dataset compression methods under this new paradigm.
Link To Code: https://github.com/lin-zhao-resoLve/D3HR
Primary Area: Deep Learning
Keywords: dataset distillation
Submission Number: 378