Abstract: Dataset distillation aims to compress a dataset into a much smaller one so that a model trained on the distilled dataset achieves high accuracy. Current methods frame this as maximizing the distilled classification accuracy for a budget of K distilled images-per-class, where K is a positive integer. In this paper, we push the boundaries of dataset distillation, compressing the dataset to less than an image-per-class. It is important to realize that the meaningful quantity is not the number of distilled images-per-class but the number of distilled pixels-per-dataset. We therefore propose Poster Dataset Distillation (PoDD), a new approach that distills the entire original dataset into a single poster. The poster approach motivates new technical solutions for creating training images and learnable labels. Our method achieves comparable or better performance with less than an image-per-class than existing methods that use one image-per-class. Specifically, it establishes a new state-of-the-art on CIFAR-10, CIFAR-100, and CUB200 on the well-established 1 IPC benchmark, while using as little as 0.3 images-per-class.
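The abstract only sketches the mechanism, so below is a minimal, hypothetical PyTorch sketch of the pixels-per-dataset accounting and the poster-to-crops idea. The square poster shape, crop stride, and soft-label parameterization are illustrative assumptions, not the paper's actual design; see the linked code for the real implementation.

```python
# Illustrative sketch (not the authors' implementation): one learnable "poster"
# holds all distilled pixels, and training images are overlapping crops from it.
import torch

num_classes = 100         # e.g. CIFAR-100
image_size = 32           # side length of a single training image
ipc = 0.3                 # images-per-class budget, measured in pixels

# Pixel accounting: 0.3 IPC means the poster holds as many pixels as
# 0.3 * num_classes full-size images.
total_pixels = int(ipc * num_classes * image_size * image_size)  # 30,720 pixels
poster_side = int(total_pixels ** 0.5)                           # ~175 for a square poster (assumption)

# The entire distilled dataset is a single learnable poster.
poster = torch.randn(3, poster_side, poster_side, requires_grad=True)

# Training images are overlapping crops taken from the poster.
stride = 16  # overlap between neighboring crops (illustrative choice)
patches = (
    poster.unsqueeze(0)
    .unfold(2, image_size, stride)
    .unfold(3, image_size, stride)
)  # shape: (1, 3, n_h, n_w, image_size, image_size)
crops = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, image_size, image_size)

# Each crop gets a learnable soft label over all classes.
soft_labels = torch.zeros(crops.shape[0], num_classes, requires_grad=True)

print(crops.shape, soft_labels.shape)  # e.g. (81, 3, 32, 32) and (81, 100)
# Both `poster` and `soft_labels` would then be optimized with a standard
# dataset-distillation objective; the specific objective is not shown here.
```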
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: **Current revision:**
- Camera-ready version and GitHub link added.

**Previous revisions:**
- Added requested comments to the main paper.
- Added changes requested by the reviewers during the rebuttal (highlighted in yellow) and added the project code as the supplementary material.
- Initial revision.
Code: https://github.com/AsafShul/PoDD
Assigned Action Editor: ~Jose_Dolz1
Submission Number: 3802