Finding Stable Subnetworks at Initialization with Dataset Distillation

Published: 05 Mar 2025, Last Modified: 05 Mar 2025
Venue: ICLR 2025 Workshop on Weight Space Learning (Poster)
License: CC BY 4.0
Track: long paper (up to 8 pages)
Keywords: Neural Network Pruning, Linear Mode Connectivity, Dataset Distillation
TL;DR: Using distilled data to prune neural networks leads to stable sparse models.
Submission Number: 8
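The method details are not included in this listing, so the following is only a minimal sketch of the idea named in the TL;DR: score a network's weights at initialization using a small distilled dataset and prune the lowest-saliency connections. The SNIP-style saliency criterion (|gradient × weight|), the `snip_scores` and `prune_at_init` helpers, and the 90% default sparsity are all illustrative assumptions, not necessarily the authors' actual pruning criterion or hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def snip_scores(model: nn.Module, distilled_x: torch.Tensor,
                distilled_y: torch.Tensor) -> dict:
    """Score weights by |grad * weight| on one pass over distilled data.

    This SNIP-style sensitivity is an assumed stand-in for whatever
    criterion the paper actually uses.
    """
    model.zero_grad()
    loss = F.cross_entropy(model(distilled_x), distilled_y)
    loss.backward()
    return {
        name: (p.grad * p).abs()
        for name, p in model.named_parameters()
        if p.dim() > 1 and p.grad is not None  # weight matrices only, skip biases
    }

def prune_at_init(model: nn.Module, distilled_x: torch.Tensor,
                  distilled_y: torch.Tensor, sparsity: float = 0.9) -> dict:
    """Zero out the globally lowest-scoring weights at initialization."""
    scores = snip_scores(model, distilled_x, distilled_y)
    flat = torch.cat([s.flatten() for s in scores.values()])
    keep = max(1, int((1.0 - sparsity) * flat.numel()))
    threshold = torch.topk(flat, keep, largest=True).values.min()
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in scores:
                mask = (scores[name] >= threshold).float()
                p.mul_(mask)          # apply the sparsity mask in place
                masks[name] = mask    # keep masks to re-apply during training
    return masks
```

Because the distilled set is tiny (often a handful of synthetic images per class), the saliency pass above is cheap; the returned masks would need to be re-applied after each optimizer step to keep the subnetwork fixed during subsequent training.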