Finding Stable Subnetworks at Initialization with Dataset Distillation
Track: long paper (up to 8 pages)
Keywords: Neural Network Pruning, Linear Mode Connectivity, Dataset Distillation
TL;DR: Using distilled data to prune neural networks leads to stable sparse models.
Submission Number: 8