Selective Brain Damage: Measuring the Disparate Impact of Model Pruning

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission
TL;DR: Pruning deep neural networks has a non-uniform impact; certain classes and exemplars are systematically more impacted by the introduction of sparsity.
Abstract: Neural network pruning techniques have demonstrated that it is possible to remove the majority of weights in a network with surprisingly little degradation to top-1 test set accuracy. However, this aggregate measure of performance conceals significant differences in how individual classes and images are impacted by pruning. We find that certain classes, and certain individual data points which we term pruning identified exemplars (PIEs), are systematically more impacted by the introduction of sparsity. Removing PIE images from the test set greatly improves top-1 accuracy for both sparse and non-sparse models. These hard-to-generalize-to images tend to be of lower image quality, mislabelled, entail abstract representations, require fine-grained classification, or depict atypical class examples.
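
The paper identifies PIEs by comparing populations of independently trained pruned and non-pruned models: an image is flagged when the modal (most frequent) prediction across the sparse population disagrees with the modal prediction across the dense population. Below is a minimal NumPy sketch of that disagreement test; the function name `find_pies` and the array layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def find_pies(dense_preds: np.ndarray, sparse_preds: np.ndarray) -> np.ndarray:
    """Return a boolean mask flagging pruning identified exemplars (PIEs).

    dense_preds:  integer labels of shape (n_dense_models, n_examples),
                  predictions from independently trained non-sparse models.
    sparse_preds: integer labels of shape (n_sparse_models, n_examples),
                  predictions from independently trained pruned models.
    An example is a PIE when the modal (most frequent) prediction of the
    sparse population disagrees with that of the dense population.
    """
    def modal(preds: np.ndarray) -> np.ndarray:
        # Most common predicted label per example, across the model population.
        return np.array([np.bincount(col).argmax() for col in preds.T])

    return modal(dense_preds) != modal(sparse_preds)


# Illustrative usage with random stand-in predictions (10 models, 1000 images).
rng = np.random.default_rng(0)
dense = rng.integers(0, 5, size=(10, 1000))
sparse = rng.integers(0, 5, size=(10, 1000))
pie_mask = find_pies(dense, sparse)
print(f"{pie_mask.sum()} of {pie_mask.size} examples flagged as PIEs")
```

Excluding the examples where `pie_mask` is True and re-evaluating mirrors the experiment described in the abstract, where removing PIEs from the test set improves top-1 accuracy for both sparse and non-sparse models.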
Keywords: machine learning
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1911.05248/code)