Getting away with more network pruning: From sparsity to geometry and linear regions

22 Sept 2022 (modified: 13 Feb 2023)
ICLR 2023 Conference Withdrawn Submission
Readers: Everyone
Abstract: One surprising trait of neural networks is the extent to which their connections can be pruned with little to no effect on accuracy. But once we cross a critical level of parameter sparsity, pruning any further leads to a sudden drop in accuracy. What could explain such a drop? In this work, we explore how sparsity may affect the geometry of the linear regions defined by a neural network and, consequently, reduce its expected maximum number of linear regions. We observe that sparsity affects accuracy in pruned neural networks in much the same way that it affects the number of linear regions, and even more closely our proposed upper bound on that number. Conversely, we find that selecting the sparsity of each layer to maximize this bound very often improves accuracy compared to using the same sparsity across all layers, thereby providing guidance on where to prune.
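The sketch below is a minimal, illustrative toy (not the authors' method or their proposed bound): it empirically approximates the number of linear regions a small random ReLU network induces along a one-dimensional slice of input space, before and after unstructured magnitude pruning, to make the abstract's link between sparsity and region counts concrete. The layer sizes, the pruning routine, the input slice, and all function names (`init_mlp`, `prune_by_magnitude`, `count_regions_on_line`) are assumptions introduced here for illustration only.

```python
# Illustrative sketch: count linear regions of a ReLU MLP along a 1-D input slice
# at several sparsity levels obtained by magnitude pruning. All choices below
# (architecture, pruning scheme, slice endpoints) are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random dense ReLU MLP, stored as a list of (W, b) layer pairs."""
    return [(rng.standard_normal((m, n)) / np.sqrt(n), rng.standard_normal(m))
            for n, m in zip(sizes[:-1], sizes[1:])]

def prune_by_magnitude(layers, sparsity):
    """Zero out the globally smallest-magnitude weights (unstructured pruning)."""
    all_w = np.concatenate([np.abs(W).ravel() for W, _ in layers])
    thresh = np.quantile(all_w, sparsity)
    return [(np.where(np.abs(W) >= thresh, W, 0.0), b) for W, b in layers]

def activation_pattern(layers, x):
    """Binary on/off pattern of every hidden ReLU for a single input x."""
    pattern, h = [], x
    for W, b in layers[:-1]:  # final layer is linear, so it adds no ReLU pattern
        pre = W @ h + b
        pattern.append(pre > 0)
        h = np.maximum(pre, 0.0)
    return np.concatenate(pattern)

def count_regions_on_line(layers, x0, x1, n_samples=2000):
    """Approximate the number of linear regions crossed along the segment
    x0 -> x1 by counting activation-pattern changes between consecutive
    sample points (assumes the sampling is dense enough)."""
    ts = np.linspace(0.0, 1.0, n_samples)
    pats = [activation_pattern(layers, (1 - t) * x0 + t * x1) for t in ts]
    changes = sum(not np.array_equal(a, b) for a, b in zip(pats[:-1], pats[1:]))
    return changes + 1

if __name__ == "__main__":
    layers = init_mlp([2, 32, 32, 1])
    x0, x1 = np.array([-3.0, -3.0]), np.array([3.0, 3.0])
    print("dense network, regions on slice:", count_regions_on_line(layers, x0, x1))
    for s in (0.5, 0.8, 0.95):
        pruned = prune_by_magnitude(layers, s)
        print(f"sparsity {s:.0%}, regions on slice:", count_regions_on_line(pruned, x0, x1))
```

Running this typically shows the region count along the slice shrinking as sparsity grows, which is the qualitative effect the abstract attributes to pruning; it says nothing about the paper's specific upper bound or its layer-wise sparsity selection.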
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
TL;DR: If we prune with the maximum number of linear regions in mind, we can improve accuracy considerably
Supplementary Material: zip
