Max-Affine Spline Insights Into Deep Network Pruning

Published: 01 Aug 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: State-of-the-art (SOTA) approaches to deep network (DN) training overparametrize the model and then prune a posteriori to obtain a ``winning ticket'' subnetwork that can achieve high accuracy. Using a recently developed spline interpretation of DNs, we obtain novel insights into how pruning affects a DN's mapping. In particular, through the lens of spline operators, we are able to pinpoint the impact of pruning on the DN's underlying input-space partition and per-region affine mappings, opening new avenues for understanding why and when pruned DNs are able to maintain high performance. We also discover that a DN's spline mapping exhibits an early-bird (EB) phenomenon whereby the spline's partition converges at early training stages, bridging the recently developed DN spline theory and the lottery ticket hypothesis. We finally leverage this new insight to develop a principled and efficient pruning strategy whose goal is to prune isolated groups of nodes that make a redundant contribution to the formation of the spline partition. Extensive experiments on four networks and three datasets validate that our new spline-based DN pruning approach reduces training FLOPs by up to 3.5x while achieving similar or even better accuracy than current state-of-the-art methods. Code is available at https://github.com/RICE-EIC/Spline-EB.
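To make the spline view concrete, the sketch below (a minimal NumPy illustration, not the paper's code; the toy one-hidden-layer network and its random weights are assumptions for demonstration) shows how a ReLU layer's binary activation pattern identifies the input-space partition region containing a point, and how that pattern determines the affine map the network applies everywhere inside that region:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
W1, b1 = rng.standard_normal((8, 2)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), rng.standard_normal(1)

def activation_pattern(x):
    """Binary code identifying the spline-partition region containing x."""
    return (W1 @ x + b1 > 0).astype(int)

def region_affine_map(x):
    """Per-region affine parameters (A, c) such that f(x) = A @ x + c
    for every input inside the region that contains x."""
    q = activation_pattern(x)       # which ReLU units are active at x
    A = W2 @ (q[:, None] * W1)      # slope of the affine map on this region
    c = W2 @ (q * b1) + b2          # offset of the affine map on this region
    return A, c

x = rng.standard_normal(2)
A, c = region_affine_map(x)
f_x = W2 @ np.maximum(W1 @ x + b1, 0) + b2
assert np.allclose(A @ x + c, f_x)  # the region's affine map reproduces f(x)
```

Under this view, pruning a node deletes the partition boundary that node contributes, so nodes whose boundaries are redundant for the partition can be removed with little effect on the mapping; tracking when activation patterns stop changing across epochs is one way to observe the early-bird convergence of the partition.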
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: **Summary of changes:**
- Major changes:
  - Added an entire section (3.1) dedicated to the theoretical study of deep networks under the spline perspective. Accordingly, Section 3.2 (the original 3.1) has also been reworked, and some of its content has been merged into the new 3.1.
- Minor changes:
  - Downplayed in the abstract/introduction "the lack of existing theoretical studies on DN pruning".
  - Added references to alternative studies of pruning from different perspectives in the introduction and put them in perspective with our work.
  - Downplayed the visualization-tool aspect in the abstract/intro and the "universality" aspect of EB and our findings.
  - Provided the standard deviations.
  - Clarified the experiments on "efficiency".
  - Clarified why PCA can maintain the information.
  - Clarified MASO and its connection to NN pruning in Sec 2.
  - Clarified the captions (and the relevant texts) of the figures.
Assigned Action Editor: ~Zhe_Gan1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 132