Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks

Published: 12 Aug 2023, Last Modified: 12 Aug 2023. Accepted by TMLR.
Abstract: The importance of learning rate (LR) schedules for network pruning has been observed in several recent works. For example, Frankle and Carbin (2019) highlighted that winning tickets (i.e., accuracy-preserving subnetworks) cannot be found without applying an LR warmup schedule. Renda, Frankle and Carbin (2020) also demonstrated that rewinding the LR to its initial state at the end of each pruning cycle can improve pruning performance. In this paper, we go one step further by first providing a theoretical justification for the surprising effect of LR schedules. Next, we propose an LR schedule for network pruning called SILO, which stands for S-shaped Improved Learning rate Optimization. The advantages of SILO over existing LR schedules are two-fold: (i) SILO has a strong theoretical motivation and dynamically adjusts the LR during pruning to improve generalization. Specifically, SILO increases the LR upper bound (max_lr) in an S-shape, leading to improvements of 2%-4% in extensive experiments with various types of networks (e.g., Vision Transformers, ResNet) on popular datasets such as ImageNet and CIFAR-10/100. (ii) In addition to its strong theoretical motivation, SILO is empirically optimal in the sense of matching an Oracle that exhaustively searches for the optimal value of max_lr via grid search. We find that SILO precisely adjusts the value of max_lr to lie within the Oracle-optimized interval, resulting in performance competitive with the Oracle at significantly lower complexity.
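To make the scheduling idea concrete, below is a minimal sketch of an S-shaped max_lr schedule across pruning cycles. The exact parameterization used by SILO is not given in the abstract; the logistic curve, bounds, and steepness below are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch: S-shaped growth of the LR upper bound (max_lr) over pruning cycles.
# NOTE: the logistic form, lr_low/lr_high bounds, and steepness are assumed for
# illustration; they are not taken from the paper.
import math


def s_shaped_max_lr(cycle: int, total_cycles: int,
                    lr_low: float = 0.01, lr_high: float = 0.1,
                    steepness: float = 10.0) -> float:
    """Return max_lr for the given pruning cycle.

    max_lr grows from roughly lr_low to lr_high following a logistic
    (S-shaped) curve as iterative pruning progresses.
    """
    # Normalize pruning progress to [0, 1].
    progress = cycle / max(total_cycles - 1, 1)
    # Logistic function centered at the midpoint of the pruning run.
    gate = 1.0 / (1.0 + math.exp(-steepness * (progress - 0.5)))
    return lr_low + (lr_high - lr_low) * gate


if __name__ == "__main__":
    # Example: max_lr for a 10-cycle iterative pruning run.
    for c in range(10):
        print(f"cycle {c}: max_lr = {s_shaped_max_lr(c, 10):.4f}")
```

In this sketch, the returned max_lr would serve as the upper bound of whatever per-cycle LR schedule (e.g., warmup plus decay) is used during retraining after each pruning step.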
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: N/A
Code: https://github.com/Martin1937/SILO
Supplementary Material: pdf
Assigned Action Editor: ~Mingsheng_Long2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1147