Keywords: Sparse Networks, Pruning
Abstract: Scope of Reproducibility
For a fixed parameter count and compute budget, the proposed algorithm ($\textit{RigL}$) claims to directly train sparse networks that match or exceed the performance of existing dense-to-sparse training techniques (such as pruning). $\textit{RigL}$ does so while requiring a constant number of Floating Point Operations (FLOPs) throughout training. The technique obtains state-of-the-art performance on a variety of tasks, including image classification and character-level language modelling.
Methodology
We implement $\textit{RigL}$ from scratch in PyTorch, using boolean masks to simulate unstructured sparsity. We rely on the description provided in the original paper and refer to the authors' code only for specific implementation details, such as handling overflow in the Erdos-Renyi-Kernel (ERK) initialization. We evaluate sparse training with $\textit{RigL}$ for WideResNet-22-2 on CIFAR-10 and ResNet-50 on CIFAR-100, requiring 2 hours and 6 hours respectively per training run on a GTX 1080 GPU.
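To illustrate what simulating unstructured sparsity with boolean masks looks like in practice, the sketch below shows one way to zero out pruned weights and to perform a RigL-style drop/grow mask update. This is a minimal sketch, not the authors' implementation: the mask dictionary, function names, and the drop fraction value are illustrative assumptions.

```python
import torch

def apply_masks(model, masks):
    # Zero out pruned weights (typically called after each optimizer step)
    # so the simulated sparse network stays sparse. `masks` is assumed to map
    # parameter names to {0, 1}-valued tensors of the same shape.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

def rigl_mask_update(param, mask, grad, drop_fraction=0.3):
    # One RigL-style update for a single layer: drop the smallest-magnitude
    # active weights, then grow the same number of currently inactive
    # connections with the largest dense-gradient magnitude, keeping the
    # layer's sparsity constant. Grown weights start at zero because they
    # were previously masked out.
    n_active = int(mask.sum().item())
    n_update = int(drop_fraction * n_active)

    # Drop: among active weights, remove those with smallest |w|.
    active_scores = torch.where(mask.bool(), param.abs(),
                                torch.full_like(param, float('inf')))
    drop_idx = torch.topk(active_scores.flatten(), n_update, largest=False).indices

    # Grow: among inactive weights, activate those with largest |grad|.
    inactive_scores = torch.where(mask.bool(),
                                  torch.full_like(grad, -float('inf')), grad.abs())
    grow_idx = torch.topk(inactive_scores.flatten(), n_update, largest=True).indices

    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    return new_mask.view_as(mask)
```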
Results
We reproduce $\textit{RigL}$'s performance on CIFAR-10 within $0.1\%$ of the reported value. On both CIFAR-10 and CIFAR-100, the central claim holds: given a fixed training budget, $\textit{RigL}$ surpasses existing dynamic sparse training methods over a range of target sparsities. By training longer, its performance can match or exceed that of iterative pruning, while consuming constant FLOPs throughout training. We also show that there is little benefit in tuning $\textit{RigL}$'s hyperparameters for every (sparsity, initialization) pair: the reference choice of hyperparameters often performs close to optimally.
Going beyond the original paper, we find that the optimal initialization scheme depends on the training constraint: while the Erdos-Renyi-Kernel distribution outperforms the Random distribution for a fixed parameter count, the Random distribution performs better for a fixed FLOP count. Finally, redistributing layer-wise sparsity during training can bridge the performance gap between the two initialization schemes, but it increases the computational cost.
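For concreteness, the following sketch shows one way the Erdos-Renyi-Kernel layer-wise densities can be computed for a target overall density, including the overflow handling mentioned above. It is an illustrative reconstruction under stated assumptions, not the authors' code.

```python
import numpy as np

def erk_densities(layer_shapes, target_density):
    # layer_shapes: dict mapping layer name -> weight shape tuple.
    # A sparse layer gets density eps * sum(shape) / prod(shape), with eps
    # scaled so the network-wide density equals target_density. Layers whose
    # density would exceed 1 are made fully dense and eps is recomputed
    # (the "overflow" handling referred to in the text).
    n_params = {k: int(np.prod(s)) for k, s in layer_shapes.items()}
    total = sum(n_params.values())
    dense_layers = set()
    while True:
        raw = {k: np.sum(s) / np.prod(s)
               for k, s in layer_shapes.items() if k not in dense_layers}
        budget = target_density * total - sum(n_params[k] for k in dense_layers)
        eps = budget / sum(raw[k] * n_params[k] for k in raw)
        overflow = [k for k in raw if eps * raw[k] > 1.0]
        if not overflow:
            break
        dense_layers.update(overflow)
    return {k: 1.0 if k in dense_layers else eps * raw[k] for k in layer_shapes}
```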
What was easy
The authors provide code for most of the experiments presented in the paper. The code was easy to run and allowed us to verify the correctness of our re-implementation. The paper also provided a thorough and clear description of the proposed algorithm without any obvious errors or confusing exposition.
What was difficult
Tuning hyperparameters involved multiple random seeds and took longer than anticipated. Verifying the correctness of a few baselines was tricky: it required ensuring that the optimizer's gradient (or momentum) buffers were sparse (or dense) as specified by each method. Compute limits prevented us from evaluating on larger datasets such as ImageNet.
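As an example of the kind of check involved, the sketch below (assuming SGD with momentum and masks keyed by parameter name; names are illustrative) zeroes the gradient and momentum entries corresponding to pruned weights before the update. For baselines that specify dense buffers, this masking is simply skipped.

```python
def mask_optimizer_state(optimizer, masks, param_to_name):
    # Enforce sparse gradient and momentum buffers for masked-SGD baselines by
    # zeroing entries that correspond to pruned weights.
    for group in optimizer.param_groups:
        for p in group['params']:
            name = param_to_name.get(p)
            if name is None or name not in masks or p.grad is None:
                continue
            p.grad.mul_(masks[name])
            state = optimizer.state[p]
            if state.get('momentum_buffer') is not None:
                state['momentum_buffer'].mul_(masks[name])
```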
Communication with original authors
We had responsive communication with the original authors, which helped clarify a few implementation and evaluation details, particularly regarding the FLOP counting procedure.
Paper Url: https://openreview.net/forum?id=NqEF-9d38i6&referrer=%5BML%20Reproducibility%20Challenge%202020%5D(%2Fgroup%3Fid%3DML_Reproducibility_Challenge%2F2020)