Abstract: Deep neural networks (DNNs) are costly to train. Pruning, which reduces model complexity by zeroing out DNN elements that contribute little or nothing to efficacy at a given task, has shown promise in reducing training costs. This paper presents a novel method for early pruning of DNN elements (e.g., neurons or convolutional filters) during the training process while minimizing the loss in model performance. To achieve this, we model the efficacy of DNN elements in a Bayesian manner, conditioned on efficacy data collected during training, and prune elements predicted to have low efficacy at the completion of training. Empirical evaluations show that the proposed Bayesian early pruning improves the computational efficiency of DNN training while preserving model performance better than the other pruning approaches tested.
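The abstract does not specify the authors' exact probabilistic model, but the core idea of predicting end-of-training efficacy from partial training data and pruning the weakest elements can be illustrated with a minimal sketch. The example below is a hypothetical stand-in: it fits a simple conjugate Bayesian linear regression to each element's (simulated) efficacy trace, extrapolates the posterior predictive mean to a final training step, and prunes the elements with the lowest predictions. All names, the linear trend model, and the prior/noise variances are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: efficacy scores (e.g., a filter-importance measure) for
# 8 DNN elements, recorded at training steps 0..19. Some elements trend
# upward, others stagnate.
steps = np.arange(20, dtype=float)
true_slopes = rng.normal(0.05, 0.05, size=8)
traces = 0.5 + true_slopes[:, None] * steps + rng.normal(0, 0.02, size=(8, 20))

def predict_final_efficacy(t, y, t_final, tau2=1.0, sigma2=0.02**2):
    """Bayesian linear regression y = a + b*t with N(0, tau2) priors on (a, b)
    and observation noise variance sigma2; returns the posterior predictive
    mean of the efficacy at step t_final."""
    X = np.column_stack([np.ones_like(t), t])
    # Posterior precision and mean of the weights (conjugate Gaussian model).
    A = X.T @ X / sigma2 + np.eye(2) / tau2
    mean_w = np.linalg.solve(A, X.T @ y / sigma2)
    return np.array([1.0, t_final]) @ mean_w

t_final = 100.0  # assumed total number of training steps
preds = np.array([predict_final_efficacy(steps, tr, t_final) for tr in traces])

# Prune the quartile of elements with the lowest predicted final efficacy.
threshold = np.quantile(preds, 0.25)
keep_mask = preds > threshold
print("predicted final efficacy:", np.round(preds, 2))
print("kept elements:", np.flatnonzero(keep_mask))
```

In this toy setup, pruning decisions are made after only 20 of 100 steps, which is the source of the training-cost savings the abstract describes: pruned elements need not be updated for the remaining steps.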