Abstract: This paper presents a method for constructing neural networks that are manageable in both architecture and parameter count, with minimal accuracy loss, so that models remain feasible to deploy on small-footprint devices. We propose a practical dense-to-sparse learning method that combines architecture rescaling with channel sparsification. Starting from a dense detector with 7.2M parameters, we obtain promising sparse models with compression rates of 81x-300x at an accuracy drop of 5.03%-21.26%. Our experimental results show that this approach can produce a wide range of sparse-network variants for object detection.
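The abstract does not detail the sparsification criterion, but channel sparsification is commonly done by ranking output channels by filter magnitude and zeroing the least important ones. The following is a minimal sketch of that general idea (magnitude-based channel pruning with NumPy); the array shapes, the L1-norm criterion, and the keep ratio are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Hypothetical sketch: magnitude-based channel sparsification.
# weight: conv kernel of shape (out_channels, in_channels, kH, kW).
rng = np.random.default_rng(0)
weight = rng.normal(size=(64, 32, 3, 3))

# Per-output-channel importance: L1 norm of each filter.
importance = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)

# Keep only the top-k most important channels; zero out the rest.
keep_ratio = 0.25  # assumed ratio for illustration
k = int(weight.shape[0] * keep_ratio)
keep = np.argsort(importance)[-k:]
mask = np.zeros(weight.shape[0], dtype=bool)
mask[keep] = True
sparse_weight = weight * mask[:, None, None, None]

# Compression from channel removal alone (the paper's 81x-300x rates
# would additionally reflect architecture rescaling).
compression = weight.shape[0] / mask.sum()
print(f"kept {mask.sum()} of {weight.shape[0]} channels "
      f"({compression:.0f}x channel compression)")
```

In practice the zeroed channels would be physically removed from the layer (and the matching input channels of the next layer), which is what yields the actual parameter reduction.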