Sparsity Learning in Deep Neural Networks

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Keywords: Neural Networks, Deep Learning, Sparsity, Guided Attention
TL;DR: We propose a novel method based on guided attention to enforce sparsity in deep neural networks.
Abstract: The main goal of network pruning is to impose sparsity on a neural network by increasing the number of zero-valued parameters, in order to reduce the architecture size and achieve computational speedup.
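To make the pruning objective concrete, the following is a minimal sketch of magnitude-based pruning, a common baseline for imposing sparsity: weights whose absolute value falls below a data-derived threshold are set to zero. This is an illustrative example only, not the guided-attention method the paper proposes; the function name and target-sparsity parameter are assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero.
    (Illustrative baseline, not the paper's method.)"""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value across all weights
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    # Keep only weights strictly above the threshold
    mask = np.abs(weights) > threshold
    return weights * mask

# Usage: prune a random 64x64 weight matrix to ~90% sparsity
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"fraction zero: {np.mean(pruned == 0):.2f}")
```

Pruning methods differ mainly in how the kept/zeroed mask is chosen; the paper's contribution is to guide that choice with an attention mechanism rather than raw weight magnitudes.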
Code: https://drive.google.com/open?id=1GuS7nfgKUiWbnJKcGU_JIMYTfk4FfPSr