Sparsity Learning in Deep Neural Networks

25 Sep 2019 (modified: 24 Dec 2019) · ICLR 2020 Conference Withdrawn Submission
  • Keywords: Neural Networks, Deep Learning, Sparsity, Guided Attention
  • TL;DR: Proposing a novel method based on guided attention to enforce sparsity in deep neural networks.
  • Abstract: The main goal of network pruning is to impose sparsity on a neural network by increasing the number of zero-valued parameters, thereby reducing the architecture size and achieving computational speedup.
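As context for the pruning goal stated in the abstract, here is a minimal sketch of magnitude-based pruning — a common baseline for imposing sparsity by zeroing the smallest-magnitude weights. This is an illustrative example, not the guided-attention method proposed in the paper; the function name and threshold scheme are assumptions.

```python
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with the fraction `sparsity` of
    smallest-magnitude entries set to zero (hypothetical helper)."""
    assert 0.0 <= sparsity < 1.0
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold.
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]
    # Keep only weights whose magnitude exceeds the threshold.
    mask = np.abs(weights) > threshold
    return weights * mask


rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)
print(float(np.mean(pruned == 0.0)))  # fraction of zeroed parameters
```

With continuous random weights (no ties in magnitude), exactly half of the 16 entries are zeroed, illustrating how sparsity is "imposed" by increasing the number of zero-valued parameters.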