Sparsity Learning in Deep Neural Networks

Sep 25, 2019 · ICLR 2020 Conference Withdrawn Submission
  • Keywords: Neural Networks, Deep Learning, Sparsity, Guided Attention
  • TL;DR: We propose a novel method based on guided attention to enforce sparsity in deep neural networks.
  • Abstract: The main goal of network pruning is to impose sparsity on a neural network by increasing the number of zero-valued parameters, in order to reduce the architecture size and achieve computational speedup.
  • Code: https://drive.google.com/open?id=1GuS7nfgKUiWbnJKcGU_JIMYTfk4FfPSr
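The abstract's notion of imposing sparsity by increasing the number of zero-valued parameters can be illustrated with a generic magnitude-pruning sketch (this is a common baseline, not the paper's guided-attention method; the function name and threshold scheme are illustrative assumptions):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of a weight array.

    This is a minimal sketch of magnitude-based pruning, shown only to
    illustrate what 'increasing the number of zero-valued parameters'
    means; it is NOT the guided-attention approach of the submission.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)
```

For example, pruning a 10-element weight vector at 50% sparsity zeroes the five smallest-magnitude entries, shrinking the effective parameter count while leaving the largest weights intact.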