A Survey of Sparse-Learning Methods for Deep Neural Networks

WI 2018
Abstract: Deep neural networks (DNNs) have drawn considerable attention in recent years as a result of their remarkable performance on many visual and speech recognition tasks. As the scale of the tasks to be solved grows, the networks used become wider and deeper, requiring millions or even billions of parameters. Deep and wide networks with large numbers of parameters bring many problems, including high memory requirements, high computation cost, and overfitting, which severely hinder the application of DNNs in practice. A natural idea is therefore to train sparse networks with fewer parameters and floating-point operations while maintaining comparable performance. Over the past few years, a large body of research has been devoted to this area. In this paper, we survey sparsity-promoting techniques for DNNs proposed in recent years. These approaches are roughly divided into three categories: pruning, randomly reducing complexity, and optimizing with sparse regularizers. Pruning techniques are introduced first, and the others are described in the following sections. For each category, we present representative approaches together with their strengths and drawbacks. Finally, we discuss the relationships among these three categories of methods.
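
As a loose illustration of the first and third categories the abstract names (pruning and sparse regularization), the sketch below shows magnitude-based weight pruning and an L1 penalty added to the training loss, assuming PyTorch. The model, sparsity level, and regularization strength are hypothetical choices for the example, not specifics from the surveyed papers.

```python
# Illustrative sketch (not from the paper): magnitude pruning and an
# L1 (sparse) regularizer, two of the sparsity-promoting ideas surveyed.
import torch
import torch.nn as nn

# Hypothetical small classifier used only for demonstration.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

def magnitude_prune(model, sparsity=0.9):
    """Pruning: zero out the smallest-magnitude weights so that roughly
    `sparsity` fraction of each weight matrix is removed."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                k = int(sparsity * w.numel())
                if k == 0:
                    continue
                # k-th smallest absolute value serves as the pruning threshold.
                threshold = w.abs().flatten().kthvalue(k).values
                mask = (w.abs() > threshold).float()
                w.mul_(mask)

def l1_penalty(model, lam=1e-4):
    """Sparse regularizer: L1 norm of all parameters, added to the task
    loss to push weights toward exact zeros during training."""
    return lam * sum(p.abs().sum() for p in model.parameters())

# Usage during one training step (data is random placeholder input).
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y) + l1_penalty(model)
loss.backward()

# After (or during) training, prune the small weights and fine-tune.
magnitude_prune(model, sparsity=0.9)
```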