Sampling-Based Techniques for Training Deep Neural Networks with Limited Computational Resources: A Scalability Evaluation
Abstract: Deep neural networks are superior to shallow networks at learning complex representations, and there is fast-growing interest in deploying them in large-scale settings. Training a neural network is already time-consuming, and a deep architecture only aggravates the issue. The training process consists mostly of matrix operations, among which matrix multiplication is the bottleneck. Several sampling-based techniques have been proposed to speed up the training of deep neural networks by approximating these matrix products. They fall into two categories: (i) sampling a subset of nodes in every hidden layer to be active at each iteration, and (ii) sampling a subset of nodes from the previous layer and approximating the current layer's activations using only the edges from the sampled nodes. In both cases, the matrix products are computed using only the selected samples. In this paper, we evaluate the scalability of these approaches on CPU machines with limited computational resources. Connecting the two research directions as special cases of approximate matrix multiplication in the context of neural networks, we provide a negative theoretical analysis showing that feedforward approximation is an obstacle to scalability. We conduct comprehensive experimental evaluations that demonstrate the most pressing challenges and limitations of the studied approaches. We observe that the hashing-based node selection method does not scale to a large number of layers, confirming our theoretical analysis. Finally, we identify directions for future research.
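To make the second category concrete, below is a minimal sketch of approximating a layer's forward pass h = W a by sampling a subset of previous-layer nodes and rescaling the corresponding columns of the weight matrix. The function name `sampled_forward`, the sample size `k`, and the norm-proportional importance-sampling distribution (in the style of randomized matrix multiplication) are illustrative assumptions, not the exact procedure evaluated in the paper.

```python
import numpy as np

def sampled_forward(W, a, k, rng):
    """Approximate h = W @ a using only k sampled previous-layer nodes.

    Assumption: nodes are drawn with probability proportional to
    |a_i| * ||W[:, i]||, and each sampled term is rescaled by 1 / (k * p_i)
    so that the estimate is unbiased.
    """
    scores = np.abs(a) * np.linalg.norm(W, axis=0)
    if scores.sum() == 0.0:
        return np.zeros(W.shape[0])
    p = scores / scores.sum()
    idx = rng.choice(len(a), size=k, replace=True, p=p)
    # Sum over sampled columns only; the full matrix product is never formed.
    return W[:, idx] @ (a[idx] / (k * p[idx]))

# Usage example (hypothetical layer sizes):
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 1024))
a = rng.standard_normal(1024)
h_exact = W @ a
h_approx = sampled_forward(W, a, k=128, rng=rng)
```

The first category in the abstract (activating only a sampled subset of nodes in each hidden layer) can be viewed the same way: it restricts the rows of W that are computed rather than the columns, so both strategies amount to approximating the same matrix product from a subset of its terms.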
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Section 2 "Potential Benefits of Training DNN on CPU Machine" has been added.
Figures 5 and 6 have been updated.
New references have been added to the Introduction, Related Work, and Section 6.1: (Vanhoucke et al., 2011), (Yao et al., 2023), (He et al., 2018), (Han et al., 2016), (Marinò et al., 2023), (Gale et al., 2019), (Ma et al., 2019).
Assigned Action Editor: ~Charles_Xu1
Submission Number: 1270