Abstract: Deep learning techniques such as deep neural networks (DNNs) have been applied successfully to many viral classification problems arising in metagenomics, diagnosis of viral infections, pharmacogenomics, phylogenetic analysis, and other areas. However, deep learning algorithms require a large number of mathematical operations, and these computations can become a bottleneck when vast numbers of virus sequences must be processed quickly. Currently, most work in this area applies basic DNNs to viral classification without optimizing for computational efficiency. This paper proposes a novel training strategy that simultaneously minimizes both pruning and quantization losses, producing compressed models with reduced computational complexity. When training a compressed convolutional neural network (CNN), the scheme applies weight quantization followed by pruning in each training iteration, rather than pruning followed by quantization. The proposed scheme has been applied to train compressed models for efficient classification of 1600 sequences from four types of viruses associated with three families and one realm. The compressed models achieve a substantial reduction in DNN weights (77%) and operations (58%) while maintaining high classification accuracy. These results show that the proposed training regime of weight quantization followed by weight pruning in each training iteration is superior to the conventional approach of weight pruning epochs followed by weight quantization epochs.
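The paper's implementation is not shown here; a minimal sketch of the described per-iteration quantize-then-prune order might look as follows in PyTorch. The quantization and pruning details are assumptions for illustration (uniform symmetric quantization, global magnitude pruning), and the `bits` and `sparsity` parameters are hypothetical, not values taken from the paper.

```python
# Sketch (not the authors' code) of the proposed training order:
# quantize weights, then prune them, in every training iteration.
import torch

def quantize_(w, bits=8):
    # Uniform symmetric quantization of a weight tensor, in place.
    # bits=8 is an illustrative choice, not from the paper.
    scale = w.abs().max() / (2 ** (bits - 1) - 1) + 1e-12
    w.div_(scale).round_().mul_(scale)

def prune_(w, sparsity=0.77):
    # Magnitude pruning: zero out the smallest-magnitude weights.
    # sparsity=0.77 mirrors the 77% weight reduction reported above.
    k = int(w.numel() * sparsity)
    if k > 0:
        threshold = w.abs().flatten().kthvalue(k).values
        w.mul_((w.abs() > threshold).float())

def train_step(model, x, y, optimizer, loss_fn):
    # Proposed per-iteration order: quantization first, then pruning,
    # applied to the weights before the forward/backward pass, so both
    # compression losses are minimized jointly during training rather
    # than in separate pruning and quantization phases.
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:  # conv/linear weight tensors; skip biases
                quantize_(p.data)
                prune_(p.data)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```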