Effective, Fast, and Memory-Efficient Compressed Multi-function Convolutional Neural Networks with Compact Inception-V4

20 Oct 2018 (modified: 05 May 2023) · NIPS 2018 Workshop CDNNRIA Blind Submission · Readers: Everyone
Abstract: Google's Inception-V4, which uses the ReLU activation function, is a very deep convolutional neural network (CNN) consisting of 4 Inception-A blocks, 7 Inception-B blocks, and 3 Inception-C blocks. To improve classification performance, reduce training and testing times, and lower power consumption and memory usage (model size), a new "Compressed Multi-function Inception-V4" (CMIV4) that uses different activation functions is created with k Inception-A blocks, m Inception-B blocks, and n Inception-C blocks, where k ∈ {1, 2, 3, 4}, m ∈ {1, 2, 3, 4, 5, 6, 7}, n ∈ {1, 2, 3}, and k + m + n < 14. For performance analysis, two datasets for two different applications (classifying brain MRI images into one of the four stages of Alzheimer's disease, and classifying a sample of CIFAR-10 data) are used to compare three CMIV4 architectures with Inception-V4 in terms of F1-score, training and testing times (related to power consumption), and memory usage (model size). Overall, simulations show that the new CMIV4 can outperform both the commonly used single-function CNN with Inception-V4 and multi-function CNNs with Inception-V4. In the future, other compressed multi-function CNNs, such as compressed multi-function ResNets and compressed multi-function DenseNets with a reduced number of convolutional blocks and different activation functions, will be developed to increase classification accuracy, reduce training and testing times, lower computational cost, and reduce memory usage (model size) for industrial applications in IoT, big data mining, green computing, etc.
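
To make the block-count parameterization concrete, the sketch below shows one way a compressed, multi-activation Inception-style stack could be set up in PyTorch. It is only an illustration under assumed settings: the simplified block internals, channel widths, stem, and the ReLU/ELU/Tanh stage activations are placeholders chosen for this example, not the paper's exact CMIV4 configuration.

# Minimal sketch (assumptions noted above): k Inception-A-like, m Inception-B-like,
# and n Inception-C-like blocks, each stage with its own activation function.
import torch
import torch.nn as nn

class SimpleInceptionBlock(nn.Module):
    """Simplified stand-in for an Inception block: two parallel conv branches
    concatenated, followed by a configurable activation."""
    def __init__(self, channels, activation):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.branch2 = nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1)
        self.act = activation

    def forward(self, x):
        out = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        return self.act(out)

class CompressedMultiFunctionInceptionV4(nn.Module):
    """Stacks k + m + n simplified Inception blocks (k + m + n < 14),
    using a different activation per stage (here ReLU / ELU / Tanh)."""
    def __init__(self, k=2, m=3, n=1, num_classes=4,
                 acts=(nn.ReLU(), nn.ELU(), nn.Tanh())):
        super().__init__()
        assert k + m + n < 14, "compressed variant must use fewer than 14 blocks"
        channels = 64  # illustrative width, not the paper's setting
        self.stem = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        blocks = []
        for count, act in zip((k, m, n), acts):
            blocks += [SimpleInceptionBlock(channels, act) for _ in range(count)]
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, num_classes),
        )

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

# Example: a (k=2, m=3, n=1) variant classifying four Alzheimer's-stage labels.
model = CompressedMultiFunctionInceptionV4(k=2, m=3, n=1, num_classes=4)
logits = model(torch.randn(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 4])

Reducing (k, m, n) from the full (4, 7, 3) shrinks both parameter count and per-image compute, which is the mechanism behind the reported savings in training/testing time and model size.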