Extremely Sparse Deep Learning Using Inception Modules with Dropfilters

ICDAR 2017
Abstract: This paper reports a successful application of a highly sparse convolutional network model to offline handwritten character recognition. The model uses a spatial dropout technique, named dropfilters, to sparsify the inception modules in GoogLeNet, resulting in extremely sparse deep networks. The model is industry-deployable in terms of model size and performance; it was trained on a handwritten dataset of 520 classes and 260,000 Hangul (Korean) characters for tablet PCs and smartphones. The proposed model obtained a significant improvement in recognition performance while using far fewer parameters than LeNet, a classical sparse convolutional network. We also evaluated the dropfiltered inception networks on the handwritten Hangul dataset and achieved 3.275% higher recognition accuracy with approximately three times fewer parameters than a deep network based on the LeNet structure without dropfilters.
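
The paper itself is not reproduced here, so the following is only a minimal sketch of what a dropfiltered inception module could look like, assuming a PyTorch-style implementation in which the dropfilter is realized as spatial dropout (nn.Dropout2d) that zeroes entire filters. The branch widths, drop probability, and class name are illustrative assumptions, not values taken from the paper.

    # Minimal sketch of a dropfiltered inception module (assumed PyTorch-style
    # implementation; branch widths and drop probability are illustrative).
    import torch
    import torch.nn as nn


    class DropfilteredInception(nn.Module):
        def __init__(self, in_ch, drop_p=0.3):
            super().__init__()
            # Standard inception-style branches: 1x1, 3x3, 5x5, and pooling.
            self.b1 = nn.Conv2d(in_ch, 16, kernel_size=1)
            self.b3 = nn.Sequential(
                nn.Conv2d(in_ch, 16, kernel_size=1),
                nn.Conv2d(16, 24, kernel_size=3, padding=1),
            )
            self.b5 = nn.Sequential(
                nn.Conv2d(in_ch, 8, kernel_size=1),
                nn.Conv2d(8, 8, kernel_size=5, padding=2),
            )
            self.bp = nn.Sequential(
                nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
                nn.Conv2d(in_ch, 8, kernel_size=1),
            )
            # Spatial dropout zeroes whole output channels (filters) at random
            # during training, sparsifying the module's filter responses.
            self.dropfilter = nn.Dropout2d(p=drop_p)

        def forward(self, x):
            out = torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
            return self.dropfilter(out)


    if __name__ == "__main__":
        x = torch.randn(2, 32, 28, 28)        # batch of 28x28 feature maps
        module = DropfilteredInception(in_ch=32)
        print(module(x).shape)                # torch.Size([2, 56, 28, 28])

In this reading, dropping whole filters (rather than individual activations, as in standard dropout) is what yields filter-level sparsity in each inception module while keeping the GoogLeNet-style multi-branch structure intact.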