- Keywords: Multimodal TinyCNN, Scalable respiratory symptom detection, Low-power embedded, Audio detection
- TL;DR: Tiny RespNet is a scalable multimodal CNN software-hardware architecture that takes audio recordings, speech information, and other sensor modalities as input to classify different respiratory symptoms.
- Abstract: Detection of respiratory symptoms has long been an area of extensive research aimed at expediting machine-aided diagnosis of various respiratory conditions. This paper addresses early diagnosis of respiratory conditions using low-power, scalable software and hardware built around end-to-end convolutional neural networks (CNNs). We propose Tiny RespNet, a scalable multimodal CNN software-hardware architecture that takes audio recordings, speech information, and other modalities such as patient demographic or symptom information as input to classify different respiratory symptoms. We analyze four publicly available datasets and use them as case studies for classifying respiratory symptoms. To fit the network architecture to the hardware framework, we perform windowing, low bit-width quantization, and hyperparameter optimization on the software side. Our analysis shows that detection accuracy improves by 5% when patient demographic information is included in the network architecture. The hardware prototype is designed in Verilog HDL on a Xilinx Artix-7 100T FPGA, with hardware scalability provided by a configurable number of processing engines for parallel processing. The proposed hardware implementation consumes only 245 mW and achieves an energy efficiency of 7.3 GOPS/W, which is 4.3× better than state-of-the-art accelerator implementations. In addition, the Tiny RespNet TensorFlow model is implemented on the NVIDIA Jetson TX2 SoC (CPU + GPU) and compared against TX2 single-core CPU and GPU implementations to demonstrate scalability on off-the-shelf platforms.
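The abstract mentions low bit-width quantization as one of the software-side steps used to fit the network onto the FPGA. The paper does not specify the exact scheme, so the following is only a minimal sketch of one common approach (symmetric, per-tensor uniform quantization); the function names and the 4-bit choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def quantize_symmetric(w, bits=4):
    """Sketch: map a float tensor to signed `bits`-bit integer codes.

    Assumes a single per-tensor scale (symmetric range, no zero point).
    Returns the integer codes and the scale needed to dequantize.
    """
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(w)) / qmax           # per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for accuracy evaluation
    return q.astype(np.float32) * scale

# Toy weight vector to illustrate the round trip
w = np.array([0.9, -0.35, 0.05, -0.7], dtype=np.float32)
q, s = quantize_symmetric(w, bits=4)
w_hat = dequantize(q, s)
```

In a deployment like the one described, only the integer codes and scales would be stored on the accelerator, which shrinks weight memory and lets the processing engines use narrow integer multipliers.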