Comparing Quantization Methods for On-Edge ECG Interpretation using Multi-Task CNN

Published: 19 Aug 2025, Last Modified: 24 Sept 2025 · BSN 2025 · CC BY 4.0
Confirmation: I have read and agree with the IEEE BSN 2025 conference submission's policy on behalf of myself and my co-authors.
Keywords: ECG interpretation, Embedded Machine Learning, Multi-label Classification, Electrocardiogram, LiteRT
Abstract: Wearable devices have begun to incorporate machine learning models to assist with the detection of various cardiac conditions. In this work, we developed a multi-task convolutional neural network to simultaneously predict 75 diagnostic, form, and rhythm statements from 10-s duration, 12-lead ECGs. The model, originally developed offline in TensorFlow, was converted to the FlatBuffers format for on-edge AI using the LiteRT toolset. Post-training quantization was used to compare different numerical precisions in terms of model size, model performance, and inference time. Classifier performance for the 12-lead configuration was consistent between the 32-bit floating point model ("float32" baseline), the dynamic range quantized model (DR), and the float16 model (p=0.92), with an average macro AUC score of 0.893 with all output statements considered. A large degradation in classification performance was observed for 8-bit integer quantization (int8), which yielded an average macro AUC score of 0.513 for the 12-lead configuration across all statements. To address class imbalance, minority classes were removed. Reducing the number of statements to 41 classes increased the macro F1 score by an average of 72.6% (to a mean value of approximately 0.358) for the float32, float16, and DR quantized models.
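For readers unfamiliar with the post-training quantization workflow described in the abstract, the sketch below shows how the three quantized variants (dynamic range, float16, and full int8) can be produced from a TensorFlow model with the LiteRT/TensorFlow Lite converter. This is a minimal illustration, not the authors' code: the SavedModel path and the `calibration_ecgs` iterable are placeholders, and the actual multi-task CNN and calibration data are those described in the paper.

```python
import tensorflow as tf

SAVED_MODEL_DIR = "path/to/saved_model"  # placeholder for the exported multi-task CNN

# Dynamic range (DR) quantization: weights stored as int8, activations kept in float.
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
dr_model = converter.convert()

# Float16 quantization: weights stored as 16-bit floats.
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
fp16_model = converter.convert()

# Full integer (int8) quantization: requires a representative dataset for calibration.
def representative_data_gen():
    # calibration_ecgs is a hypothetical iterable of 10-s, 12-lead ECG input tensors
    for ecg in calibration_ecgs:
        yield [ecg]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
int8_model = converter.convert()

# Each converted model is a FlatBuffers byte string that can be written to a .tflite file
# and deployed with the LiteRT interpreter on the target edge device.
```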
Track: 3. Signal processing, machine learning, deep learning, and decision-support algorithms for digital and computational health
NominateReviewer: Kiriaki Rajotte, kjrajotte@wpi.edu
Submission Number: 28