H-Tuning: Toward Low-Cost and Efficient ECG-based Cardiovascular Disease Detection with Pre-Trained Models
TL;DR: This study enables the use of pre-trained models with high computational efficiency and robust performance, exploring a path toward low-cost and efficient CVDs detection.
Abstract: Fine-tuning large-scale pre-trained models provides an effective solution to the label scarcity problem in cardiovascular diseases (CVDs) detection using electrocardiograms (ECG). However, as pre-trained models scale up, the computational costs of fine-tuning and inference become prohibitive on the resource-constrained devices deployed for clinical applications. Moreover, maintaining model performance under tight computational budgets remains a significant challenge, and a comprehensive study that addresses both problems in a joint framework is still lacking. Here, we propose a holistic method (H-Tuning) for low-cost and efficient fine-tuning of pre-trained models on downstream datasets. The inference costs of the models fine-tuned with H-Tuning are then further reduced using a knowledge distillation technique. Experiments on four ECG datasets demonstrate that H-Tuning reduces GPU memory consumption during fine-tuning by 6.34 times while achieving CVDs detection performance comparable to standard fine-tuning. With the knowledge distillation technique, model inference latency and memory consumption are reduced by 4.52 times and 19.83 times, respectively. As such, the proposed joint framework allows pre-trained models to be used with high computational efficiency and robust performance, exploring a path toward low-cost and efficient CVDs detection. Code is available at https://github.com/KAZABANA/H-Tuning
Lay Summary: With the advancement of deep learning, researchers have started to use large pre-trained and foundation models for automatic cardiovascular diseases (CVDs) detection using electrocardiograms. These models alleviate the shortage of labeled data in clinical practice and achieve high detection performance. However, the computational costs of fine-tuning and deploying them are too high for the resource-constrained devices used in clinical applications. This presents a challenge: how can we leverage these foundation models without exceeding the limited computational resources available in healthcare environments?
We developed H-Tuning, a method that efficiently fine-tunes pre-trained models, drastically reducing the required GPU memory by 6.34 times while preserving detection accuracy. Additionally, we employed knowledge distillation to further minimize inference costs, cutting latency and memory usage by roughly 4.5 times and 19.8 times, respectively.
Our research enables the deployment of large foundation models with high computational efficiency, exploring a path toward low-cost and efficient CVDs detection.
Link To Code: https://github.com/KAZABANA/H-Tuning
Primary Area: Applications->Health / Medicine
Keywords: Electrocardiogram, Pre-trained models, Fine-tuning, Cardiovascular diseases
Submission Number: 14054