Transfer Learning and Quantization for Efficient AP vs. LA X‑Ray View Classification on an Edge Device

MIDL 2025 Short Papers Submission, 12 Apr 2025 (modified: 12 Apr 2025), CC BY 4.0
Keywords: Transfer Learning, Model Quantization, Edge Computing, Medical Image Classification
TL;DR: We combine transfer learning and model quantization to achieve real-time, high-accuracy AP vs. LA X-ray classification on a resource-limited edge device.
Abstract: In this paper, we present a framework for classifying X‑ray images as either anterior-posterior (AP) or lateral (LA) by combining transfer learning with model quantization to optimize deep learning models for deployment on an edge device. We perform transfer learning on a pre-trained MobileNetV2 using a dataset of 800 images (400 AP, 400 LA). We employ 5‑fold cross‑validation, where each fold uses 640 images for training and 160 images for testing. Subsequently, we export the model at FP32 and apply several post-training quantization techniques (FP16, dynamic-range, and Int8) to reduce the model's size and speed up inference. Across the 5 folds (160 test images per fold), quantization preserves over 98% classification accuracy while reducing the original model size from 11.3 MB to as little as 2.6 MB for the quantized variants. We compare performance on a personal computer (PC) with a graphics processing unit (GPU) and on an edge device. Although the GPU-based implementation exhibits lower warm-up and steady-state inference times, steady-state performance on the edge device remains competitive despite its higher initialization overhead. Our results show that transfer learning lets us adapt large-scale pre-trained models to a specific clinical task, and that our quantization strategies enable efficient, real-time AP/LA X‑ray view classification on an edge device, making the approach a promising solution for clinical use.
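
The sketch below is a rough illustration of the pipeline described in the abstract: MobileNetV2 transfer learning with a binary AP/LA head, followed by FP32, FP16, dynamic-range, and Int8 export variants. It is not the authors' released code; the TensorFlow/Keras and TensorFlow Lite workflow, the hyperparameters, and the helper names (`convert`, `rep_ds`) are assumptions made for illustration, since the submission does not specify its framework.

```python
# Hypothetical sketch of transfer learning + post-training quantization
# for AP vs. LA view classification. All settings are illustrative.
import tensorflow as tf

IMG_SIZE = (224, 224)

# --- Transfer learning: frozen ImageNet backbone + small binary head ---
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # fine-tune only the classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # AP (0) vs. LA (1)
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # per CV fold

# --- Export variants: FP32 baseline plus FP16, dynamic-range, Int8 ---
def convert(keras_model, mode, rep_ds=None):
    conv = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    if mode == "fp16":
        conv.optimizations = [tf.lite.Optimize.DEFAULT]
        conv.target_spec.supported_types = [tf.float16]
    elif mode == "dynamic":
        conv.optimizations = [tf.lite.Optimize.DEFAULT]
    elif mode == "int8":
        conv.optimizations = [tf.lite.Optimize.DEFAULT]
        conv.representative_dataset = rep_ds  # generator of calibration images
        conv.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        conv.inference_input_type = tf.uint8
        conv.inference_output_type = tf.uint8
    # mode == "fp32": plain float export, no optimizations
    return conv.convert()

# for mode in ("fp32", "fp16", "dynamic", "int8"):
#     open(f"mobilenetv2_{mode}.tflite", "wb").write(convert(model, mode, rep_ds))
```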
Submission Number: 98