Abstract: CNNs have become the standard for medical image interpretation, but concerns persist about their reliability in real-world applications. CNNs can be sensitive to small variations in image quality and vulnerable to adversarial attacks, potentially leading to inaccurate diagnoses. To address these issues, we introduce a novel scale-aware adaptive feature quantization approach that enhances the robustness and reliability of CNNs, improving performance on low-quality or perturbed images. Our method uses soft codes and dynamic weighting to adaptively combine quantized features from multiple scales into a more informative final representation. Experimental results on diverse medical datasets, including chest X-rays and dermatoscopic images, demonstrate the effectiveness of our approach: it significantly outperforms both standard CNNs and state-of-the-art methods, with gains ranging from 2.6% to 11% across all metrics (AUC and F1 score), demonstrating superior performance and reliability for medical diagnosis in challenging real-world scenarios.
DOI: 10.3233/faia250883
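The abstract describes quantizing features at multiple scales with soft code assignments and fusing them with dynamic per-scale weights. The sketch below is an illustrative reconstruction of that idea, not the paper's actual implementation: the function names, the softmax-over-distances soft assignment, the `temperature` parameter, and the scalar gating scores are all assumptions, and scale features are assumed to be already projected to a common dimension.

```python
import numpy as np

def soft_quantize(features, codebook, temperature=1.0):
    """Soft-assign each feature vector to codebook entries via a softmax
    over negative squared distances; return the soft-quantized features.

    features: (n, d) array; codebook: (k, d) array.
    (Hypothetical sketch of the soft-code step described in the abstract.)
    """
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, k)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    soft_codes = np.exp(logits)
    soft_codes /= soft_codes.sum(axis=1, keepdims=True)  # rows sum to 1
    return soft_codes @ codebook                         # (n, d)

def combine_scales(scale_features, codebooks, gate_scores):
    """Quantize features at each scale, then fuse them with a
    softmax-normalized dynamic weighting over scales (assumed gating form)."""
    quantized = [soft_quantize(f, c) for f, c in zip(scale_features, codebooks)]
    w = np.exp(gate_scores - np.max(gate_scores))
    w /= w.sum()
    return sum(wi * q for wi, q in zip(w, quantized))
```

In practice the codebooks and gating scores would be learned end-to-end with the CNN; here they are fixed arrays purely to show the data flow from per-scale soft quantization to the weighted fusion.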