Abstract: Medical image analysis faces two critical challenges: scarcity of labeled data and lack of model interpretability, both of which hinder clinical AI deployment. Few-shot learning (FSL) addresses data limitations but lacks transparency in its predictions. Active learning (AL) methods optimize data acquisition but overlook the interpretability of acquired samples. We propose a dual-framework solution: Expert-Guided Explainable Few-Shot Learning (EG-FSL) and Explainability-Guided Active Learning (EG-AL). EG-FSL integrates radiologist-defined regions of interest as spatial supervision via a Grad-CAM-based Dice loss, jointly optimized with prototypical classification for interpretable few-shot learning. EG-AL introduces iterative sample acquisition that prioritizes both predictive uncertainty and attention misalignment, creating a closed-loop framework in which explainability guides training and sample selection synergistically. We evaluate our framework on BraTS (MRI), VinDr-CXR (chest X-ray), and SIIM-COVID-19 (chest X-ray). EG-FSL achieves 92% accuracy on BraTS, 76% on VinDr-CXR, and 62% on SIIM-COVID-19, consistently outperforming non-guided baselines across all datasets. Under severe data constraints, EG-AL reaches 76% accuracy with only 680 samples versus 57% for random sampling. Grad-CAM visualizations show that guided models focus on diagnostically relevant regions, and generalization to breast ultrasound confirms cross-modality applicability.
DOI: 10.1109/jbhi.2025.3650334
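To make the two objectives named in the abstract concrete, the following is a minimal sketch of how a Grad-CAM/ROI Dice alignment term and an uncertainty-plus-misalignment acquisition score could be computed. The function names, the weighting hyperparameters `lam` and `alpha`, the use of entropy as the uncertainty measure, and the assumption that a reference ROI is available for candidate samples are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_dice_loss(cam, roi_mask, eps=1e-6):
    """Soft Dice loss between a Grad-CAM map and a binary radiologist ROI mask
    (both HxW). Lower values indicate better spatial alignment."""
    cam = (cam - cam.min()) / (cam.max() - cam.min() + eps)  # rescale to [0, 1]
    inter = (cam * roi_mask).sum()
    dice = (2.0 * inter + eps) / (cam.sum() + roi_mask.sum() + eps)
    return 1.0 - dice

def joint_loss(proto_loss, cam, roi_mask, lam=0.5):
    """Assumed joint objective: prototypical classification loss plus a
    lambda-weighted Grad-CAM Dice alignment term (hypothetical weighting)."""
    return proto_loss + lam * soft_dice_loss(cam, roi_mask)

def acquisition_score(class_probs, cam, roi_mask, alpha=0.5):
    """Assumed EG-AL acquisition score: predictive uncertainty (entropy of the
    class posterior) combined with attention misalignment (Dice loss)."""
    entropy = -np.sum(class_probs * np.log(class_probs + 1e-12))
    misalignment = soft_dice_loss(cam, roi_mask)
    return alpha * entropy + (1.0 - alpha) * misalignment

# Toy example: a 4x4 attention map, an ROI mask, and a 3-class posterior.
cam = np.array([[0.1, 0.2, 0.1, 0.0],
                [0.2, 0.9, 0.8, 0.1],
                [0.1, 0.8, 0.7, 0.1],
                [0.0, 0.1, 0.1, 0.0]])
roi = np.zeros((4, 4))
roi[1:3, 1:3] = 1.0
probs = np.array([0.6, 0.3, 0.1])
print(joint_loss(0.42, cam, roi), acquisition_score(probs, cam, roi))
```

In this reading, samples with high acquisition scores (uncertain predictions whose attention strays from expert-defined regions) would be prioritized at each EG-AL iteration, which is one plausible way to realize the closed loop described above.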