Keywords: Uncertainty Quantification, Calibration of ML Models, Audio Classification, Disease Diagnosis, AI for Healthcare
TL;DR: An independent model that measures confidence in predictions is necessary for critical tasks such as healthcare diagnosis
Abstract: Deep learning excels at analyzing multi-modal signals for healthcare diagnostics but lacks the ability to quantify confidence in its predictions, which can lead to overconfident, erroneous diagnoses. In this work, we propose to predict the model output independently and estimate the corresponding uncertainty. We present a unified audio-driven disease detection framework incorporating uncertainty quantification (UQ). This is achieved using a Dirichlet density approximation for model prediction and independent kernel distance learning in the latent feature space for UQ. This approach requires minimal modifications to existing audio encoder architectures and is extremely parameter-efficient compared to k-ensemble models. The uncertainty-aware model improves prediction reliability by producing confidence scores that closely match the accuracy values. Evaluations using the largest publicly available respiratory disease datasets demonstrate the advantage of the proposed framework in accuracy, training time, and inference time over ensemble and dropout methods.
The proposed model improves speech and audio analysis for medical diagnosis by identifying and calibrating uncertainties, enabling better decision-making and risk assessment. This is shown by high uncertainty scores at low model accuracy. The proposed model contributes to speech technologies for healthcare by enhancing model transparency and reliability.
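To make the Dirichlet-based confidence idea concrete, the sketch below illustrates the standard evidential formulation in which a network head outputs non-negative per-class evidence that parameterizes a Dirichlet density; the function name and numbers are illustrative, not the authors' exact implementation:

```python
import numpy as np

def dirichlet_confidence(evidence):
    """Toy Dirichlet-based prediction and uncertainty (illustrative only).

    evidence: non-negative per-class evidence values, e.g. from an
    audio encoder's output head.
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0  # Dirichlet parameters
    strength = alpha.sum()                           # total evidence + K
    probs = alpha / strength                         # expected class probabilities
    k = len(alpha)
    uncertainty = k / strength                       # high when evidence is scarce
    return probs, uncertainty

# Strong evidence for class 0 -> confident prediction, low uncertainty
p_conf, u_conf = dirichlet_confidence([10.0, 0.0, 0.0])
# Little evidence overall -> near-uniform prediction, high uncertainty
p_unc, u_unc = dirichlet_confidence([0.2, 0.1, 0.1])
```

Here low total evidence yields a near-uniform predictive distribution and a high uncertainty score, matching the behavior described above (high uncertainty where accuracy is low).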
Submission Number: 25