Building Trust in Decision with Conformalized Multi-view Deep Classification

Published: 20 Jul 2024, Last Modified: 03 Aug 2024 · MM 2024 Oral · CC BY 4.0
Abstract: Uncertainty-aware multi-view deep classification methods have markedly improved the reliability of results amid the challenges posed by noisy multi-view data, primarily by quantifying the uncertainty of predictions. Despite their efficacy, these methods face limitations in real-world applications: 1) They provide only a single class prediction per instance, which can lead to inaccuracies on samples that are difficult to classify due to inconsistencies across views. 2) While they quantify prediction uncertainty, the magnitude of that uncertainty varies across datasets, and the lack of a standardized measure of uncertainty intensity can confuse decision-makers. To address these issues, we introduce Conformalized Multi-view Deep Classification (CMDC), a novel method that generates set-valued rather than single-valued predictions and integrates uncertain predictions as an explicit class category. Through end-to-end training, CMDC minimizes the size of prediction sets while guaranteeing that the set-valued predictions contain the true label with a user-defined probability, building trust in decision-making. The superiority of CMDC is validated through comprehensive theoretical analysis and empirical experiments on various multi-view datasets.
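The coverage guarantee described above is the hallmark of conformal prediction. As a point of reference (this is the standard split conformal baseline, not the paper's end-to-end CMDC method; the function and variable names below are illustrative), set-valued predictions with a user-defined coverage level can be sketched as:

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets from classifier softmax outputs.

    Guarantees (marginally, under exchangeability) that the returned sets
    contain the true label with probability at least 1 - alpha.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(scores, min(level, 1.0), method="higher")
    # Include every class whose nonconformity score falls within the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

# Toy usage with synthetic softmax outputs (labels set to the argmax here
# so the calibration scores are meaningful for this illustration).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = cal_probs.argmax(axis=1)
test_probs = rng.dirichlet(np.ones(3), size=5)
sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
```

Hard-to-classify inputs naturally receive larger sets under this scheme; CMDC goes further by training the network end-to-end to shrink those sets and by treating the uncertain outcome as its own class.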
Primary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: In this study, we introduce a novel multi-view deep learning approach termed conformalized multi-view deep classification (CMDC), aiming to enhance the accuracy and reliability of results amid the uncertainties and inconsistencies commonly encountered in real-world environments. Unlike traditional multi-view methods, our model generates set-valued predictions and incorporates uncertain predictions as an explicit class. This approach not only improves the precision and reliability of multi-view analysis but also provides a framework for understanding and acting upon uncertainty in a more nuanced manner. Through end-to-end training, CMDC guarantees, with statistical confidence, that its set-valued predictions contain the true label at a user-specified likelihood, while striving to reduce the size of these prediction sets. The efficacy of CMDC has been validated through comprehensive theoretical analysis and extensive empirical evaluations on various multi-view datasets, demonstrating its ability to overcome the key limitations of existing uncertainty-aware multi-view classification methods.
Supplementary Material: zip
Submission Number: 3298