Trusted Multi-View Classification with Expert Knowledge Constraints

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
Abstract: Multi-view classification (MVC) based on the Dempster-Shafer theory has gained significant recognition for its reliability in safety-critical applications. However, existing methods predominantly focus on providing confidence levels for decision outcomes without explaining the reasoning behind these decisions. Moreover, the reliance on first-order statistical magnitudes of belief masses often inadequately captures the intrinsic uncertainty within the evidence. To address these limitations, we propose a novel framework termed Trusted Multi-view Classification Constrained with Expert Knowledge (TMCEK). TMCEK integrates expert knowledge to enhance feature-level interpretability and introduces a distribution-aware subjective opinion mechanism to derive more reliable and realistic confidence estimates. The theoretical superiority of the proposed uncertainty measure over conventional approaches is rigorously established. Extensive experiments conducted on three multi-view datasets for sleep stage classification demonstrate that TMCEK achieves state-of-the-art performance while offering interpretability at both the feature and decision levels. These results position TMCEK as a robust and interpretable solution for MVC in safety-critical domains. The code is available at https://github.com/jie019/TMCEK_ICML2025.
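For context, the "first-order statistical magnitudes of belief masses" the abstract critiques refers to the standard Dirichlet-based subjective opinion used in conventional trusted MVC, where uncertainty depends only on the total amount of evidence. The minimal sketch below shows that conventional baseline (not the paper's distribution-aware variant); the function name and the specific evidence vectors are illustrative assumptions.

```python
import numpy as np

def subjective_opinion(evidence):
    """Conventional Dirichlet-based subjective opinion (baseline, not TMCEK).

    evidence: non-negative per-class evidence vector of length K.
    Returns (belief, uncertainty) with belief.sum() + uncertainty == 1.
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0  # Dirichlet parameters
    S = alpha.sum()                                   # Dirichlet strength
    K = len(alpha)
    belief = (alpha - 1.0) / S                        # per-class belief mass
    uncertainty = K / S                               # vacuity: shrinks as total evidence grows
    return belief, uncertainty

# Uncertainty here depends only on total evidence magnitude, so these two
# very differently shaped evidence vectors get the same uncertainty --
# the limitation a distribution-aware opinion is designed to address.
b1, u1 = subjective_opinion([12, 0, 0])  # concentrated evidence
b2, u2 = subjective_opinion([4, 4, 4])   # evenly spread evidence
```

Note that `u1 == u2` even though the first vector strongly supports one class and the second is maximally ambiguous; a distribution-aware measure distinguishes these cases.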
Lay Summary: Artificial intelligence (AI) is increasingly used in high-stakes areas like healthcare, where making reliable and understandable decisions is critical. One useful AI approach, called trusted multi-view classification, combines different types of data, such as signals from multiple brain sensors, to make reliable decisions like identifying a person's sleep stage. However, current methods mainly focus on showing how confident they are in their decisions, without explaining why they made them. This makes it difficult for users, like doctors or patients, to understand or trust the results. Additionally, these methods measure uncertainty based only on the amount of evidence, which may lead to inaccurate uncertainty estimates. To address these problems, we developed a new framework called TMCEK. It has two key innovations. First, by guiding the model to focus on meaningful patterns, such as certain shapes in the signals, it becomes easier to understand why the system made a particular decision. Second, it improves how uncertainty is measured by considering not just how much evidence is present, but also how that evidence is distributed, resulting in more trustworthy confidence scores. We tested TMCEK on three sleep-related datasets, and it outperformed existing methods in both accuracy and interpretability.
Link To Code: https://github.com/jie019/TMCEK_ICML2025
Primary Area: General Machine Learning->Supervised Learning
Keywords: multi-view classification, trusted multi-view classification, trusted fusion, distribution-aware subjective opinion
Submission Number: 850