Navigating Conflicting Views: Harnessing Trust for Learning

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Resolving conflicts between views is critical for improving the reliability of multi-view classification. While prior work focuses on learning consistent and informative representations across views, it often assumes perfect alignment and equal importance of all views, an assumption rarely met in real-world scenarios, where some views may express distinct information. To address this, we develop a computational trust-based discounting method that enhances the Evidential Multi-view framework by accounting for the instance-wise reliability of each view through a probability-sensitive trust mechanism. We evaluate our method on six real-world datasets using Top-1 Accuracy, Fleiss’ Kappa, and a new metric, Multi-View Agreement with Ground Truth, to assess prediction reliability. We also assess how well uncertainty indicates prediction correctness via AUROC. Additionally, we test the scalability of our method through end-to-end training on a large-scale dataset. The experimental results show that computational trust can effectively resolve conflicts, paving the way for more reliable multi-view classification models in real-world applications. Code is available at: https://github.com/OverfitFlow/Trust4Conflict
Lay Summary: (1) Problem: In real-world settings, decisions often need to be made based on multiple perspectives, or “views”, such as in healthcare diagnostics or autonomous driving. However, these views can sometimes conflict, and current multi-view classification methods typically assume that all views are equally reliable, which isn’t always true. (2) Solution: We introduce a computational trust-based method that learns to weigh each view based on how reliable its predictions are. By modeling trust as a dynamic, learnable value for each instance-view pair, our method adjusts how much influence each view has when fusing their predictions. This mechanism is built on a principled foundation using Subjective Logic and enhances existing Evidential Multi-view classification models. (3) Impact: Our approach not only improves classification accuracy but also boosts the consistency among views, especially when they disagree. Tested on six benchmark datasets and a large-scale real-world dataset, it consistently outperformed existing baselines. This has important implications for safety-critical domains, providing more reliable decision-making.
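To make the mechanism concrete, here is a minimal sketch of the core idea, assuming the standard subjective-logic formulation used in evidential multi-view classification: each view's opinion (per-class belief masses plus an uncertainty mass) is discounted by a trust score before fusion, and the discounted opinions are combined with the reduced Dempster combination rule. The function names, the fixed trust value t, and the toy numbers are illustrative assumptions, not the repository's actual API; in the paper, trust is learned per instance-view pair rather than fixed.

import numpy as np

def discount_opinion(b, u, t):
    # Subjective-logic trust discounting: scale the belief masses by
    # trust t in [0, 1] and move the removed mass into uncertainty,
    # so b_d.sum() + u_d still equals 1.
    b_d = t * b
    u_d = 1.0 - t * (1.0 - u)
    return b_d, u_d

def fuse_opinions(b1, u1, b2, u2):
    # Reduced Dempster combination of two subjective-logic opinions,
    # as commonly used in evidential multi-view classification.
    # Conflict mass: sum over mismatched class pairs, sum_{i != j} b1_i * b2_j.
    conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)
    scale = 1.0 / (1.0 - conflict)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    u = scale * (u1 * u2)
    return b, u

# Toy example: two conflicting 3-class views; view 2 is given low trust.
b1, u1 = np.array([0.7, 0.1, 0.0]), 0.2   # view 1 favors class 0
b2, u2 = np.array([0.0, 0.8, 0.0]), 0.2   # view 2 favors class 1
b2_d, u2_d = discount_opinion(b2, u2, t=0.3)  # hypothetical trust score
b, u = fuse_opinions(b1, u1, b2_d, u2_d)
print(b, u)  # fused opinion now leans toward class 0

The design point the sketch illustrates: discounting does not simply down-weight a distrusted view's vote, it converts that view's belief mass into uncertainty, so under Dempster-style fusion an unreliable view loses the ability to generate conflict with its more trustworthy counterparts.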
Link To Code: https://github.com/OverfitFlow/Trust4Conflict
Primary Area: General Machine Learning
Keywords: Multi-view Learning, Conflict Multi-view Learning, Reliable Multi-view Learning
Submission Number: 11511