Trustworthy Localized Corrections-guided Mutual Learning for Multi-View Learning

Published: 2025, Last Modified: 25 Jan 2026, ICME 2025, CC BY-SA 4.0
Abstract: Multi-view learning methods integrate multiple views to improve performance. Existing methods typically rely on highly reliable views while neglecting the potential of exploiting not entirely reliable ones. In this paper, we propose a novel Trustworthy Localized Corrections-guided Mutual Learning (TLCML) method to address this limitation in a fine-grained manner. Our method maximizes the exploitation of complementarity across views through mutual learning guided by trustworthy localized corrections. To achieve this, we first employ evidential neural networks to capture view-specific opinions, comprising evidence and uncertainty, where evidence denotes the support for each category. Next, a coupled dual-branch strategy is applied at the level of evidential elements to explore both intra- and inter-opinion correlations. This strategy facilitates mutual learning among trustworthy localized decisions and mitigates noise interference. Extensive experiments demonstrate that TLCML achieves state-of-the-art performance on five benchmarks. The code is available at https://github.com/qiuranl/papercode.
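To make the evidential step concrete, the sketch below shows the standard subjective-logic formulation commonly used by evidential neural networks: non-negative evidence is obtained from raw logits, mapped to Dirichlet concentration parameters, and converted into per-class belief masses plus a single uncertainty mass. This is a minimal illustration of that standard formulation, not the authors' implementation (their code is at the repository linked above); the function name and the softplus evidence mapping are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def evidential_opinion(logits: torch.Tensor):
    """Map raw network outputs to a subjective-logic opinion.

    Standard Dirichlet-based evidential formulation (illustrative,
    not the paper's exact code): evidence e = softplus(logits) >= 0,
    Dirichlet parameters alpha = e + 1, belief b_k = e_k / S, and
    uncertainty u = K / S, where S = sum_k alpha_k and K is the
    number of classes.
    """
    evidence = F.softplus(logits)                 # non-negative support per class
    alpha = evidence + 1.0                        # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)    # Dirichlet strength S
    belief = evidence / strength                  # per-class belief masses
    uncertainty = logits.size(-1) / strength      # u = K / S
    return evidence, belief, uncertainty

# One such opinion is formed per view; low total evidence yields
# high uncertainty, and beliefs plus uncertainty always sum to 1.
logits = torch.randn(4, 10)                       # batch of 4, 10 classes
e, b, u = evidential_opinion(logits)
assert torch.allclose(b.sum(-1) + u.squeeze(-1), torch.ones(4))
```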