Abstract: Multi-modality classification has flourished in recent years. Traditional methods mainly focus on advancing deep neural networks (DNNs) to achieve high performance. However, these methods remain opaque due to the complexity and ambiguity of DNNs, which undermines trust. This problem is magnified in sensitive areas such as biomedical computing. Hence, we propose a novel dual trustworthy mechanism for multi-modality classification (DTMC), which makes both the process and the results of a DNN more credible and interpretable while also improving performance. Specifically, a confidence attention mechanism operates from local and global views to improve the confidence of the process by evaluating attention scores and identifying abnormal information. A confidence probability mechanism, likewise applied from local and global perspectives, is used at the prediction stage to enhance the confidence of the results. Extensive experiments on multi-modality medical classification datasets show that the proposed method outperforms state-of-the-art (SOTA) methods while providing interpretability. Our resources are available at https://github.com/ghh1125/data.
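The abstract does not specify how attention-level and prediction-level confidence are computed; the sketch below is only an illustrative assumption of the general idea, using entropy of the attention distribution as a confidence proxy (peaked attention is treated as confident, diffuse attention as abnormal) and the maximum class probability as the prediction-stage confidence. None of these choices are taken from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def confidence_weighted_attention(scores):
    """Hypothetical confidence attention: downweight attention rows whose
    score distribution is abnormally diffuse (high entropy)."""
    attn = softmax(scores, axis=-1)
    entropy = -(attn * np.log(attn + 1e-12)).sum(axis=-1)
    max_entropy = np.log(attn.shape[-1])
    confidence = 1.0 - entropy / max_entropy  # 1 = peaked, 0 = uniform
    return attn * confidence[..., None], confidence

def prediction_confidence(logits):
    """Hypothetical result-stage confidence: max class probability."""
    probs = softmax(logits, axis=-1)
    return probs.argmax(axis=-1), probs.max(axis=-1)
```

For example, a sharply peaked attention row receives a confidence near 1, while a near-uniform row is scaled toward 0, so downstream layers rely less on attention patterns that carry little discriminative signal.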