A novel modality contribution confidence-enhanced multimodal deep learning framework for multiomics data
Abstract: Multimodal learning for classification tasks has recently gained significant attention in bioinformatics. Current approaches primarily concentrate on devising efficient deep learning architectures to capture features within and across modalities. However, they typically assume that each modality contributes equally to the classification objective, overlooking inherent biases within multimodal learning. This paper presents a modality contribution confidence-enhanced deep learning framework to address this issue, yielding an improved fusion space and better classification performance on multiomics data. Specifically, we propose utilising a non-parametric Gaussian process to assess the unimodal confidence of each modality and learn within-modality features. Additionally, we introduce the use of the Kullback-Leibler divergence to align multiple modalities and learn cross-modality features. Extensive experiments on four multiomics datasets, incorporating modalities such as static information, DNA, mRNA, miRNA, and protein data, validate the effectiveness of the proposed method. Furthermore, a case study on the blister recovery task is included to demonstrate the practical utility of our model.
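To make the two ingredients of the abstract concrete, the sketch below illustrates (not the authors' implementation) how a non-parametric Gaussian process can score each modality's per-sample confidence, and how Kullback-Leibler divergence can measure cross-modality agreement. It assumes scikit-learn's GaussianProcessClassifier as a stand-in for the paper's confidence estimator; the data, modality names, and fusion rule are all synthetic illustrations.

```python
# Minimal sketch of the abstract's two ideas, under stated assumptions:
# (1) a non-parametric Gaussian process scores each modality's per-sample
#     confidence, and (2) KL divergence quantifies cross-modality alignment.
import numpy as np
from scipy.special import rel_entr
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
n, d = 120, 10
y = rng.integers(0, 2, size=n)                       # binary labels
modalities = {                                       # toy stand-ins for omics views
    "mRNA":    rng.normal(size=(n, d)) + y[:, None],
    "miRNA":   rng.normal(size=(n, d)) + 0.5 * y[:, None],
    "protein": rng.normal(size=(n, d)),              # weakly informative view
}

probs = {}
for name, X in modalities.items():
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X, y)
    probs[name] = gp.predict_proba(X)                # per-sample class distribution

# Unimodal confidence: the GP's maximum posterior class probability per sample.
conf = {name: p.max(axis=1) for name, p in probs.items()}

# Confidence-weighted late fusion: modalities with higher GP confidence
# contribute more to the fused class distribution.
w = np.stack([conf[m] for m in modalities], axis=1)
w = w / w.sum(axis=1, keepdims=True)
fused = sum(w[:, i:i + 1] * probs[m] for i, m in enumerate(modalities))

# Cross-modality alignment: mean KL divergence between each modality's
# predictive distribution and the fused one (smaller = better aligned).
for name, p in probs.items():
    kl = rel_entr(p, fused).sum(axis=1).mean()
    print(f"{name:8s} mean KL to fused: {kl:.4f}  "
          f"mean confidence: {conf[name].mean():.3f}")
```

In the paper's framework these quantities would enter a deep learning training objective (the KL term as an alignment loss, the confidence as a fusion weight); here they are computed post hoc purely to show the mechanics.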
DOI: 10.1186/s12859-025-06219-9