Abstract: Graph Convolutional Networks (GCNs) have demonstrated significant success in population-based disease prediction. With the rise of multimodal technologies, multimodal GCNs integrate information from diverse data types, improving prediction accuracy, particularly when fusing imaging and non-imaging data. However, constructing a reliable population graph from limited multimodal data can lead to poor generalization. To address this issue, we introduce graph contrastive learning as a multimodal data augmentation strategy, which strengthens the robustness of the graph structure to perturbations. We propose an Adaptive Composing Augmentation framework that first employs a learnable similarity network to iteratively compute node confidence, and then selectively perturbs the less important edges in the graph through operations such as edge removal and edge weight permutation. Extensive experiments on three challenging medical datasets demonstrate that our method achieves state-of-the-art performance, including an accuracy (ACC) of 87.95% and an area under the curve (AUC) of 90.05% on the ABIDE dataset. These results outperform the baseline models by 7.12% and 5.07%, and surpass existing methods by 6.2% and 4.83%, respectively, confirming that contrastive learning with structured augmentations effectively enhances the generalization ability of multimodal GCNs. The code is available at https://github.com/drafly/ACA-GCN.
External IDs: dblp:journals/cee/HanWZLLH25
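The confidence-guided edge perturbation described in the abstract can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation (see the linked repository for that): the function name `perturb_low_importance_edges`, its parameters, and the endpoint-mean heuristic for scoring edge importance are all assumptions made for illustration.

```python
import torch

def perturb_low_importance_edges(adj, confidence, drop_ratio=0.2, permute_ratio=0.2):
    """Hypothetical sketch of confidence-guided graph augmentation.

    adj:        (N, N) dense weighted adjacency matrix of the population graph.
    confidence: (N,) per-node confidence scores, e.g. produced by a learnable
                similarity network; higher means more reliable.

    Edges whose endpoints have the lowest confidence are treated as less
    important and become candidates for removal or weight permutation.
    Symmetry of the adjacency matrix is not enforced here, for brevity.
    """
    adj = adj.clone()
    # Assumption: score each edge by the mean confidence of its two endpoints.
    edge_conf = 0.5 * (confidence.unsqueeze(0) + confidence.unsqueeze(1))
    src, dst = torch.nonzero(adj, as_tuple=True)
    order = torch.argsort(edge_conf[src, dst])  # ascending: least important first
    n_edges = order.numel()
    n_drop = int(drop_ratio * n_edges)
    n_perm = int(permute_ratio * n_edges)

    # Edge removal: delete the least-confident edges.
    drop_idx = order[:n_drop]
    adj[src[drop_idx], dst[drop_idx]] = 0.0

    # Edge weight permutation: shuffle weights among the next-least-confident edges.
    perm_idx = order[n_drop:n_drop + n_perm]
    weights = adj[src[perm_idx], dst[perm_idx]]
    adj[src[perm_idx], dst[perm_idx]] = weights[torch.randperm(weights.numel())]
    return adj
```

In a contrastive setup, two independently perturbed views of the same population graph would be encoded by the GCN and aligned by a contrastive loss, so that high-confidence structure is preserved while the model learns invariance to disturbances of the less important edges.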