PaRaChute: Pathology-Radiology Cross-Modal Fusion for Missing-Modality-Robust Survival Prediction

Published: 04 Mar 2026, Last Modified: 25 Mar 2026 · IEEE/CVF Winter Conference on Applications of Computer Vision
Abstract: Survival prediction from medical imaging is a critical challenge in computational oncology, with high clinical relevance for patient stratification and treatment planning. However, current deep learning methods suffer from three core limitations: they assume complete modality availability, overlook local-to-global cross-modal interactions, and disregard modality-specific signal reliability during optimization. To address these issues, we introduce PaRaChute, a deep learning framework for robust multimodal survival prediction from heterogeneous and partially missing imaging data. PaRaChute integrates modality-specific pretrained encoders with adapter networks that align radiology and histopathology features into a shared latent space. A Dynamic Contextual Embedding mechanism captures biologically grounded local correlations between pathology and radiology and channels them through a multi-head cross-attention fusion module to guide global survival prediction, while adaptively handling missing-modality scenarios. Furthermore, a Gradient Curvature Steering module improves convergence in incomplete-data regimes by regularizing gradients via local curvature alignment. Experiments on three CPTAC- and TCGA-derived cancer cohorts show that PaRaChute achieves a C-index of 0.8367 with full modality input, and it retains strong performance under missing-modality conditions (0.7488) while producing clinically meaningful risk stratifications, as confirmed by Kaplan-Meier analysis.
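The abstract's fusion pipeline (modality-specific adapters into a shared latent space, multi-head cross-attention between pathology and radiology tokens, and a fallback path when a modality is absent) can be illustrated with a minimal PyTorch sketch. All dimensions, module names, and the fallback strategy here are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Hypothetical sketch of PaRaChute-style fusion: adapter networks align
    pathology and radiology features into a shared latent space, multi-head
    cross-attention fuses them, and a mean-pooled head emits a risk score.
    When one modality is missing, the model falls back to the available one.
    """
    def __init__(self, path_dim=768, rad_dim=512, shared_dim=256, heads=4):
        super().__init__()
        self.path_adapter = nn.Linear(path_dim, shared_dim)   # pathology -> shared space
        self.rad_adapter = nn.Linear(rad_dim, shared_dim)     # radiology -> shared space
        self.cross_attn = nn.MultiheadAttention(shared_dim, heads, batch_first=True)
        self.risk_head = nn.Linear(shared_dim, 1)             # scalar survival risk

    def forward(self, path_tokens=None, rad_tokens=None):
        # Align whichever modalities are available into the shared space.
        p = self.path_adapter(path_tokens) if path_tokens is not None else None
        r = self.rad_adapter(rad_tokens) if rad_tokens is not None else None
        if p is not None and r is not None:
            # Pathology tokens query radiology keys/values (local-to-global fusion).
            fused, _ = self.cross_attn(p, r, r)
        else:
            # Missing-modality fallback: use the single available stream.
            fused = p if p is not None else r
        return self.risk_head(fused.mean(dim=1))  # pool tokens per patient

model = CrossModalFusion()
path = torch.randn(2, 16, 768)       # 2 patients, 16 pathology patch tokens
rad = torch.randn(2, 8, 512)         # 2 patients, 8 radiology tokens
full = model(path, rad)              # both modalities present
partial = model(path_tokens=path)    # radiology missing
print(full.shape, partial.shape)     # torch.Size([2, 1]) torch.Size([2, 1])
```

The same forward pass serves both the complete- and missing-modality regimes, which is the property the reported C-indices (0.8367 full vs. 0.7488 missing) are measuring.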