TL;DR: We develop a copula variational inference framework for cross-modal alignment
Abstract: Real-world applications commonly involve multiple data modalities (e.g., EHRs, medical images, and clinical notes in healthcare), so multimodal learning methods are needed to aggregate information across modalities. The main challenge is appropriately aligning and fusing the representations of different modalities into a joint distribution. Existing methods mainly rely on concatenation or the Kronecker product, which oversimplify the interaction structure between modalities and leave more complex interactions unmodelled. Moreover, the joint distribution of latent representations with higher-order interactions remains underexplored. A copula is a powerful statistical construct for modelling interactions between variables, as it bridges the joint distribution and the marginal distributions of multiple variables. In this paper, we propose a novel copula-driven multimodal learning framework that learns the joint distribution of the modalities to capture the complex interactions among them. The key idea is to interpret the copula model as a tool for efficiently aligning the marginal distributions of the modalities. By assuming a Gaussian mixture distribution for each modality and a copula model on the joint distribution, our model can also generate accurate representations for missing modalities. Extensive experiments on public MIMIC datasets demonstrate the superior performance of our model over competitors. The code is anonymously available at https://github.com/HKU-MedAI/CMCM.
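The copula idea in the abstract, that a joint distribution can be assembled from arbitrary per-modality marginals plus a separate dependence structure, can be illustrated with a minimal Gaussian-copula sampling sketch. This is an illustrative assumption, not the paper's actual model: CM^2 uses Gaussian mixture marginals and a learned variational copula, whereas here the correlation `rho` and the two plain Gaussian marginals are stand-ins chosen for brevity.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sketch: a bivariate Gaussian copula linking two marginals.
# rho and the marginal parameters are illustrative, not the paper's values.
rng = np.random.default_rng(0)
rho = 0.8
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Sample correlated Gaussians, then map each coordinate to Uniform(0, 1)
#    via the standard normal CDF; the dependence structure is preserved.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = norm.cdf(z)

# 2. Push the uniforms through any inverse marginal CDFs (here two
#    different Gaussians, standing in for per-modality latent marginals).
x1 = norm.ppf(u[:, 0], loc=0.0, scale=1.0)
x2 = norm.ppf(u[:, 1], loc=3.0, scale=2.0)

# The joint sample (x1, x2) has the chosen marginals but inherits the
# copula's dependence; its empirical correlation is close to rho.
print(np.corrcoef(x1, x2)[0, 1])
```

The same two-step decomposition (uniformize via marginal CDFs, couple via a copula) is what lets copula models align heterogeneous modalities without forcing a shared marginal form.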
Lay Summary: Many healthcare records contain information in different formats, such as medical images, time-series signals, and clinical notes. Combining these various types of data can help doctors better understand a patient’s condition and make more accurate predictions. However, it's not easy to merge information from different sources because each type of data has its own unique structure and meaning. Existing methods often use simple ways to combine data, which may ignore important interactions between them.
In this study, we introduce a new method called $\textbf{CM}^2$ (Cross-Modal alignment via variational Copula Modelling). This method uses a statistical approach known as a $\textit{copula}$ to better understand how different types of data relate to each other. By doing so, it builds a more accurate and flexible combined data representation. Even when some types of data are missing (which is common in real hospitals), $\textbf{CM}^2$ can still generate reliable predictions using the available information.
We tested $\textbf{CM}^2$ using real-world hospital datasets, including electronic health records and chest X-ray images. Our results showed that $\textbf{CM}^2$ outperformed other methods in predicting important outcomes such as whether a patient might pass away during their hospital stay or return shortly after discharge. This suggests that $\textbf{CM}^2$ could help build smarter healthcare systems that work well even with incomplete data.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/HKU-MedAI/CMCM
Primary Area: Applications->Health / Medicine
Keywords: Copula, Multimodal learning, Missing modality, Healthcare
Submission Number: 5863