Multi-task multi-modal learning for joint diagnosis and prognosis of human cancers

2020 (modified: 29 Sept 2021), Medical Image Anal. 2020
Highlights
• Consider the inherent correlation between the diagnosis and prognosis tasks and propose a novel multi-task multi-modal learning framework for the joint diagnosis and prognosis of human cancers.
• Integrate histopathological image and genomic data for the diagnosis and prognosis of human cancers.
• Conduct experiments on three cancer cohorts from the TCGA database to validate the effectiveness of the proposed method.
• Provide an in-depth explanation of the selected multi-modal biomarkers.

Abstract
With the tremendous development of artificial intelligence, many machine learning algorithms have been applied to the diagnosis of human cancers. Recently, rather than predicting categorical variables (e.g., stages and subtypes) as in cancer diagnosis, several prognosis prediction models based on patients' survival information have been adopted to estimate the clinical outcomes of cancer patients. However, most existing studies treat the diagnosis and prognosis tasks separately. In fact, diagnosis information (e.g., TNM stage) indicates the severity of the disease, which is highly correlated with patient survival. While diagnosis is largely made on the basis of histopathological images, recent studies have also demonstrated that integrative analysis of histopathological images and genomic data holds great promise for improving the diagnosis and prognosis of cancers. However, directly combining these two types of data may introduce redundant features that negatively affect prediction performance. It is therefore necessary to select informative features from the derived multi-modal data. Based on these considerations, we propose a multi-task multi-modal feature selection method for the joint diagnosis and prognosis of cancers.
Specifically, we make use of a task relationship learning framework to automatically discover the relationship between the diagnosis and prognosis tasks, through which we can identify image and genomic features that are important for both tasks. In addition, we add a regularization term to ensure that the correlation within the multi-modal data is captured. We evaluate our method on three cancer datasets from The Cancer Genome Atlas project, and the experimental results verify that our method achieves better performance on both the diagnosis and prognosis tasks than related methods.

Graphical abstract
The framework of our study consists of three steps. First, extract imaging and eigengene features from the histopathological image and gene expression data, respectively. Second, apply the proposed multi-task multi-modal feature selection algorithm (i.e., M2DP) to identify diagnosis- and prognosis-related features. Third, based on the selected features of each patient, apply AdaBoost and Cox proportional hazards models for the diagnosis and prognosis prediction of cancer patients, respectively.
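The joint feature-selection idea described above can be sketched with a generic row-sparse (ℓ2,1-regularized) multi-task formulation solved by proximal gradient descent. The abstract does not give the exact M2DP objective, so everything below (dimensions, the penalty weight, the synthetic data, and variable names) is an illustrative assumption, not the authors' implementation; the key mechanism it shows is that the penalty keeps or drops each feature jointly for both tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for concatenated multi-modal features
# (dimensions are illustrative, not the paper's).
n, d_img, d_gen = 100, 80, 120          # patients, imaging dims, eigengene dims
d = d_img + d_gen
X = rng.standard_normal((n, d))

# Two tasks sharing a common sparse support (stand-ins for a
# diagnosis score and a prognosis risk score).
support = rng.choice(d, size=10, replace=False)
W_true = np.zeros((d, 2))
W_true[support] = rng.standard_normal((10, 2))
Y = X @ W_true + 0.1 * rng.standard_normal((n, 2))

def prox_l21(W, t):
    """Row-wise soft-thresholding: shrinks whole feature rows toward
    zero, so a feature is kept or dropped jointly for both tasks."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))

# Minimize ||XW - Y||_F^2 / (2n) + lam * sum_j ||W_j||_2
lam = 0.05
L = np.linalg.norm(X, 2) ** 2 / n       # Lipschitz constant of the gradient
W = np.zeros((d, 2))
for _ in range(300):
    grad = X.T @ (X @ W - Y) / n
    W = prox_l21(W - grad / L, lam / L)

selected = np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-6)
print(f"kept {selected.size} of {d} features")
```

The ℓ2,1 penalty is what couples the tasks: a feature's two coefficients (one per task) live in the same row of W and are thresholded together, so the selected subset is shared by diagnosis and prognosis.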
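The third pipeline step, prognosis prediction with a Cox proportional hazards model on the selected features, can be sketched as follows. This is a minimal sketch on synthetic data: the gradient-ascent fit of the log partial likelihood (Breslow form, assuming no tied event times), the concordance-index check, and all names and dimensions are assumptions for illustration, not the authors' code; the companion AdaBoost diagnosis model is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "selected features" and survival data (illustrative only).
n, p = 150, 2
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, -1.0])
rate = np.exp(X @ beta_true)            # higher risk -> shorter survival
t_event = rng.exponential(1.0 / rate)
t_cens = rng.exponential(2.0, size=n)   # random censoring times
time = np.minimum(t_event, t_cens)
event = t_event <= t_cens               # True if death observed

def log_pl_grad(beta, X, time, event):
    """Gradient of the Cox log partial likelihood (no ties)."""
    order = np.argsort(-time)           # descending time: risk set = prefix
    Xs, es = X[order], event[order]
    eta = Xs @ beta
    w = np.exp(eta - eta.max())         # stabilized hazard weights
    cum_w = np.cumsum(w)                # running risk-set sums
    cum_wx = np.cumsum(w[:, None] * Xs, axis=0)
    return (es[:, None] * (Xs - cum_wx / cum_w[:, None])).sum(axis=0)

# Gradient ascent on the (concave) log partial likelihood.
beta = np.zeros(p)
for _ in range(500):
    beta += log_pl_grad(beta, X, time, event) / n

def c_index(risk, time, event):
    """Concordance: among comparable pairs, the patient with the
    higher predicted risk should have the shorter observed time."""
    num = den = 0
    for i in range(n):
        if not event[i]:
            continue
        comparable = time > time[i]
        den += comparable.sum()
        num += (risk[comparable] < risk[i]).sum()
    return num / den

print(f"training c-index: {c_index(X @ beta, time, event):.3f}")
```

In practice one would reach for an established implementation of both steps (e.g., a gradient-boosting or AdaBoost classifier for diagnosis and a penalized Cox fitter for prognosis); the hand-rolled ascent above just makes the partial-likelihood mechanics explicit.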