Integration of Patch Features Through Self-supervised Learning and Transformer for Survival Analysis on Whole Slide Images
Abstract: Survival prediction from whole slide images (WSIs) can guide better disease treatment and patient care. Previous methods usually extract and process only image features from patches of WSIs, ignoring both the spatial information of patches and the correlations among them. Furthermore, those methods extract patch features with models pre-trained on ImageNet, overlooking the large gap between WSIs and natural images. We therefore propose a new method, called SeTranSurv, for survival prediction. SeTranSurv extracts patch features from WSIs through self-supervised learning and adaptively aggregates these features according to their spatial information and inter-patch correlations using a Transformer. Experiments on three large cancer datasets demonstrate the effectiveness of our model. More importantly, SeTranSurv offers better interpretability in locating the important patterns and features that contribute to accurate cancer survival prediction.
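The pipeline the abstract describes — pre-extracted patch features aggregated by a Transformer that is aware of each patch's spatial position — might be sketched as follows. This is a minimal PyTorch sketch: the module names, dimensions, grid-based positional-embedding scheme, and mean-pooling risk head are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class TransformerAggregator(nn.Module):
    """Hypothetical sketch: aggregate patch features from a slide with a
    Transformer encoder, injecting spatial information via learned
    embeddings of each patch's (row, col) grid coordinates."""

    def __init__(self, feat_dim=256, grid_size=64, n_heads=4, n_layers=2):
        super().__init__()
        # Separate learned embeddings for row and column positions
        # (assumed scheme; the paper may encode coordinates differently).
        self.row_emb = nn.Embedding(grid_size, feat_dim)
        self.col_emb = nn.Embedding(grid_size, feat_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Scalar risk score per slide for survival analysis.
        self.risk_head = nn.Linear(feat_dim, 1)

    def forward(self, feats, coords):
        # feats:  (B, N, feat_dim) patch features, e.g. from a
        #         self-supervised encoder run over WSI patches.
        # coords: (B, N, 2) integer (row, col) grid positions of patches.
        x = feats + self.row_emb(coords[..., 0]) + self.col_emb(coords[..., 1])
        # Self-attention lets every patch attend to every other patch,
        # modeling inter-patch correlations.
        x = self.encoder(x)
        # Mean-pool patch tokens into one slide-level representation.
        return self.risk_head(x.mean(dim=1)).squeeze(-1)


# Usage: two slides, 100 patches each, 256-dim features.
model = TransformerAggregator()
feats = torch.randn(2, 100, 256)
coords = torch.randint(0, 64, (2, 100, 2))
risk = model(feats, coords)  # one risk score per slide, shape (2,)
```

In this sketch the slide-level risk score would be trained with a survival objective such as the Cox partial-likelihood loss; the pooling and loss choices here are assumptions for illustration.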