Abstract: Cervical cancer is one of the fastest-growing and most dangerous cancers, seriously threatening women's health and lives. Cervical cytopathology image classification is an important approach for diagnosing cervical cancer, and automatic computer-aided diagnosis systems can support this task. However, cervical cell images of different classes exhibit similar appearances, posing a challenge for accurate classification. To address this challenge, this work proposes a framework named MSCCNet. In MSCCNet, a cross-layer attention-based feature fusion module obtains multi-scale discriminative features, while a spatial relationship modeling module encodes the relative relationships between objects and captures subtle differences between cervical cells, further strengthening the representation ability of the features. We also introduce a joint loss to increase the penalty for misclassified samples. Model training and evaluation are performed on our developed DSCC dataset and the publicly available SIPaKMeD dataset. The proposed MSCCNet achieves overall accuracies of 87.88% and 97.90% on these two datasets, respectively, outperforming several existing classification methods.