Keywords: Mammography, Generative Adversarial Network, DenseNet, Detection, Segmentation, Classification
Abstract: Current deep learning based detection models tackle detection and segmentation by casting them as pixel- or patch-wise classification. To automate initial mass-lesion detection and segmentation on whole mammographic images, and to avoid the computational redundancy of patch-based and sliding-window approaches, this study used a conditional generative adversarial network (cGAN). The detected regions were then fed to a trained densely connected network (DenseNet) to predict the binary classification of benign versus malignant. We trained the pipeline on a combination of publicly available mammographic data repositories and evaluated its robustness on our clinically collected repository, which was unseen by the pipeline during training.
Code Of Conduct: I have read and accept the code of conduct.
Remove If Rejected: Remove submission from public view if paper is rejected.