CheXseg: Combining Expert Annotations with DNN-generated Saliency Maps for X-ray Segmentation

Feb 04, 2021 (edited Feb 22, 2021) · MIDL 2021 Conference Submission
  • Keywords: Semi-Supervised Segmentation, Saliency Maps, Localization Performance
  • TL;DR: We develop CheXseg, a semi-supervised method for multi-pathology segmentation that leverages both the pixel-level expert annotations and the saliency maps generated by image classification models.
  • Abstract: Medical image segmentation models are typically supervised by expert annotations at the pixel-level, which can be expensive to acquire. In this work, we propose a method that combines the high quality of pixel-level expert annotations with the scale of coarse DNN-generated saliency maps for training multi-label semantic segmentation models. We demonstrate the application of our semi-supervised method, which we call CheXseg, on multi-label chest X-ray interpretation. We find that CheXseg improves upon the performance (mIoU) of fully-supervised methods that use only pixel-level expert annotations by 9.7% and weakly-supervised methods that use only DNN-generated saliency maps by 73.1%. Our best method is able to match radiologist agreement on three out of ten pathologies and reduces the overall performance gap by 57.2% as compared to weakly-supervised methods.
  • Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
  • Source Code Url: https://github.com/stanfordmlgroup/CheXseg
  • Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
  • Data Set Url: https://stanfordmlgroup.github.io/competitions/chexpert/
  • Paper Type: both
  • Source Latex: zip
  • Primary Subject Area: Segmentation
  • Secondary Subject Area: Application: Radiology
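The labeling scheme described in the abstract — pixel-level expert annotations where available, coarse DNN-generated saliency maps elsewhere — can be sketched as below. This is a hypothetical simplification for illustration, not the authors' exact pipeline: the function names (`pseudo_label`, `training_mask`), the fixed binarization threshold, and the plain IoU metric are all assumptions; CheXseg's actual saliency generation and training details are in the linked repository.

```python
import numpy as np

def pseudo_label(saliency, threshold=0.5):
    """Binarize a DNN-generated saliency map into a coarse pseudo-mask.
    (Hypothetical: the real method may use a tuned or adaptive threshold.)"""
    return (saliency >= threshold).astype(np.uint8)

def training_mask(expert_mask, saliency, threshold=0.5):
    """Semi-supervised label selection (simplified sketch): prefer the
    pixel-level expert annotation when one exists, otherwise fall back
    to a pseudo-label derived from the classifier's saliency map."""
    if expert_mask is not None:
        return expert_mask
    return pseudo_label(saliency, threshold)

def iou(pred, target):
    """Intersection-over-union between two binary masks; mIoU is the
    mean of this score across pathologies."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0
```

A segmentation model trained on `training_mask` outputs thus sees high-quality labels on the small expert-annotated subset and weaker but plentiful saliency-derived labels on the rest, which is the trade-off the abstract's mIoU comparisons quantify.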