CheXseg: Combining Expert Annotations with DNN-generated Saliency Maps for X-ray Segmentation

Published: 31 Mar 2021, Last Modified: 16 May 2023
Venue: MIDL 2021
Readers: Everyone
Keywords: Semi-Supervised Segmentation, Saliency Maps, Localization Performance
TL;DR: We develop CheXseg, a semi-supervised method for multi-pathology segmentation that leverages both pixel-level expert annotations and saliency maps generated by image classification models.
Abstract: Medical image segmentation models are typically supervised by expert annotations at the pixel level, which can be expensive to acquire. In this work, we propose a method that combines the high quality of pixel-level expert annotations with the scale of coarse DNN-generated saliency maps for training multi-label semantic segmentation models. We demonstrate the application of our semi-supervised method, which we call CheXseg, on multi-label chest X-ray interpretation. We find that CheXseg improves upon the performance (mIoU) of fully-supervised methods that use only pixel-level expert annotations by 9.7% and weakly-supervised methods that use only DNN-generated saliency maps by 73.1%. Our best method is able to match radiologist agreement on three out of ten pathologies and reduces the overall performance gap by 57.2% as compared to weakly-supervised methods.
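The core idea in the abstract — using scarce expert pixel masks where they exist and falling back to classifier-derived saliency maps elsewhere — can be sketched as a label-building step. This is a minimal illustration, not the authors' implementation: the function name, the dictionary-based inputs, and the fixed saliency threshold are all assumptions for the example.

```python
import numpy as np

def build_training_masks(saliency_maps, expert_masks, threshold=0.5):
    """Assemble segmentation targets for semi-supervised training.

    saliency_maps: dict image_id -> float array in [0, 1], e.g. from a
                   chest X-ray classifier's saliency method (hypothetical input).
    expert_masks:  dict image_id -> binary array of pixel-level annotations,
                   available for only a small subset of images.
    Returns a dict image_id -> binary mask: the expert mask when one exists,
    otherwise the saliency map thresholded into a coarse pseudo-mask.
    """
    masks = {}
    for image_id, saliency in saliency_maps.items():
        if image_id in expert_masks:
            # High-quality pixel-level annotation takes precedence.
            masks[image_id] = expert_masks[image_id]
        else:
            # Coarse DNN-generated pseudo-label from the saliency map.
            masks[image_id] = (saliency >= threshold).astype(np.uint8)
    return masks
```

A downstream segmentation model would then be trained on the full set of masks, gaining scale from the pseudo-labels while anchoring quality on the expert subset.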
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: both
Primary Subject Area: Segmentation
Secondary Subject Area: Application: Radiology
Source Code Url: https://github.com/stanfordmlgroup/CheXseg
Data Set Url: https://stanfordmlgroup.github.io/competitions/chexpert/
Source Latex: zip