Partly Supervised Multi-Task Learning

ICMLA 2020
Abstract: Semi-supervised learning has recently been attracting attention as an alternative to fully supervised models that require large pools of labeled data. Moreover, optimizing a model for multiple tasks can provide better generalizability than single-task learning. Leveraging self-supervision and adversarial training, we propose a novel, general-purpose semi-supervised, multi-task model, namely self-supervised, semi-supervised, multi-task learning (S⁴MTL), for accomplishing two important medical image analysis tasks: segmentation and diagnostic classification. Experimental results on chest and spine X-ray datasets confirm that our S⁴MTL model significantly outperforms semi-supervised single-task, semi/fully-supervised multi-task, and fully-supervised single-task models, even with a 50% reduction in class and segmentation labels.
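
The abstract describes a single model optimized jointly for segmentation and diagnostic classification. Below is a minimal sketch of that shared-encoder, two-head multi-task pattern; the encoder layers, channel widths, and head designs are illustrative assumptions only, not the paper's actual S⁴MTL architecture, and the self-supervision and adversarial-training components are omitted.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hypothetical shared-encoder multi-task model: one encoder feeds
    a per-pixel segmentation head and an image-level classification head.
    Layer choices are illustrative, not taken from the paper."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Shared feature encoder over single-channel (X-ray-like) inputs
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Segmentation head: per-pixel mask logits
        self.seg_head = nn.Conv2d(64, 1, kernel_size=1)
        # Classification head: image-level diagnosis logits
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

# Usage: both task outputs come from one forward pass over shared features.
x = torch.randn(2, 1, 128, 128)  # batch of single-channel images
seg_logits, cls_logits = MultiTaskNet()(x)
```

In a semi-supervised setting, the two heads would typically be trained with a weighted sum of task losses, applying each supervised loss only to the examples that carry the corresponding label.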