MetaTeacher: Coordinating Multi-Model Domain Adaptation for Medical Image Classification

Published: 31 Oct 2022, Last Modified: 15 Dec 2022, NeurIPS 2022 Accept, Readers: Everyone
Keywords: Meta learning, Medical image classification, Domain adaptation
Abstract: In medical image analysis, we often need to build an image recognition system for a target scenario with access to only a small amount of labeled data, abundant unlabeled data, and multiple related models pretrained on different source scenarios. This presents the combined challenges of multi-source-free domain adaptation and semi-supervised learning. However, the two problems are typically studied independently in the literature, and effectively combining existing methods is a non-trivial design problem. In this work, we introduce a novel MetaTeacher framework with three key components: (1) a learnable coordinating scheme for adaptive domain adaptation of individual source models, (2) a mutual feedback mechanism between the target model and the source models for more coherent learning, and (3) a semi-supervised bilevel optimization algorithm that consistently organizes the adaptation of the source models and the learning of the target model. The framework leverages the knowledge of the source models adaptively whilst maximizing their complementary benefits collectively, to counter the challenge of limited supervision. Extensive experiments on five chest x-ray image datasets show that our method clearly outperforms all state-of-the-art alternatives. The code is available at https://github.com/wongzbb/metateacher.
TL;DR: MetaTeacher, a framework based on a multi-teacher, one-student scheme, is proposed to solve the semi-supervised multi-source-free domain adaptation problem for medical image classification.
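As a rough illustration of the multi-teacher, one-student idea described above (not the authors' implementation; all class and function names below are hypothetical), the following PyTorch sketch combines pseudo-labels from several frozen source models via learnable coordinating weights and uses them, together with the small labeled set, to supervise a target model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: weight the predictions of several frozen source
# ("teacher") models with learnable per-teacher coordinating weights, and use
# the resulting pseudo-labels to train a target ("student") model.

class WeightedTeacherEnsemble(nn.Module):
    def __init__(self, teachers):
        super().__init__()
        self.teachers = nn.ModuleList(teachers)   # pretrained source models
        for t in self.teachers:
            t.requires_grad_(False)               # source-free: teachers frozen here
        # one learnable coordinating logit per teacher
        self.coord_logits = nn.Parameter(torch.zeros(len(teachers)))

    def forward(self, x):
        with torch.no_grad():
            # multi-label chest x-ray setting: per-class sigmoid probabilities
            probs = torch.stack([torch.sigmoid(t(x)) for t in self.teachers])  # (T, B, C)
        w = torch.softmax(self.coord_logits, dim=0).view(-1, 1, 1)             # (T, 1, 1)
        return (w * probs).sum(dim=0)                                          # (B, C)

def student_step(student, ensemble, optimizer, x_unlabeled, x_labeled, y_labeled):
    """One update of the target model: pseudo-label loss plus small labeled loss."""
    pseudo = ensemble(x_unlabeled).detach()                  # soft pseudo-labels
    loss_u = F.binary_cross_entropy_with_logits(student(x_unlabeled), pseudo)
    loss_l = F.binary_cross_entropy_with_logits(student(x_labeled), y_labeled)
    loss = loss_u + loss_l
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For brevity, this sketch omits the mutual feedback mechanism and the bilevel optimization that jointly adapt the source models and learn the coordinating weights; see the paper and the linked repository for the actual method.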
Supplementary Material: pdf
