Strategies for Meta-Learning with Diverse Tasks

22 Apr 2022, 23:43 (edited 04 Jun 2022) · MIDL 2022 Short Papers
  • Keywords: Meta-Learning
  • TL;DR: We explore different head-initialisation strategies for a gradient-based meta-learning method in scenarios with a variable number of target labels.
  • Abstract: A major limitation of deep learning for medical applications is the scarcity of labelled data. Meta-learning, which leverages principles learned from previous tasks for new tasks, has the potential to mitigate this data scarcity. However, most meta-learning methods assume idealised settings with homogeneous task definitions. The most widely used family of meta-learning methods, those based on Model-Agnostic Meta-Learning (MAML), requires a constant network architecture and therefore a fixed number of classes per classification task. Here, we extend MAML to more realistic settings in which the number of classes can vary, by adding a new classification layer for each new task. Specifically, we investigate various initialisation strategies for these new layers. We identify a number of such strategies that substantially outperform the naive default (Kaiming) initialisation scheme.
  • Registration: I acknowledge that acceptance of this work at MIDL requires at least one of the authors to register and present the work during the conference.
  • Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
  • Paper Type: novel methodological ideas without extensive validation
  • Primary Subject Area: Meta Learning
  • Secondary Subject Area: Detection and Diagnosis
  • Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
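The core idea of the abstract, attaching a freshly initialised classification head for each new task, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function `init_head`, its strategy names, and the `"zeros"` alternative are assumptions for demonstration; the paper itself compares several (unspecified here) strategies against the Kaiming default.

```python
import numpy as np

def init_head(n_classes, n_features, strategy="kaiming", rng=None):
    """Initialise a new task-specific classification head of shape
    (n_classes, n_features), as a new layer on top of a shared backbone.

    Strategy names are hypothetical examples, not the paper's taxonomy.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    if strategy == "kaiming":
        # Kaiming/He normal init (the "naive default" in the abstract):
        # weights drawn with std = sqrt(2 / fan_in).
        return rng.normal(0.0, np.sqrt(2.0 / n_features),
                          size=(n_classes, n_features))
    if strategy == "zeros":
        # One plausible alternative: start all logits equal, so the new
        # head is initially uninformative and adapted purely by the
        # inner-loop gradient steps.
        return np.zeros((n_classes, n_features))
    raise ValueError(f"unknown strategy: {strategy}")
```

Because each task gets its own head of the right size, the number of classes is free to vary across tasks while the backbone weights remain shared and meta-learned as in standard MAML.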