Keywords: Self-supervised learning, Implicit Training, Magnetic Resonance Imaging, Bias Field Correction, Image Restoration
TL;DR: Implicit training offers a solution for image restoration tasks such as MRI bias field correction (where the data is inseparable into input and target) by allowing training without known ground truth, resulting in a highly generalized model.
Abstract: In magnetic resonance imaging (MRI), bias fields are difficult to correct since they are inherently unknown. They cause intra-volume intensity inhomogeneities which limit the performance of subsequent automatic medical imaging tasks, e.g., tissue-based segmentation. Since the ground truth is unavailable, training a supervised machine learning solution requires approximating the bias fields, which limits the resulting method. We introduce implicit training, which sidesteps the inherent lack of data and allows the training of machine learning solutions without ground truth. We describe how training a model implicitly for bias field correction allows using non-medical data for training, achieving a highly generalized model. The implicit approach was compared to a more traditional training based on medical data. Both models were compared to an optimized N4ITK method, with evaluations on six datasets. The implicitly trained model improved the homogeneity of all encountered medical data, and it generalized better over a range of anatomies than the model trained traditionally. The model achieves a significant speed-up over an optimized N4ITK method, by a factor of 100, and, after training, it requires no parameters to tune. For tasks such as bias field correction, where ground truth is generally not available but the characteristics of the corruption are known, implicit training promises to be a fruitful alternative for highly generalized solutions.
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: methodological development
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Transfer Learning and Domain Adaptation
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
Code And Data: The models in the paper have been trained on publicly available datasets (BrainWeb and ImageNet), and the implicitly trained model is available at: https://doi.org/10.5281/zenodo.3749526