Self-Supervised Transformers for fMRI representation

07 Dec 2021, 09:45 (edited 06 Jul 2022) · MIDL 2022
  • Keywords: fMRI, Transformers, Self-supervision.
  • Abstract: We present TFF, a Transformer framework for the analysis of functional Magnetic Resonance Imaging (fMRI) data. TFF employs a two-phase training approach. First, self-supervised training is applied to a collection of fMRI scans, where the model is trained to reconstruct 3D volume data. Second, the pre-trained model is fine-tuned on specific tasks, using ground-truth labels. Our results show state-of-the-art performance on a variety of fMRI tasks, including age and gender prediction, as well as schizophrenia recognition. Our code for training, the network architecture, and the results is attached as supplementary material.
  • Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
  • Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
  • Paper Type: both
  • Primary Subject Area: Unsupervised Learning and Representation Learning
  • Secondary Subject Area: Learning with Noisy Labels and Limited Data
  • Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
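The two-phase recipe described in the abstract (self-supervised reconstruction pretraining, then supervised fine-tuning on labels) can be illustrated with a toy sketch. This is not the TFF architecture: a linear autoencoder stands in for the Transformer, the data and shapes are hypothetical stand-ins for fMRI volumes, and the label is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 200 "scans", each flattened to 16 features
# with an 8-dimensional latent structure (real fMRI volumes are 4D).
Z_true = rng.normal(size=(200, 8))
A = rng.normal(size=(8, 16))
X = Z_true @ A + 0.1 * rng.normal(size=(200, 16))
y = (Z_true[:, 0] > 0).astype(float)  # toy downstream label

d_latent = 8
W_enc = 0.1 * rng.normal(size=(16, d_latent))
W_dec = 0.1 * rng.normal(size=(d_latent, 16))

def recon_mse():
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

mse_before = recon_mse()

# Phase 1: self-supervised pretraining -- reconstruct the input.
lr = 0.01
for _ in range(500):
    Z = X @ W_enc
    err = Z @ W_dec - X                     # reconstruction residual
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * Z.T @ err / len(X)        # uses pre-update Z, err

mse_after = recon_mse()

# Phase 2: fine-tune a logistic classification head on the encoder
# (the encoder is kept frozen here for simplicity).
w_head = np.zeros(d_latent)
for _ in range(500):
    Z = X @ W_enc
    p = 1.0 / (1.0 + np.exp(-(Z @ w_head)))
    w_head -= 0.1 * Z.T @ (p - y) / len(X)

acc = float(np.mean(((X @ W_enc @ w_head) > 0) == (y > 0.5)))
print(f"recon MSE {mse_before:.3f} -> {mse_after:.3f}, head accuracy {acc:.2f}")
```

The pretraining phase needs no labels, so it can run on a large unlabeled scan collection; only phase 2 touches the (often scarce) ground-truth labels, which is the motivation the abstract gives for the two-phase design.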