Uncovering Time-Invariant Latent Representation for Brain Disorder Diagnosis via Self-Supervised Learning

20 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: self-supervised learning, brain network, brain disorder, diagnosis
Abstract: Recently, large-scale deep learning models and datasets have reshaped medical image analysis by yielding robust and generalizable representations. In this context, self-supervised learning has emerged as a valuable tool: it advances deep learning without the need for costly annotations while facilitating downstream tasks with limited sample sizes. However, this paradigm has been little investigated in brain network analysis, and most existing self-supervised learning approaches yield performance merely comparable to that achieved without self-supervision. In this study, we introduce an efficient self-supervised representation learning approach, Bootstrap Time-Invariant Latent (BTIL), which captures time-invariant representations of brain networks derived from resting-state fMRI for the diagnosis of brain disorders. We randomly drop timepoints from the functional signals and derive two augmented pseudo-functional connectivity (pFC) matrices as a positive pair. BTIL consists of an online network and a target network, each encoding one augmented pFC; the time-invariant representations are obtained by pulling the latent embeddings of the two networks closer. Additionally, we employ Mask-ROI Modeling (MRM) with both classification and reconstruction heads to capture intra-network dependencies and enhance regional specificity. Linear evaluations on three downstream classification tasks demonstrate the superiority of BTIL for brain disorder diagnosis, with improvements of more than 2\% over state-of-the-art methods.
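The two core ideas named in the abstract, timepoint-dropping augmentation to form a pFC positive pair and a BYOL-style online/target alignment, can be sketched roughly as follows. Since the submission provides no public code, this is a minimal hypothetical reconstruction in PyTorch: all names (`augment_pfc`, `byol_loss`), the drop ratio, the ROI/timepoint counts, and the linear stand-in encoder are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of BTIL's augmentation and alignment loss (assumptions
# throughout; the paper's actual architecture and hyperparameters are unknown).
import torch
import torch.nn.functional as F

def augment_pfc(signals: torch.Tensor, drop_ratio: float = 0.2) -> torch.Tensor:
    """Randomly drop timepoints from an (n_roi, n_t) signal matrix and return
    the Pearson-correlation pseudo-functional connectivity (pFC) matrix."""
    n_roi, n_t = signals.shape
    keep = torch.randperm(n_t)[: int(n_t * (1 - drop_ratio))]
    return torch.corrcoef(signals[:, keep.sort().values])  # (n_roi, n_roi)

def byol_loss(online_pred: torch.Tensor, target_proj: torch.Tensor) -> torch.Tensor:
    """BYOL-style alignment: cosine distance between the online prediction and
    the stop-gradient target projection, pulling the two embeddings closer."""
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

# Toy example: one subject with 116 ROIs (e.g., AAL atlas) and 200 timepoints.
signals = torch.randn(116, 200)
view_a, view_b = augment_pfc(signals), augment_pfc(signals)  # positive pair

# Linear stand-in for the encoder; in BYOL-style training the target branch
# would be a momentum (EMA) copy of the online branch rather than shared.
encoder = torch.nn.Linear(116 * 116, 64)
loss = byol_loss(encoder(view_a.flatten()), encoder(view_b.flatten()))
```

In this reading, each random drop of timepoints yields a slightly different correlation matrix from the same scan, so minimizing the alignment loss encourages embeddings that are invariant to which timepoints were observed.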
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2169