Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis

10 Dec 2021, 12:19 (modified: 22 Jun 2022, 18:47) · MIDL 2022
Keywords: Vision Transformer, Cortical Analysis, Deep Learning, Neuroimaging, Attention-based Modelling
TL;DR: We adapt the vision transformer architecture to any surface data projected onto a spherical manifold, and demonstrate its potential in the context of cortical analysis.
Abstract: The extension of convolutional neural networks (CNNs) to non-Euclidean geometries has led to multiple frameworks for studying manifolds. Many of these methods show design limitations that result in poor modelling of long-range associations, as the generalisation of convolutions to irregular surfaces is non-trivial. Motivated by the success of attention modelling in computer vision, we translate convolution-free vision transformer approaches to surface data, introducing a domain-agnostic architecture for studying any surface data projected onto a spherical manifold. Here, surface patching is achieved by representing spherical data as a sequence of triangular patches, extracted from a subdivided icosphere. A transformer model encodes the sequence of patches via successive multi-head self-attention layers while preserving the sequence resolution. We validate the performance of the proposed Surface Vision Transformer (SiT) on the task of phenotype regression from cortical surface metrics derived from the Developing Human Connectome Project (dHCP). Experiments show that the SiT generally outperforms surface CNNs, while performing comparably on registered and unregistered data. Analysis of transformer attention maps offers strong potential for characterising subtle cognitive developmental patterns.
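The pipeline the abstract describes (triangular patches from a subdivided icosphere, flattened into a token sequence and processed by multi-head self-attention) can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the patch count, vertices per patch, embedding width, and single attention layer are assumed placeholder values, random weights stand in for trained ones, and residual connections, MLP blocks, and positional embeddings are omitted for brevity.

```python
import numpy as np

def sit_forward(patches, d_model=48, n_heads=4, seed=0):
    """Toy sketch of the SiT pipeline: triangular surface patches ->
    token embeddings -> one multi-head self-attention layer ->
    scalar regression head. Weights are random stand-ins."""
    rng = np.random.default_rng(seed)
    n, v, c = patches.shape                 # patches, vertices/patch, channels
    x = patches.reshape(n, v * c)           # flatten each triangular patch
    W_embed = rng.normal(0, 0.02, (v * c, d_model))
    tokens = x @ W_embed                    # (n, d_model) patch tokens
    cls = rng.normal(0, 0.02, (1, d_model))
    seq = np.vstack([cls, tokens])          # prepend a class token
    # Multi-head self-attention over the full sequence; because every
    # token attends to every other, long-range surface associations are
    # modelled directly, unlike local surface convolutions.
    d_h = d_model // n_heads
    Wq, Wk, Wv = (rng.normal(0, 0.02, (d_model, d_model)) for _ in range(3))
    Q, K, V = seq @ Wq, seq @ Wk, seq @ Wv
    heads = []
    for h in range(n_heads):
        q = Q[:, h * d_h:(h + 1) * d_h]
        k = K[:, h * d_h:(h + 1) * d_h]
        vh = V[:, h * d_h:(h + 1) * d_h]
        a = q @ k.T / np.sqrt(d_h)          # scaled dot-product scores
        a = np.exp(a - a.max(axis=-1, keepdims=True))
        a /= a.sum(axis=-1, keepdims=True)  # softmax attention weights
        heads.append(a @ vh)
    out = np.concatenate(heads, axis=-1)    # (n + 1, d_model): resolution kept
    W_head = rng.normal(0, 0.02, (d_model, 1))
    return float(out[0] @ W_head)           # phenotype predicted from class token
```

For example, spherical metric data grouped into 320 triangular patches of 153 vertices with 4 channels per vertex (a hypothetical shape) would be passed as an array of shape `(320, 153, 4)`; note the sequence length is preserved through attention, matching the abstract's "preserving the sequence resolution".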
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: methodological development
Primary Subject Area: Detection and Diagnosis
Secondary Subject Area: Interpretability and Explainable AI
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
Code And Data: The code is available at https://github.com/metrics-lab/surface-vision-transformers. Data is available at http://www.developingconnectome.org.