Keywords: functional brain networks, fMRI, graph neural networks, Transformer, deep learning, graph pooling
Abstract: Functional brain networks (fNETs), typically derived from fMRI time series, have been widely studied to understand demographic differences and neurodegenerative diseases. Recent years have seen increasing adoption of deep learning methods, particularly graph neural networks (GNNs) and Transformers, for analyzing fNETs. Yet the structural characteristics of fNETs remain poorly understood, and it is unclear whether these complex architectures consistently outperform simpler baselines. In this work, we systematically compare GNN- and Transformer-based models against simple baselines on publicly available fNET datasets. We show that strong baselines often match or exceed the performance of GNNs, whereas Transformers demonstrate more consistent gains. Our findings suggest that pooling mechanisms are a potential bottleneck for GNN performance. We argue that careful evaluation against simple baselines is crucial before attributing improvements to architectural sophistication.
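For readers unfamiliar with the setup, the sketch below illustrates, with synthetic data and hypothetical dimensions (it is not the paper's pipeline), how a functional brain network is commonly derived from fMRI time series as a Pearson correlation matrix over regions of interest (ROIs), followed by the kind of mean-pooling readout the abstract flags as a potential GNN bottleneck.

```python
import numpy as np

# Minimal sketch, not the paper's method: a functional brain network (fNET)
# is commonly the Pearson correlation matrix of ROI time series.
rng = np.random.default_rng(0)
n_rois, n_timepoints = 100, 200                    # hypothetical dimensions
ts = rng.standard_normal((n_rois, n_timepoints))   # stand-in for fMRI ROI signals

fnet = np.corrcoef(ts)                             # (n_rois, n_rois) connectivity matrix

# A GNN readout typically pools per-node embeddings into a single graph
# vector, e.g. by mean pooling -- the step the abstract identifies as a
# potential bottleneck. Here the matrix rows serve as toy node features.
node_embeddings = fnet
graph_vector = node_embeddings.mean(axis=0)
print(graph_vector.shape)                          # (100,)
```

Mean pooling is only one common readout; the point of the sketch is that every node's information is collapsed into one vector before prediction, regardless of how expressive the preceding GNN layers are.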
Submission Number: 27