LearnAD: Learning Interpretable Rules for Brain Networks in Alzheimer’s Disease Classification

Published: 23 Sept 2025, Last Modified: 06 Dec 2025
Venue: DBM 2025 Findings Poster
License: CC BY 4.0
TL;DR: Learning interpretable rules to explain AD-affected brain networks
Abstract: We introduce LearnAD, a neuro-symbolic method for predicting Alzheimer’s disease from brain magnetic resonance imaging data that learns fully interpretable rules. LearnAD applies statistical models, Decision Trees, Random Forests, or graph neural networks (GNNs) to identify relevant brain connections, and then employs FastLAS to extract global rules. Our best instance outperforms Decision Trees, matches Support Vector Machine accuracy, and performs only slightly below Random Forests and GNNs trained on all features, all while remaining fully interpretable. Ablation studies show that our neuro-symbolic approach improves interpretability while achieving performance comparable to purely statistical models. LearnAD demonstrates how symbolic learning can deepen our understanding of GNN behaviour in clinical neuroscience.
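The first stage of the pipeline described above, selecting relevant brain connections with a statistical model before handing them to a symbolic learner, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the data is synthetic, the correlation-based ranking stands in for whichever statistical model or GNN is used, and the FastLAS rule-extraction step is not shown.

```python
# Hypothetical sketch of LearnAD's first stage: rank brain-connection
# features by their association with the AD/control label, then keep
# the top-k as candidate atoms for symbolic rule learning (the FastLAS
# step that follows in the paper's pipeline is not reproduced here).
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_connections = 200, 50          # toy stand-ins for real MRI data
X = rng.normal(size=(n_subjects, n_connections))
y = (X[:, 3] + X[:, 17] > 0).astype(float)   # synthetic labels driven by two connections

# Point-biserial correlation of each connection with the label,
# used here as a simple stand-in for the paper's statistical models.
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_connections)])
top_k = 5
selected = np.argsort(scores)[::-1][:top_k]
print(sorted(selected.tolist()))             # indices of the most relevant connections
```

In this toy setup, the two signal-carrying connections (indices 3 and 17) dominate the ranking; in LearnAD the surviving connections would then be encoded as background knowledge for FastLAS to learn global, human-readable rules over.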
Length: long paper (up to 8 pages)
Domain: methods
Author List Check: The author list is correctly ordered and I understand that additions and removals will not be allowed after the abstract submission deadline.
Anonymization Check: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and URLs that point to identifying information.
Submission Number: 71