An Interpretable Representation Learning Approach for Diffusion Tensor Imaging

Published: 01 May 2025, Last Modified: 01 May 2025 · MIDL 2025 - Short Papers · CC BY 4.0
Keywords: Diffusion Tensor Imaging, Autoencoders, Representation Learning
TL;DR: We present a 2D input representation and an interpretable representation learning methodology for Diffusion Tensor Imaging. It forms part of a late-fusion multi-modal model for Magnetic Resonance Imaging.
Abstract: Diffusion Tensor Imaging (DTI) tractography offers detailed insights into the structural connectivity of the brain, but it is challenging to represent and interpret effectively in deep learning models. In this work, we propose a novel 2D representation of DTI tractography that encodes tract-level fractional anisotropy (FA) values into a 9$\times$9 grayscale image. This representation is processed through a Beta-Total Correlation Variational Autoencoder ($\beta$-TCVAE) to learn a disentangled and interpretable latent embedding. We evaluate the quality of this embedding using supervised and unsupervised representation learning strategies, including auxiliary classification, triplet loss, and SimCLR-based contrastive learning. Compared to 1D Group deep neural network (DNN) baselines, our approach improves the F1 score on a downstream sex classification task by 15.74\% and shows better disentanglement than a 3D representation.
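For reference, the $\beta$-TCVAE objective underlying the learned embedding follows the standard total-correlation decomposition of Chen et al. (2018); the weights $\alpha$, $\gamma$ and the exact configuration used in this submission are not stated on this page, so the formulation below is the generic one rather than the authors' specific setup:

$$\mathcal{L} = \mathbb{E}_{q(z \mid x)}\!\left[\log p(x \mid z)\right] \;-\; \alpha\, I_q(z; n) \;-\; \beta\, \mathrm{KL}\!\left(q(z) \,\Big\|\, \textstyle\prod_j q(z_j)\right) \;-\; \gamma \sum_j \mathrm{KL}\!\left(q(z_j) \,\|\, p(z_j)\right),$$

where $I_q(z; n)$ is the index-code mutual information between the data index $n$ and the latent code $z$, the $\beta$-weighted total-correlation term penalizes statistical dependence among latent dimensions (encouraging disentanglement), and setting $\alpha = \gamma = 1$ with $\beta > 1$ recovers the usual $\beta$-TCVAE formulation.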
Submission Number: 60
