Learning Riemannian metric for disease progression modeling

21 May 2021, 20:44 (edited 28 Jan 2022) · NeurIPS 2021 Poster
  • Keywords: Riemannian Geometry, RKHS, mixed-effect model, Disease progression modelling, Longitudinal data
  • TL;DR: We propose a method to learn a Riemannian metric in the observation space to estimate disease trajectories from patient data. It enables building interpretable disease progression models with higher predictive power than the state of the art.
  • Abstract: Linear mixed-effect models provide a natural baseline for estimating disease progression using longitudinal data. They provide interpretable models at the cost of modeling assumptions on the progression profiles and their variability across subjects. A significant improvement is to embed the data in a Riemannian manifold and learn patient-specific trajectories distributed around a central geodesic. A few interpretable parameters characterize subject trajectories, at the cost of a prior choice of the metric, which determines the shape of the trajectories. We extend this approach by learning the metric from the data, allowing more flexibility while preserving interpretability. Specifically, we learn the metric as the push-forward of the Euclidean metric by a diffeomorphism. This diffeomorphism is estimated iteratively as the composition of radial basis functions belonging to a reproducing kernel Hilbert space. The metric update allows us to improve the forecasting of imaging and clinical biomarkers in the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort. Our results compare favorably to the 56 methods benchmarked in the TADPOLE challenge.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: zip
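The abstract's central construction, a metric obtained as the push-forward of the Euclidean metric by a radial-basis-function diffeomorphism, can be sketched numerically. This is a minimal illustration, not the authors' implementation: the map `rbf_map` (identity plus small Gaussian bumps), its numerically differentiated Jacobian, and all parameter values below are illustrative assumptions. The push-forward metric at y = f(x) is g_y = (J^{-1})^T J^{-1}, where J is the Jacobian of f at x.

```python
import numpy as np

def rbf_map(x, centers, weights, sigma=1.0):
    # Identity plus a sum of Gaussian RBF bumps:
    # psi(x) = x + sum_k w_k * exp(-|x - c_k|^2 / (2 sigma^2)).
    # Small weights keep psi close to the identity, hence invertible
    # (a diffeomorphism). Illustrative form, not the paper's exact map.
    diffs = x[None, :] - centers                              # (K, d)
    phi = np.exp(-np.sum(diffs**2, axis=1) / (2 * sigma**2))  # (K,)
    return x + weights.T @ phi                                # (d,)

def jacobian(f, x, eps=1e-6):
    # Central finite differences for the Jacobian of f at x.
    d = x.size
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

def pushforward_metric(f, x):
    # Push-forward of the Euclidean metric by f, evaluated at y = f(x):
    # g_y(u, v) = <J^{-1} u, J^{-1} v>, i.e. g_y = (J^{-1})^T J^{-1}.
    J = jacobian(f, x)
    J_inv = np.linalg.inv(J)
    return J_inv.T @ J_inv

rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 2))
weights = 0.05 * rng.normal(size=(3, 2))  # small weights -> invertible map
f = lambda x: rbf_map(x, centers, weights)

x = np.array([0.3, -0.1])
G = pushforward_metric(f, x)
print(G)  # a symmetric positive-definite 2x2 metric tensor
```

The resulting tensor G is symmetric positive-definite by construction, so it defines a valid Riemannian metric wherever the map is invertible; geodesics under G are the images of Euclidean straight lines, which is what makes subject trajectories cheap to parameterize once the metric is learned.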