Efficient hierarchical Bayesian inference for spatio-temporal regression models in neuroimaging

Published: 09 Nov 2021, Last Modified: 05 May 2023, NeurIPS 2021 Poster
Keywords: Multi-task Linear Regression, Hierarchical Bayesian Learning, Type-II Maximum-Likelihood Inference, Nonconvex Optimization, Majorization Minimization, Riemannian Geometry, Toeplitz, Circulant Embedding, Sparse Bayesian Learning, Sparse Models, Sparse Regression, Compressed Sensing, Sparse Denoising, Neuroimaging, EEG/MEG, Brain Source Imaging
TL;DR: We devise efficient hierarchical Bayesian algorithms with convergence guarantees for regression problems with spatio-temporal covariances, using ideas from geodesic convexity (g-convexity) on Riemannian manifolds, circulant embeddings, and majorization-minimization.
Abstract: Several problems in neuroimaging and beyond require inference on the parameters of multi-task sparse hierarchical regression models. Examples include M/EEG inverse problems, neural encoding models for task-based fMRI analyses, and applications in climate science. In these domains, both the model parameters to be inferred and the measurement noise may exhibit a complex spatio-temporal structure. Existing work either neglects the temporal structure or leads to computationally demanding inference schemes. Overcoming these limitations, we devise a novel flexible hierarchical Bayesian framework within which the spatio-temporal dynamics of model parameters and noise are modeled to have Kronecker product covariance structure. Inference in our framework is based on majorization-minimization optimization and has guaranteed convergence properties. Our highly efficient algorithms exploit the intrinsic Riemannian geometry of temporal autocovariance matrices. For stationary dynamics described by Toeplitz matrices, the theory of circulant embeddings is employed. We prove convex bounding properties and derive update rules for the resulting algorithms. On both synthetic and real neural data from M/EEG, we demonstrate that our methods lead to improved performance.
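To make the covariance structure described in the abstract concrete, below is a minimal NumPy/SciPy sketch (illustrative only, not the authors' implementation; see the linked repository for that) of a Kronecker-product spatio-temporal covariance, the vec-trick that avoids forming it explicitly, and the circulant embedding of a stationary Toeplitz temporal covariance whose eigenvalues are obtained via the FFT. The sizes N and T, the AR(1)-style autocovariance, and the random spatial covariance are hypothetical choices made for the example.

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)

N, T = 4, 6  # hypothetical sizes: N spatial sources, T time samples

# Spatial covariance: any symmetric positive-definite matrix.
A = rng.standard_normal((N, N))
Sigma_space = A @ A.T + N * np.eye(N)

# Stationary temporal covariance: Toeplitz, here generated from an
# AR(1)-like autocovariance sequence (purely illustrative choice).
rho = 0.7
autocov = rho ** np.arange(T)
Sigma_time = toeplitz(autocov)

# Kronecker-product spatio-temporal covariance (size NT x NT).
Sigma = np.kron(Sigma_space, Sigma_time)

# Products with Sigma never require forming the NT x NT matrix:
# (Sigma_space kron Sigma_time) vec(X) = vec(Sigma_time @ X @ Sigma_space.T),
# with vec(.) stacking columns (Fortran order) and X of shape (T, N).
X = rng.standard_normal((T, N))
direct = Sigma @ X.reshape(-1, order="F")
vec_trick = (Sigma_time @ X @ Sigma_space.T).reshape(-1, order="F")
assert np.allclose(direct, vec_trick)

# Circulant embedding of the Toeplitz temporal covariance: a circulant
# matrix of size (2T - 2) whose top-left T x T block equals Sigma_time.
c = np.concatenate([autocov, autocov[-2:0:-1]])  # first column of the embedding
C = np.stack([np.roll(c, j) for j in range(len(c))], axis=1)
assert np.allclose(C[:T, :T], Sigma_time)

# Eigenvalues of a circulant matrix are the DFT of its first column,
# enabling FFT-based products, inverses, and log-determinants.
eigvals = np.fft.fft(c).real
assert np.allclose(np.sort(eigvals), np.sort(np.linalg.eigvalsh(C)))

The FFT diagonalization in the last step is what makes computations with large Toeplitz temporal covariances tractable in the stationary setting; the Kronecker structure plays the analogous role across the spatial and temporal dimensions.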
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/AliHashemi-ai/Dugh-NeurIPS-2021