Extracting common oscillatory time-courses from multichannel recordings: Oscillation Component Analysis

Published: 01 Jan 2022, Last Modified: 12 May 2023 (IEEECONF 2022)
Abstract: Simultaneously recorded non-invasive multichannel time-series such as the electroencephalogram (EEG) and magnetoencephalogram (MEG) pose a specific challenge due to volume conduction and spatial mixing: analyzing the signal from each sensor independently is highly redundant and often yields sub-optimal results that mischaracterize spatial dependencies. Spatial filtering and blind source separation approaches exist that decompose multichannel EEG/MEG into a small number of dominant source time-courses by pooling information across channels, but they mostly ignore the temporal (i.e., oscillatory) structure of neural data or rely on non-parametric methods that require substantial amounts of data. Here we propose a probabilistic parametric generative model in which an unknown number of hidden oscillation sources undergo linear mixing to produce the multichannel recordings. Under this model we provide a Bayesian inference procedure to extract the oscillation source time-courses and their sensor-level mixing, while identifying the optimal number of oscillation sources via empirical Bayes model selection. Application of this method to simulated and real EEG data demonstrates its capability as an interpretable dimensionality-reduction technique that provides an explicit distribution of the neural oscillation time-courses over the scalp.
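
The forward model sketched in the abstract can be illustrated with a short simulation. The following Python/NumPy snippet is a minimal sketch, not the authors' implementation: each hidden oscillation source is modeled as a damped stochastic rotation in a 2-D state space (a common state-space oscillator parameterization), and the sources are linearly mixed into sensor channels with additive noise. The number of sources K, the frequencies, damping, noise levels, and the mixing matrix are all illustrative assumptions; in the proposed method the mixing and the number of sources are inferred from data rather than fixed.

import numpy as np

rng = np.random.default_rng(0)

fs = 100.0   # sampling rate (Hz), assumed for illustration
T = 1000     # number of time samples
K = 2        # number of hidden oscillation sources (fixed here; inferred in the paper)
N = 8        # number of sensor channels

freqs = np.array([10.0, 6.0])   # oscillation frequencies (Hz), illustrative
damping = 0.98                  # pole radius of each oscillator (< 1 for stability)
sigma_state = 0.1               # state (process) noise standard deviation
sigma_obs = 0.05                # observation noise standard deviation

# Each oscillator evolves as x_t = a * R(2*pi*f/fs) * x_{t-1} + w_t,
# where R(theta) is a 2x2 rotation matrix; the first state coordinate
# serves as the oscillatory source time-course.
x = np.zeros((K, 2))
sources = np.zeros((T, K))
for t in range(T):
    for k in range(K):
        theta = 2 * np.pi * freqs[k] / fs
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        x[k] = damping * R @ x[k] + sigma_state * rng.standard_normal(2)
        sources[t, k] = x[k, 0]

# Linear mixing into sensors (an analogue of volume conduction),
# plus additive observation noise.
M = rng.standard_normal((N, K))   # mixing matrix, unknown in the actual model
y = sources @ M.T + sigma_obs * rng.standard_normal((T, N))
print(y.shape)  # (1000, 8): the simulated multichannel recording

Given such recordings y, the inference task described above is the reverse direction: recover the source time-courses, the mixing matrix (whose columns give each oscillation's scalp distribution), and the number of sources via empirical Bayes model selection.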