DyNeMoC: A semi-supervised architecture for classifying time series brain data

Published: 01 Mar 2023, Last Modified: 22 Apr 2023
ICLR 2023 TSRL4H Poster
Keywords: Time series, brain modeling, neuroscience, VAE, Transformer, semi-supervised modeling, MEG data
TL;DR: We introduce DyNeMoC -- a semi-supervised Transformer-VAE for modeling temporal brain networks and classifying visual stimuli from MEG time series data.
Abstract: Understanding how different regional networks of the brain are activated, and how those activations change over time, can help identify the onset of various neurodegenerative diseases, assess the efficacy of different treatment regimens for those illnesses, and develop brain-computer interfaces for patients with different types of disabilities. To explain dynamic brain networks, an RNN-VAE model named DyNeMo was recently proposed. This model takes the whole recorded history of brain states into account while modeling their dynamics, and it captures the complexity of larger datasets better than previous approaches. In this paper, we show that the latent representations DyNeMo learns through unsupervised training are not sufficient for downstream classification tasks, and we propose a new semi-supervised model named DyNeMoC that overcomes this shortcoming. The downstream task we study is the classification of visual stimuli from MEG recordings. We show that both proposed variants of DyNeMoC (DyNeMoC-RNN and DyNeMoC-Transformer) yield more useful latent representations for stimulus classification, with the Transformer variant outperforming the RNN one. Learning representations that are directly tied to a downstream task in this way could ultimately improve the monitoring and treatment of certain neurodegenerative diseases and support the development of better brain-computer interfaces.
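
To make the semi-supervised Transformer-VAE idea concrete, below is a minimal PyTorch sketch of how such a model might combine a reconstruction-plus-KL objective with a cross-entropy term applied only to labeled segments. This is not the authors' DyNeMoC implementation: all module names, dimensions, the mean-pooled classification head, and the loss weights (beta, gamma) are illustrative assumptions.

```python
# Hypothetical sketch of a semi-supervised Transformer-VAE; not the DyNeMoC code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedTransformerVAE(nn.Module):
    """Transformer encoder -> latent Gaussian -> decoder, plus a classifier head."""
    def __init__(self, n_channels=64, d_model=128, n_heads=4, n_layers=2,
                 latent_dim=32, n_classes=10):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        self.decoder = nn.Linear(latent_dim, n_channels)    # per-timestep reconstruction
        self.classifier = nn.Linear(latent_dim, n_classes)  # stimulus prediction

    def forward(self, x):
        # x: (batch, time, channels) MEG segment
        h = self.encoder(self.embed(x))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.decoder(z)
        logits = self.classifier(z.mean(dim=1))  # pool latents over time
        return recon, logits, mu, logvar

def semi_supervised_loss(model, x, y, labeled_mask, beta=1.0, gamma=1.0):
    """ELBO on all segments; cross-entropy only where labels exist."""
    recon, logits, mu, logvar = model(x)
    rec = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    ce = F.cross_entropy(logits[labeled_mask], y[labeled_mask]) if labeled_mask.any() else 0.0
    return rec + beta * kl + gamma * ce
```

The key design point this illustrates is that the classification gradient flows back through the same latent variables used for reconstruction, so the learned representations are shaped by the downstream task rather than by the unsupervised objective alone.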