Interpreting neural decoding models using grouped model reliance

Published: 01 Jan 2020 (PLoS Comput. Biol. 2020). Last Modified: 08 Feb 2024.
Abstract (author summary): Modern machine learning algorithms currently receive considerable attention for their predictive power in neural decoding applications. However, there is a need for methods that make such predictive models interpretable. In the present work, we address the problem of assessing which aspects of the input data a trained model relies upon to make predictions. We demonstrate the use of grouped model reliance as a generally applicable method for interpreting neural decoding models. Illustrating the method on a case study, we employed an experimental design in which a comparatively small number of participants (10) completed a large number of trials (972) over three electroencephalography (EEG) recording sessions of a Sternberg working memory task. Trained decoding models consistently relied on predictor variables from the alpha frequency band, which is in line with existing research on the relationship between neural oscillations and working memory. However, our analyses also indicate large inter-individual variability in how activity patterns relate to working memory load, both in frequency and in topography. We argue that grouped model reliance provides a useful tool to better understand the workings of (sometimes otherwise black-box) decoding models.
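The core idea of grouped model reliance is to jointly permute a whole group of predictor variables (e.g., all features from one frequency band) and measure how much the trained model's error increases relative to its error on the intact data. The sketch below illustrates this on synthetic data; it is a minimal illustration, not the authors' implementation, and the feature counts, group names ("alpha", "beta"), and classifier choice are all assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for EEG-derived features: 6 predictors, where the
# first 3 (the hypothetical "alpha" group) carry the class signal and
# the last 3 (the hypothetical "beta" group) are pure noise.
n = 600
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 6))
X[:, :3] += y[:, None] * 1.5  # only the "alpha" group is informative

groups = {"alpha": [0, 1, 2], "beta": [3, 4, 5]}

model = LogisticRegression().fit(X, y)

def grouped_model_reliance(model, X, y, idx, n_perm=30, rng=rng):
    """Ratio of mean error after jointly permuting the feature group `idx`
    to the error on intact data. Values near 1 mean the model does not
    rely on the group; values well above 1 indicate strong reliance."""
    base_err = max(1.0 - model.score(X, y), 1e-12)
    errs = []
    for _ in range(n_perm):
        Xp = X.copy()
        perm = rng.permutation(len(X))
        Xp[:, idx] = X[perm][:, idx]  # permute all columns of the group together
        errs.append(1.0 - model.score(Xp, y))
    return float(np.mean(errs)) / base_err

reliance = {name: grouped_model_reliance(model, X, y, idx)
            for name, idx in groups.items()}
print(reliance)
```

On this toy data the reliance score for the informative group greatly exceeds that of the noise group, mirroring the paper's finding that alpha-band predictors dominated the trained decoders. Permuting the group's columns jointly (rather than one at a time) is what distinguishes grouped reliance from per-feature permutation importance: it respects correlations within the group.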