- Keywords: fMRI, encoding, autoencoder, deep learning
- TL;DR: A deep autoencoder combining a CNN with an LSTM (CNN-LSTM) that predicts the entire brain volume, rather than a small subset of voxels, from the information in the stimuli
- Abstract: Encoding models of functional magnetic resonance imaging (fMRI) data attempt to learn a forward mapping that relates stimuli to the corresponding brain activation. Computational tractability typically forces current encoding and decoding solutions to consider only a small subset of voxels from the actual 3D volume of activation. Further, while brain decoding has received wide attention, there have been only a few attempts at constructing encoding solutions in the extant neuroimaging literature. In this paper, we present a deep autoencoder model consisting of a convolutional neural network in tandem with a long short-term memory network (CNN-LSTM). The model is trained on fMRI slice sequences and predicts the entire brain volume, rather than a small subset of voxels, from the information in the stimuli (text and image). We argue that the resulting solution avoids the problem of devising encoding models based on a rule-based selection of informative voxels and the concomitant issue of wide spatial variability of such voxels across participants. Perturbation experiments indicate that the proposed deep encoder indeed learns to predict brain activations with high spatial accuracy. On challenging universal decoder imaging datasets, our model yielded encouraging results.
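The CNN-in-tandem-with-LSTM idea can be illustrated with a minimal numpy sketch. This is not the authors' architecture; all dimensions (8 slices of 12x12 voxels, a single 3x3 convolutional filter, a 32-unit LSTM) are toy assumptions chosen only to show how a convolutional encoder can feed an fMRI slice sequence to a recurrent cell whose final state is decoded back into a full brain volume, instead of a hand-picked voxel subset:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive 'valid'-mode 2D cross-correlation of one fMRI slice."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: four gates computed from [h, x] concatenation."""
    def __init__(self, input_dim, hidden_dim):
        self.hidden_dim = hidden_dim
        d = input_dim + hidden_dim
        # One (untrained, random) weight matrix per gate:
        # input, forget, output, candidate.
        self.Wi, self.Wf, self.Wo, self.Wc = (
            rng.normal(0, 0.1, (hidden_dim, d)) for _ in range(4)
        )

    def step(self, x, h, c):
        z = np.concatenate([h, x])
        i = sigmoid(self.Wi @ z)          # input gate
        f = sigmoid(self.Wf @ z)          # forget gate
        o = sigmoid(self.Wo @ z)          # output gate
        g = np.tanh(self.Wc @ z)          # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

# Toy "volume": 8 slices, each 12x12 voxels (assumed sizes, not the paper's).
n_slices, H, W = 8, 12, 12
volume = rng.normal(size=(n_slices, H, W))
kernel = rng.normal(0, 0.1, (3, 3))

# CNN encoder: convolve each slice, flatten to a feature vector of size 10*10.
features = [conv2d_valid(s, kernel).ravel() for s in volume]

# LSTM consumes the slice sequence; its final hidden state summarizes the volume.
cell = LSTMCell(input_dim=100, hidden_dim=32)
h, c = np.zeros(32), np.zeros(32)
for x in features:
    h, c = cell.step(x, h, c)

# Decoder: a linear map from the hidden state back to the full voxel space.
W_dec = rng.normal(0, 0.1, (n_slices * H * W, 32))
predicted = (W_dec @ h).reshape(n_slices, H, W)
print(predicted.shape)  # (8, 12, 12): the entire volume, not a voxel subset
```

In a real encoder the weights would be learned end-to-end from stimulus/fMRI pairs; here they are random and only the tensor flow (slices → features → recurrent summary → full-volume prediction) is demonstrated.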