MSTI-Former: A Multi-Scale Spatial-Temporal Information Enhanced Transformer for Attentive State Classification

Published: 01 Jan 2024 · Last Modified: 16 May 2025 · ISBI 2024 · CC BY-SA 4.0
Abstract: Attention is a cognitive process that is crucial to our daily lives; however, there has so far been limited exploration of deep learning methods for investigating attentive states. This work introduces a novel Multi-Scale Spatial-Temporal Information Enhanced Transformer (MSTI-Former) for classifying different attentive states. The network comprises three components: a spatial grouped convolution and a Multi-scale Temporal Information Enhanced (MTIE) module, which extract local spatial and temporal features, respectively; a multi-head self-attention module, which captures global information; and a classifier module, which produces the final predictions. Experimental results on a publicly available dataset show that our model achieves state-of-the-art performance across 26 subjects, helping to fill the gap in EEG-based attentive state classification.
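The abstract describes the architecture only at the component level. Below is a minimal PyTorch sketch of that three-part layout (local spatial and multi-scale temporal feature extraction, global self-attention, classifier). Every layer choice, kernel size, group count, and dimension here is an illustrative assumption, not the paper's actual configuration.

```python
import torch
import torch.nn as nn


class MSTIFormerSketch(nn.Module):
    """Hypothetical sketch of the three-component layout described above."""

    def __init__(self, n_channels=32, d_model=64, n_heads=4, n_classes=2,
                 kernel_sizes=(3, 7, 15, 31)):
        super().__init__()
        # Spatial grouped convolution: mixes EEG electrodes within groups
        # (groups=4 is an assumption, not the paper's setting).
        self.spatial = nn.Conv1d(n_channels, d_model, kernel_size=1, groups=4)
        # Multi-scale temporal branch (a stand-in for the MTIE module):
        # parallel 1-D convolutions with different kernel sizes capture
        # temporal patterns at several scales.
        branch_dim = d_model // len(kernel_sizes)
        self.temporal = nn.ModuleList(
            [nn.Conv1d(d_model, branch_dim, k, padding=k // 2)
             for k in kernel_sizes]
        )
        attn_dim = branch_dim * len(kernel_sizes)
        # Multi-head self-attention for global (whole-trial) context.
        self.attn = nn.MultiheadAttention(attn_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(attn_dim)
        # Classifier head over the time-averaged representation.
        self.head = nn.Linear(attn_dim, n_classes)

    def forward(self, x):              # x: (batch, n_channels, n_samples)
        x = self.spatial(x)            # local spatial mixing
        x = torch.cat([conv(x) for conv in self.temporal], dim=1)
        x = x.transpose(1, 2)          # (batch, time, features) for attention
        x = self.norm(x + self.attn(x, x, x)[0])
        return self.head(x.mean(dim=1))  # pool over time, then classify


# Example: a batch of 8 trials, 32 electrodes, 512 samples -> (8, 2) logits.
logits = MSTIFormerSketch()(torch.randn(8, 32, 512))
```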