Abstract: Eye movement is an emerging, highly secure behavioral biometric modality that has received increasing attention in recent years. Although deep neural networks, such as convolutional neural networks (CNNs), have recently achieved promising performance (e.g., the highest recognition accuracy on the GazeBase database), current solutions fail to capture both local and global temporal dependencies within eye movement data. To overcome this problem, in this article we propose a mixed Transformer, termed EmMixformer, that extracts time- and frequency-domain information for eye movement recognition. To this end, we design a mixed block consisting of three modules: a Transformer, an attention long short-term memory (LSTM), and a Fourier Transformer. First, to our knowledge, we are the first to leverage Transformers to learn long temporal dependencies in eye movement. Second, we incorporate an attention mechanism into the LSTM to form the attention LSTM (attLSTM), which learns short temporal dependencies. Third, we perform self-attention in the frequency domain to learn global dependencies and model the underlying periodicity. Because the three modules provide complementary feature representations of local and global dependencies, the proposed EmMixformer improves recognition accuracy. Experimental results on our eye movement dataset and two public eye movement datasets show that the proposed EmMixformer outperforms the state of the art (SOTA) by achieving the lowest verification error. The EMglasses database is available at https://github.com/HonyuZhu-s/CTBU-EMglasses-database.
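The abstract describes a mixed block that fuses three complementary branches: a Transformer (long temporal dependencies), an attention LSTM (short temporal dependencies), and frequency-domain self-attention (global/periodic structure). The following is a minimal PyTorch sketch of that idea; module names, dimensions, and the concatenation-based fusion are illustrative assumptions, not the authors' exact EmMixformer implementation.

```python
# Illustrative sketch of a three-branch "mixed block" (not the authors' code).
import torch
import torch.nn as nn


class AttentionLSTM(nn.Module):
    """LSTM branch with self-attention over its hidden states (short-range dependencies)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(x)              # (B, T, d_model)
        out, _ = self.attn(h, h, h)      # attend over LSTM hidden states
        return out


class FourierAttention(nn.Module):
    """Frequency-domain branch: self-attention on the FFT magnitude spectrum,
    resampled back to the original sequence length for fusion."""

    def __init__(self, d_model: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft(x, dim=1).abs()    # (B, T//2+1, d_model)
        out, _ = self.attn(spec, spec, spec)
        return nn.functional.interpolate(
            out.transpose(1, 2), size=x.size(1), mode="linear", align_corners=False
        ).transpose(1, 2)                        # back to (B, T, d_model)


class MixedBlock(nn.Module):
    """Concatenates the Transformer, attention-LSTM, and Fourier-attention
    branch outputs and projects them back to d_model."""

    def __init__(self, d_model: int = 64):
        super().__init__()
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.att_lstm = AttentionLSTM(d_model)
        self.fourier = FourierAttention(d_model)
        self.fuse = nn.Linear(3 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [self.transformer(x), self.att_lstm(x), self.fourier(x)], dim=-1
        )
        return self.fuse(feats)


if __name__ == "__main__":
    x = torch.randn(8, 500, 64)      # 8 gaze sequences, 500 time steps, 64-dim features
    print(MixedBlock(64)(x).shape)   # torch.Size([8, 500, 64])
```

In this sketch, each branch preserves the (batch, time, feature) shape so the three feature maps can be concatenated along the channel axis; how EmMixformer actually fuses its branches is detailed in the paper itself.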
External IDs: dblp:journals/tim/QinZJSEG25