Abstract: As a challenging pattern recognition task, automatic real-time emotion recognition based on multi-channel EEG signals is becoming an important computer-aided method for the diagnosis of emotional disorders in neurology and psychiatry. Traditional machine learning approaches require designing and extracting various features from single or multiple channels based on comprehensive domain knowledge. Consequently, these approaches may pose an obstacle for non-domain experts. In contrast, deep learning approaches have been used successfully in much recent work to learn features and classify different types of data. In this paper, baseline signals are taken into account and a simple but effective pre-processing method is proposed to improve recognition accuracy. Meanwhile, a hybrid neural network combining a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) is applied to classify human emotional states by effectively learning compositional spatial-temporal representations of raw EEG streams. The CNN module is used to mine inter-channel correlations among physically adjacent EEG channels by converting the chain-like EEG sequence into a sequence of 2D frames. The Long Short-Term Memory (LSTM) module is adopted to mine contextual information across frames. Experiments are carried out on a segment-level emotion identification task using the DEAP benchmark dataset. Our experimental results indicate that the proposed pre-processing method can increase emotion recognition accuracy by approximately 32%, and the model achieves high performance, with mean accuracies of 90.80% and 91.03% on the valence and arousal classification tasks, respectively.
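
The following is a minimal sketch, not the authors' released code, of the two ideas described above: subtracting a pre-stimulus baseline from each trial segment, and feeding per-time-step 2D channel maps through a CNN whose frame-level features are then modeled by an LSTM. The 9x9 channel-to-grid layout, window length, sampling rate, and layer sizes are illustrative assumptions rather than the paper's exact configuration.

```python
# Sketch (assumptions): baseline removal plus a CNN+LSTM classifier over 2D EEG "frames".
# The 9x9 grid, 128 Hz sampling rate, and layer widths are illustrative, not the paper's settings.
import torch
import torch.nn as nn

def remove_baseline(trial, baseline, fs=128, win=1):
    """Subtract the mean baseline window from every same-length trial window.

    trial:    (channels, trial_samples) stimulus-period EEG
    baseline: (channels, baseline_samples) pre-stimulus EEG
    """
    step = fs * win
    base_mean = baseline.reshape(baseline.shape[0], -1, step).mean(dim=1)   # (C, step)
    segs = trial.reshape(trial.shape[0], -1, step)                          # (C, S, step)
    return (segs - base_mean.unsqueeze(1)).reshape(trial.shape[0], -1)

class CnnLstmEmotionNet(nn.Module):
    """CNN over per-time-step 2D channel maps, LSTM over the resulting frame sequence."""
    def __init__(self, hidden=128, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(3), nn.Flatten())            # -> 64*3*3 features per frame
        self.lstm = nn.LSTM(64 * 3 * 3, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, frames):                                # frames: (B, T, 1, 9, 9)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.reshape(b * t, *frames.shape[2:]))   # (B*T, F)
        out, _ = self.lstm(feats.reshape(b, t, -1))
        return self.fc(out[:, -1])                            # classify from the last time step

# Example: a batch of 4 one-second segments, each 128 time steps of 9x9 frames.
if __name__ == "__main__":
    model = CnnLstmEmotionNet()
    logits = model(torch.randn(4, 128, 1, 9, 9))
    print(logits.shape)  # torch.Size([4, 2]) -> binary valence (or arousal) logits
```

In this sketch each sample at each time step is scattered into a sparse 2D grid according to electrode positions (the scatter itself is omitted), so the convolution can exploit spatial adjacency between channels before the LSTM aggregates temporal context.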