Foreground Extraction Based Facial Emotion Recognition Using Deep Learning Xception Model

Published: 01 Jan 2021, Last Modified: 11 Dec 2023, ICUFN 2021
Abstract: The facial emotion recognition (FER) system plays a significant role in the autonomous driving system (ADS). In an ADS, the FER system identifies the driver's emotions and reports the driver's current mental status to support safe driving. The driver's mental status affects the safety of the vehicle and helps prevent road accidents. The FER system identifies driver emotions such as happy, sad, angry, surprise, disgust, fear, and neutral. To recognize these emotions, the FER system must be trained on large FER datasets, and its performance depends heavily on the dataset used for model training. Recent FER systems use publicly available datasets such as FER2013, the extended Cohn-Kanade dataset (CK+), AffectNet, and JAFFE for model training. However, models trained on these datasets suffer from major flaws when extracting FER features. To address this feature extraction problem, in this paper we propose a foreground extraction technique for identifying user emotions. The proposed foreground extraction-based FER approach accurately extracts FER features, and the deep learning model in the system effectively uses these features during training. Training with our FER approach yields more accurate classification results than the conventional FER approach. To validate the proposed approach, we collected facial-emotion data from 9 participants and used the Xception architecture as the deep learning model. The experiments and result analysis show that the proposed foreground extraction-based approach reduces the classification error present in the conventional FER approach, improving model accuracy by 3.33% over the conventional FER approach.
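The abstract does not describe how the foreground (the face region) is separated from the background before Xception training. As one minimal, hypothetical illustration of the general idea, the sketch below masks out background pixels via simple background subtraction; the function names, threshold value, and toy data are assumptions for demonstration, not the authors' method.

```python
import numpy as np

def foreground_mask(frame, background, threshold=30):
    """Binary mask of pixels that differ from a static background image.

    frame, background: HxW uint8 grayscale arrays.
    threshold: absolute-difference cutoff (an assumed value, not from the paper).
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def extract_foreground(frame, background, threshold=30):
    """Zero out background pixels so only the foreground region remains."""
    mask = foreground_mask(frame, background, threshold)
    return frame * mask

# Toy example: a 4x4 "background" and a frame with one bright patch
# standing in for the face region.
bg = np.zeros((4, 4), dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200          # the "foreground" (face) patch
fg = extract_foreground(frame, bg)
```

In a real pipeline, a more robust segmenter (e.g. OpenCV's GrabCut or a learned segmentation network) would replace the thresholded difference, and the masked face crop would be resized to Xception's expected input resolution before training.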