Abstract: In emotion recognition, multimodal feature fusion is a versatile and adaptable approach to facial expression recognition, improving model performance by capturing complementary information from different modalities. In this study, we employ feature-level fusion, integrating CNN and HOG features. To predict continuous valence and arousal values, we use a feedforward neural network and gradient boosting. Performance is evaluated using Mean Squared Error (MSE) and Root Mean Squared Error (RMSE). The paper presents experiments on the ADFES dataset at low, medium, and high expression intensities, as well as on an augmented video dataset. The results show that, instead of relying on complex models, competitive accuracy can be achieved by combining different types of features with appropriate hyperparameter settings and tuning. The resulting approach is both robust and computationally efficient.
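To make the feature-level fusion pipeline described above concrete, the following is a minimal sketch in Python. It assumes synthetic 64x64 grayscale face crops and a single valence target (the paper predicts both valence and arousal); `extract_cnn_features` is a hypothetical placeholder using a fixed random projection in place of the learned embedding from a trained CNN. The two descriptors are concatenated per image and fed to the two regressors named in the abstract, scored with MSE and RMSE.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-ins: 64x64 grayscale face crops with continuous valence labels.
images = rng.random((200, 64, 64))
valence = rng.uniform(-1.0, 1.0, size=200)

def extract_hog_features(img):
    # Handcrafted appearance descriptor: histograms of oriented gradients.
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Fixed random projection as a stand-in for penultimate-layer CNN activations.
proj = rng.standard_normal((64 * 64, 128))

def extract_cnn_features(img):
    # Placeholder: in the actual pipeline these would come from a trained CNN.
    return img.reshape(-1) @ proj

# Feature-level fusion: concatenate both descriptors into one vector per image.
X = np.array([
    np.concatenate([extract_hog_features(im), extract_cnn_features(im)])
    for im in images
])

X_tr, X_te, y_tr, y_te = train_test_split(X, valence, test_size=0.25, random_state=0)

models = {
    "feedforward NN": MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
    "gradient boosting": GradientBoostingRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    print(f"{name}: MSE={mse:.4f}, RMSE={np.sqrt(mse):.4f}")
```

The same fused feature matrix serves both regressors, which is the practical appeal of feature-level fusion: the fusion step is a simple concatenation, so model complexity stays in the (comparatively cheap) regressor and its hyperparameters.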