Abstract: The rapid adoption of video calls for remote work and virtual collaboration has highlighted the critical need for high-quality user experiences in real-time communication platforms. Traditional methods for assessing users' Quality of Experience (QoE) often rely on subjective user feedback, which can be inconsistent and difficult to quantify. This study proposes a novel approach that leverages facial expression analysis to predict network instabilities during video calls. Our methodology integrates facial expression data with network performance metrics to predict potential instabilities in real time. Using facial expression data collected during video calls, we trained a multi-output Recurrent Neural Network (RNN) to predict network impairments. Our results demonstrate that the model effectively correlates facial expression features with network impairments, achieving high accuracy, particularly for video-related issues (for instance, 91.30% accuracy in predicting video packet loss).
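The multi-output setup described above can be sketched as follows. This is a minimal illustration only: the abstract does not specify the architecture, so the feature dimension, hidden size, impairment labels, and window length below are assumptions, not the authors' actual configuration.

```python
# Minimal sketch of a multi-output RNN mapping per-frame facial-expression
# features to per-impairment probabilities. All dimensions and impairment
# names are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class MultiOutputRNN(nn.Module):
    """LSTM over a window of facial-expression features, with one
    binary head per network impairment (multi-output prediction)."""

    def __init__(self, n_features=17, hidden=64,
                 impairments=("video_packet_loss", "audio_packet_loss")):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # One sigmoid head per impairment type.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, 1) for name in impairments}
        )

    def forward(self, x):
        # x: (batch, time, n_features) facial-expression feature sequences
        _, (h_n, _) = self.lstm(x)   # final hidden state summarizes the window
        h = h_n[-1]                  # (batch, hidden)
        return {name: torch.sigmoid(head(h)).squeeze(-1)
                for name, head in self.heads.items()}


# Usage: a batch of 8 windows, 30 frames each, 17 features per frame.
model = MultiOutputRNN()
probs = model(torch.randn(8, 30, 17))  # dict of (8,) probability tensors
```

Each head can be trained jointly with a per-impairment binary cross-entropy loss, which is one common way to realize multi-output prediction from a shared recurrent encoder.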