Modeling Virtual Reality Traffic with Head Movement in Remote Rendering

Published: 2025 · Last Modified: 04 Nov 2025 · ICC 2025 · CC BY-SA 4.0
Abstract: The proliferation of virtual reality (VR) content, particularly in resource-intensive applications, has been met with remote rendering to overcome local hardware limitations. Along with its many advantages, remote-rendered VR introduces a new traffic type characterized by high throughput and burstiness. Understanding and modeling this traffic is critical for VR networking optimizations that guarantee the Quality of Experience (QoE) of VR traffic transmission, including synthetic traffic generation and network slicing orchestration. However, existing VR traffic modeling studies are limited in that they do not consider the impact of user interactions on VR traffic. In contrast, we carry out extensive traffic measurements in this paper and discover that head movements actively affect the sizes of the VR frames generated. We analyze traffic features and quantitatively model the relationship between angular velocities and frame sizes. We build a linear regressor that predicts frame size by jointly considering historical frame sizes and angular velocities. We use Air Light VR (ALVR) to stream VR content in various scenarios, construct the datasets, and validate our model on them. The results show that our model reduces the 95th-percentile squared prediction error by 18-30 compared to the state-of-the-art model. To the best of our knowledge, this is the first investigation into the intricate relationship between remote-rendered VR traffic and head movement. Our dataset and results will be publicly available and reproducible.
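The abstract's core idea, predicting the next frame size from a window of past frame sizes together with the head's angular velocity via a linear regressor, can be sketched as follows. This is a hypothetical illustration only: the window length, feature layout, and synthetic data are assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical sketch of the paper's idea: an ordinary-least-squares
# linear regressor predicting the next frame size from the previous
# `window` frame sizes plus the current head angular velocity.
rng = np.random.default_rng(0)
n, window = 500, 4

# Synthetic data (assumed): frame sizes in bytes, loosely coupled to
# angular velocity in rad/s, mimicking the measured dependency.
ang_vel = np.abs(rng.normal(1.0, 0.5, n))
frame_size = 50_000 + 20_000 * ang_vel + rng.normal(0, 2_000, n)

# Design matrix rows: [size_{t-1}, ..., size_{t-window}, ang_vel_t, 1]
X = np.column_stack(
    [frame_size[window - k - 1 : n - k - 1] for k in range(window)]
    + [ang_vel[window:], np.ones(n - window)]
)
y = frame_size[window:]

# Fit by least squares and compute the 95th-percentile squared error,
# the metric the abstract reports improvements on.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
err_p95 = np.quantile((y - pred) ** 2, 0.95)
```

In practice the regressor would be fit on traces collected from the ALVR streaming sessions rather than synthetic data, and the feature window would be chosen empirically.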