Two-Layer FoV Prediction Model for Viewport Dependent Streaming of 360-Degree Videos

Published: 01 Jan 2018, Last Modified: 13 Nov 2024 · ChinaCom 2018 · CC BY-SA 4.0
Abstract: As the representative and most widely used content form of Virtual Reality (VR) applications, omnidirectional videos provide an immersive experience by rendering 360-degree scenes. Since only part of an omnidirectional video can be viewed at a time due to the characteristics of human vision, field-of-view (FoV) based transmission has been proposed: it preserves high quality within the FoV while lowering the quality outside it to reduce the amount of transmitted data. In this case, a transient drop in content quality occurs when the user's FoV changes, which can be mitigated by predicting the FoV in advance. In this paper, we propose a two-layer model for FoV prediction. The first layer detects content heat maps in an offline process, while the second layer predicts the FoV of a specific user online during his/her viewing session. We use an LSTM model to compute the viewing probability of each region from the first layer's results, the user's previous orientations, and the navigation speed. In addition, we set up a correction model to check and correct unreasonable results. The performance evaluation shows that our model achieves higher accuracy and less fluctuation than widely used approaches.
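As a rough illustration of the online second layer described in the abstract (not the authors' implementation), the sketch below shows one plausible way an LSTM could fuse a per-region content heat map with the user's past orientations and navigation speed to output per-region viewing probabilities. The 8x4 tiling, feature dimensions, hidden size, and sigmoid output head are all assumptions made for this sketch; the paper does not specify them here.

```python
# Hypothetical sketch, not the paper's code: an LSTM that combines the offline
# content heat map with the user's recent viewport orientations and navigation
# speed, and outputs a viewing probability for each region of the frame.
# Region count, feature layout, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

NUM_REGIONS = 8 * 4  # assumed 8x4 tiling of the equirectangular frame
ORIENT_DIM = 2       # yaw and pitch of the viewport centre
SPEED_DIM = 2        # angular navigation speed along yaw and pitch

class FoVPredictor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        in_dim = NUM_REGIONS + ORIENT_DIM + SPEED_DIM  # heat map + orientation + speed
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_REGIONS)     # per-region score

    def forward(self, heatmap, orientation, speed):
        # heatmap:     (batch, T, NUM_REGIONS) saliency per region from the offline layer
        # orientation: (batch, T, 2)           user's past viewport centres
        # speed:       (batch, T, 2)           user's navigation speed
        x = torch.cat([heatmap, orientation, speed], dim=-1)
        out, _ = self.lstm(x)
        # Probability that each region falls inside the next FoV
        return torch.sigmoid(self.head(out[:, -1]))

# Usage with random placeholder data (batch of 1, history of 10 time steps)
model = FoVPredictor()
probs = model(torch.rand(1, 10, NUM_REGIONS), torch.rand(1, 10, 2), torch.rand(1, 10, 2))
print(probs.shape)  # torch.Size([1, 32])
```

The correction step mentioned in the abstract would then post-process these probabilities, for example by rejecting predictions that imply implausibly large head movements, but its exact form is not detailed here.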