Double-Agent Deep Reinforcement Learning for Adaptive 360-Degree Video Streaming in Mobile Edge Computing Networks

Published: 01 Jan 2024, Last Modified: 29 Sept 2024 · INFOCOM (Workshops) 2024 · CC BY-SA 4.0
Abstract: The emerging mobile edge computing (MEC) technology effectively enhances the wireless streaming performance of 360-degree videos. By predicting a user's field of view (FoV) and precaching the video contents at the user's head-mounted device (HMD), it significantly improves the user's quality of experience (QoE) through stable and high-resolution video playback. In practice, however, the performance of edge video streaming is severely degraded by random FoV prediction errors and wireless channel fading effects. To address this, we propose in this paper a novel MEC-enabled adaptive 360-degree video streaming scheme to maximize the user's QoE. In particular, we design a two-stage transmission protocol consisting of a video precaching stage, which allows the edge server to send the video files to the user's HMD in advance based on a predicted FoV, and a real-time remedial transmission stage, which makes up for the missing files caused by FoV prediction errors. To achieve the maximum QoE, we formulate an online optimization problem that assigns the video encoding bitrates in the two stages on the fly. We then propose a double-agent deep reinforcement learning (DRL) framework consisting of two smart agents that decide the encoding bitrates on different time scales. Experiments based on real-world measurements show that the proposed two-stage DRL scheme can effectively mitigate FoV prediction errors and maintain stable QoE performance under various practical scenarios.
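The two-time-scale structure described in the abstract can be illustrated with a toy simulation. The sketch below is only a schematic, not the paper's actual method: the bitrate ladders, the QoE function, and the epsilon-greedy agents (standing in for the DRL policies) are all hypothetical placeholders. The precaching agent picks one bitrate per video segment (slow time scale), while the remedial agent picks a bitrate several times per segment (fast time scale) to patch the tiles missed due to FoV prediction error.

```python
import random

# Hypothetical discrete bitrate ladders (Mbps) for the two stages;
# values are illustrative, not taken from the paper.
PRECACHE_BITRATES = [1.0, 2.5, 5.0, 8.0]
REMEDIAL_BITRATES = [0.5, 1.0, 2.0]

class Agent:
    """Toy epsilon-greedy bandit agent standing in for a DRL policy."""
    def __init__(self, actions, eps=0.1, seed=0):
        self.actions = actions
        self.eps = eps
        self.q = [0.0] * len(actions)   # running mean reward per action
        self.n = [0] * len(actions)     # pull counts per action
        self.rng = random.Random(seed)

    def act(self):
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.actions))
        return max(range(len(self.actions)), key=lambda i: self.q[i])

    def update(self, i, reward):
        self.n[i] += 1
        self.q[i] += (reward - self.q[i]) / self.n[i]

def qoe(precache_rate, remedial_rate, miss_ratio):
    # Placeholder QoE: rate actually delivered to the viewport, minus a
    # penalty for quality mismatch between precached and remedial tiles.
    delivered = (1 - miss_ratio) * precache_rate + miss_ratio * remedial_rate
    return delivered - 0.5 * abs(precache_rate - remedial_rate)

precache_agent = Agent(PRECACHE_BITRATES, seed=1)   # slow time scale
remedial_agent = Agent(REMEDIAL_BITRATES, seed=2)   # fast time scale
rng = random.Random(3)

for segment in range(500):                  # one precache decision per segment
    a1 = precache_agent.act()
    for frame_group in range(5):            # several remedial decisions per segment
        a2 = remedial_agent.act()
        miss = rng.uniform(0.0, 0.4)        # random FoV prediction error
        r = qoe(PRECACHE_BITRATES[a1], REMEDIAL_BITRATES[a2], miss)
        remedial_agent.update(a2, r)
    precache_agent.update(a1, r)            # coarse feedback on the slow scale
```

In the paper's framework the two agents would be trained DRL policies conditioned on channel and FoV-prediction state; here the bandit agents merely show how the two decision loops interleave across time scales.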