Reinforcement Learning-based Orchestration of XR applications in Distributed 6G Cloud Infrastructures

Javad Sameri, José Santos, Sam Van Damme, Susanna Schwarzmann, Qing Wei, Riccardo Trivisonno, Filip De Turck, Maria Torres Vega

Published: 2025, Last Modified: 02 Mar 2026, CNSM 2025, CC BY-SA 4.0
Abstract: Extended Reality (XR) and holographic telepresence place stringent Quality of Service (QoS) demands on network infrastructure, requiring ultra-low latency, high throughput, and reliable connectivity. Meeting such QoS demands is critical in dynamic, distributed cloud environments, but does not always guarantee a satisfactory user experience. Quality of Experience (QoE) captures the user's perception of service performance, which may be influenced by factors not fully reflected in system-level metrics. Thus, novel orchestration strategies must consider both QoS and QoE. This paper proposes a Reinforcement Learning (RL)-driven approach to edge-cloud orchestration capable of adapting to dynamic network conditions, leveraging a multi-objective reward function, including both QoS and QoE aspects, to guide service placement decisions. Evaluation shows that our RL approach reaches a 21.3% QoE gain over heuristics and 14.7% over balanced strategies, with 100% request acceptance. The results highlight the robustness and scalability of RL-driven orchestration, particularly for latency-sensitive 6G applications. Our findings also reveal the limitations of traditional heuristics under complex objectives and highlight the potential of RL as a transformative tool for intelligent network and service management in next-generation communication systems.
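The abstract mentions a multi-objective reward blending QoS and QoE to guide placement decisions. A minimal sketch of how such a reward could be composed is shown below; the metric names, normalization, and weights are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical multi-objective reward mixing normalized QoS terms with a
# QoE score in [0, 1]. Budgets, targets, and weights are assumptions for
# illustration only, not values from the paper.

def multi_objective_reward(latency_ms, throughput_mbps, qoe_score,
                           latency_budget_ms=20.0,
                           throughput_target_mbps=100.0,
                           w_qos=0.5, w_qoe=0.5):
    """Return a scalar reward in [0, 1] combining QoS and QoE aspects."""
    # QoS: reward low latency and high throughput, each clipped to [0, 1].
    latency_term = max(0.0, 1.0 - latency_ms / latency_budget_ms)
    throughput_term = min(1.0, throughput_mbps / throughput_target_mbps)
    qos = 0.5 * (latency_term + throughput_term)
    # Weighted blend of QoS and the (already normalized) QoE score.
    return w_qos * qos + w_qoe * qoe_score
```

An RL agent would receive this scalar after each placement action, so that policies are pushed toward placements that satisfy both network-level and user-perceived quality.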