QoE-Driven Scheduling for Haptic Communications with Reinforcement Learning

Published: 01 Jan 2022 · Last Modified: 01 Oct 2024 · AsiaHaptics 2022 · CC BY-SA 4.0
Abstract: With the rise of the Tactile Internet (TI) over 5G networks, haptic teleoperation systems have attracted extensive attention as one of the key use cases of the TI. In a typical teleoperation setup, a human operator (i.e., the leader) interacts with a robot (i.e., the follower) in a remote environment through haptic input/output devices, with haptic information exchanged bilaterally between them. Because of the human-in-the-loop nature of haptic teleoperation systems, the quality of experience (QoE) becomes an important performance indicator of the system. It is well known that the performance of a teleoperation system degrades when communication latency exists between the leader and the follower. As a result, maximizing the overall QoE of teleoperation sessions sharing the same communication network becomes a significant challenge. Depending on the communication latency, different control schemes are applied to stabilize the teleoperation system. Since these control schemes have different sensitivities to communication delay, a QoE-delay model was recently developed to characterize the QoE performance of control schemes with respect to round-trip delays. In this paper, we take full advantage of the QoE-delay model and propose a novel reinforcement-learning-based scheduling algorithm for haptic communications that aims to maximize the overall QoE of all active sessions sharing the communication network. Simulation results confirm the efficiency of the proposed scheduling algorithm.
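The core idea of the abstract — a scheduler that uses a QoE-delay model to decide which session receives scarce low-delay network resources — can be sketched in a few lines. The QoE curves, session parameters, and delay reduction below are illustrative assumptions, not the paper's actual model; the tabular, bandit-style Q-learning loop is likewise a minimal stand-in for the proposed algorithm.

```python
import random

# Hypothetical QoE-delay curve: QoE in [0, 1] decays with round-trip
# delay (ms) at a rate set by the control scheme's delay sensitivity.
def qoe(delay_ms, sensitivity):
    return max(0.0, 1.0 - sensitivity * delay_ms / 100.0)

# Assumed sessions: each runs a control scheme with a different delay
# sensitivity; all see the same base network delay.
SESSIONS = [
    {"sensitivity": 0.9, "base_delay": 60.0},  # most delay-sensitive scheme
    {"sensitivity": 0.4, "base_delay": 60.0},  # most delay-tolerant scheme
    {"sensitivity": 0.7, "base_delay": 60.0},
]
PRIORITY_REDUCTION = 40.0  # ms saved by holding the single priority slot

def total_qoe(priority_idx):
    """Sum of per-session QoE when one session holds the priority slot."""
    score = 0.0
    for i, s in enumerate(SESSIONS):
        d = s["base_delay"] - (PRIORITY_REDUCTION if i == priority_idx else 0.0)
        score += qoe(d, s["sensitivity"])
    return score

def train(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning over one decision: who gets the slot."""
    rng = random.Random(seed)
    q = [0.0] * len(SESSIONS)
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(len(SESSIONS))          # explore
        else:
            a = max(range(len(SESSIONS)), key=q.__getitem__)  # exploit
        r = total_qoe(a)                              # reward = overall QoE
        q[a] += alpha * (r - q[a])
    return q

q = train()
best = max(range(len(SESSIONS)), key=q.__getitem__)
```

Under these assumed curves the learner awards the priority slot to the most delay-sensitive session, since that allocation yields the largest overall QoE — the same objective the paper's scheduler optimizes across all active sessions.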