Keywords: Multi-robot systems, Edge–cloud computing, Task offloading, Deep reinforcement learning (DRL)
Abstract: Robots increasingly perform computation-heavy tasks such as perception, simultaneous localization and mapping (SLAM), and motion planning. To address the limitations of onboard processors, edge–cloud computing enables robots to offload workloads to nearby edge servers or powerful remote cloud servers, thereby reducing computation delay and improving real-time performance. However, when multiple robots operate simultaneously, shared wireless bandwidth, time-varying channel conditions, and dynamic task arrivals make efficient task management a non-trivial problem [1]. In this paper, we propose a deep reinforcement learning (DRL) framework for joint computation offloading and bandwidth allocation in multi-robot systems. Each robot dynamically decides what portion of its tasks to execute locally, offload to the edge, or forward to the cloud, while the available communication bandwidth is split between uplink and downlink transmissions. By modelling the system as a Markov decision process (MDP), the proposed DRL-based approach adaptively minimizes task response time under dynamic and uncertain conditions, offering an intelligent solution for real-time robotic task management in edge–cloud environments.
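To make the MDP formulation described in the abstract concrete, the following is a minimal, hypothetical environment sketch in Python. It is not the authors' implementation: the `OffloadEnv` class, all rate and CPU-speed constants, and the specific state/action/reward shapes (state = per-robot task backlogs and channel gains; action = local/edge/cloud split fractions plus a bandwidth weight; reward = negative worst-case response time) are illustrative assumptions consistent with the abstract's description, not details taken from the paper.

```python
import numpy as np

class OffloadEnv:
    """Toy MDP for joint offloading and bandwidth allocation (illustrative only).

    State : per-robot task backlog (bits) and current uplink channel gain.
    Action: per-robot fractions (local, edge, cloud) plus an uplink bandwidth
            weight; fractions are normalized inside step().
    Reward: negative of the worst-case task response time across robots.
    All rates and capacities below are made-up placeholder constants.
    """

    def __init__(self, n_robots=4, seed=0):
        self.n = n_robots
        self.rng = np.random.default_rng(seed)
        self.f_local = 1e9          # local CPU speed (cycles/s), assumed
        self.f_edge = 8e9           # edge server speed, assumed
        self.f_cloud = 5e10         # cloud server speed, assumed
        self.B = 20e6               # shared wireless bandwidth (Hz), assumed
        self.cycles_per_bit = 500.0
        self.backhaul_rate = 50e6   # edge-to-cloud link (bit/s), assumed

    def reset(self):
        self.backlog = self.rng.uniform(1e6, 5e6, self.n)   # task sizes (bits)
        self.gain = self.rng.rayleigh(1.0, self.n)          # channel gains
        return np.concatenate([self.backlog, self.gain])

    def step(self, action):
        # action: (n, 4) array -> [p_local, p_edge, p_cloud, bw_weight] per robot
        a = np.clip(np.asarray(action, dtype=float).reshape(self.n, 4), 1e-6, None)
        split = a[:, :3] / a[:, :3].sum(axis=1, keepdims=True)
        bw = self.B * a[:, 3] / a[:, 3].sum()               # bandwidth shares

        # Shannon-style uplink rate with unit noise power (placeholder model).
        rate = bw * np.log2(1.0 + self.gain)

        bits = self.backlog
        t_local = split[:, 0] * bits * self.cycles_per_bit / self.f_local
        t_edge = (split[:, 1] * bits / rate
                  + split[:, 1] * bits * self.cycles_per_bit / self.f_edge)
        t_cloud = (split[:, 2] * bits / rate
                   + split[:, 2] * bits / self.backhaul_rate
                   + split[:, 2] * bits * self.cycles_per_bit / self.f_cloud)

        # Branches run in parallel: a robot's response time is its slowest one.
        response = np.maximum.reduce([t_local, t_edge, t_cloud])
        reward = -float(response.max())

        # New tasks arrive and channels fade between decision epochs.
        self.backlog = self.rng.uniform(1e6, 5e6, self.n)
        self.gain = self.rng.rayleigh(1.0, self.n)
        obs = np.concatenate([self.backlog, self.gain])
        return obs, reward, False, {}

env = OffloadEnv()
obs = env.reset()
obs, reward, done, _ = env.step(np.random.rand(4, 4))
print(f"reward (negative worst response time): {reward:.4f}")
```

Any standard DRL algorithm for continuous action spaces (e.g., PPO or SAC) could be trained against such an interface; the abstract does not specify which algorithm the paper uses.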
Submission Number: 37