A survey on computation offloading in edge systems: From the perspective of deep reinforcement learning approaches
Abstract: Driven by the demands of time-sensitive and data-intensive applications, edge computing has attracted wide attention as one of the cornerstones of modern service architectures. An edge-based system can facilitate flexible processing of tasks over heterogeneous resources; hence, computation offloading is the key technique for systematic service improvement. However, with the proliferation of devices, traditional approaches have clear limits in handling dynamic and heterogeneous systems at scale. Deep Reinforcement Learning (DRL) is a promising alternative: its powerful high-dimensional perception and decision-making capabilities can enable intelligent offloading, but the complexity of DRL-based algorithm design remains an obstacle. In light of this, this survey provides a comprehensive view of DRL-based approaches to computation offloading in edge computing systems. We cover state-of-the-art advances by delving into the fundamental elements of DRL algorithm design, focusing on the target environmental factors, Markov Decision Process (MDP) model construction, and refined learning strategies. Based on our investigation, we further highlight several open challenges, from the perspectives of both algorithm design and realistic requirements, that deserve more attention in future research.