Task-Driven Priority-Aware Computation Offloading Using Deep Reinforcement Learning

Published: 2025 · Last Modified: 26 Jan 2026 · IEEE Trans. Wirel. Commun. 2025 · CC BY-SA 4.0
Abstract: Computation offloading is an effective method for reducing network load and improving the service experience. However, most existing research on computation offloading is timeslot-driven and treats all tasks equally, resulting in decision waiting delays and the failure to complete some important tasks. In this paper, we propose a novel priority-aware, task-driven computation offloading model whose optimization objective is system performance gain, defined as a combination of task delay and energy consumption. The new model is formulated as a Markov decision process (MDP). To handle the discrete-continuous hybrid action space of the optimization problem, we construct a dependence-aware latent space and propose a novel algorithm based on the Twin Delayed Deep Deterministic policy gradient algorithm (TD3). Additionally, we present the neural network structure and analyze the complexity of the algorithm. Extensive simulations show that our algorithm achieves superior performance compared to three state-of-the-art alternative approaches.
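The abstract's central mechanism is mapping a continuous actor output (as TD3 produces) onto a discrete-continuous hybrid action. The paper's dependence-aware latent space is not specified here, so the sketch below is only an illustrative decoding scheme under assumed conventions: the actor emits a latent vector in [-1, 1], the first entries score the discrete offloading targets, and the last entry carries the continuous resource-allocation component. The function name and layout are hypothetical, not the authors' construction.

```python
import numpy as np

def decode_hybrid_action(latent, num_targets):
    """Decode a continuous latent action into a hybrid action.

    Assumes `latent` lies in [-1, 1]^(num_targets + 1): the first
    num_targets entries score the discrete offloading choices, and the
    final entry encodes a continuous resource fraction. This layout is
    an illustrative assumption, not the paper's exact latent space.
    """
    latent = np.asarray(latent, dtype=float)
    scores = latent[:num_targets]
    target = int(np.argmax(scores))            # discrete part: pick the best-scored target
    fraction = (latent[num_targets] + 1) / 2   # continuous part: rescale [-1, 1] to [0, 1]
    return target, fraction

# Example with 3 candidate offloading targets:
target, frac = decode_hybrid_action([-0.2, 0.9, 0.1, 0.5], num_targets=3)
print(target, frac)  # -> 1 0.75
```

Because the decoding is deterministic and differentiable almost everywhere in its continuous component, a TD3-style critic can still be trained on the latent action directly, which is one common way such hybrid spaces are handled.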