Deep Reinforcement Learning-Based AI Task Offloading in Resource-Constrained IIoT Computing Environments
Abstract: As a key enabler of Industry 4.0, the Industrial Internet of Things (IIoT) has been rapidly advancing, driving the increasingly widespread adoption of artificial intelligence (AI) in industrial production. However, the high computational demands of AI tasks contrast sharply with the limited computing resources available in industrial environments, highlighting the need for efficient task offloading strategies. This paper addresses AI task offloading under resource-constrained IIoT scenarios by proposing LSTM-Enhanced Hierarchical-Classification Offloading with DQN (LHC-DQN). The proposed framework adopts a two-layer offloading architecture: the first layer performs global scheduling by assigning tasks to cloud, edge, or device resources, while the second layer refines the allocation. Cloud and edge nodes apply mathematical optimization, whereas device nodes leverage DRL agents for autonomous decision-making. To handle AI task heterogeneity, a classification-aware mechanism is introduced in the first layer, deploying separate DQN agents for inference and training tasks to improve adaptability and efficiency. Furthermore, an LSTM module is integrated into the DQN backbone to capture temporal dependencies in task states. Experimental results in a simulated environment demonstrate that LHC-DQN significantly outperforms traditional methods, increasing task completion rates from approximately 47% to 68%. Ablation and generalization tests further confirm the robustness and effectiveness of the proposed method. Overall, LHC-DQN offers a practical and efficient solution for intelligent task offloading and resource scheduling in IIoT environments.
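The abstract describes an LSTM module integrated into the DQN backbone so that Q-value estimates can capture temporal dependencies across recent task states. The paper does not give implementation details, so the following is a minimal illustrative sketch in PyTorch, assuming a simple architecture of an LSTM feeding a linear Q-head; the dimensions, layer sizes, and the three-way action space (cloud / edge / device) are hypothetical choices for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class LSTMDQN(nn.Module):
    """Illustrative sketch (not the paper's implementation): a DQN whose
    backbone is an LSTM, so Q-values depend on a history of task states."""

    def __init__(self, state_dim: int, hidden_dim: int, n_actions: int):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, state_seq: torch.Tensor) -> torch.Tensor:
        # state_seq: (batch, seq_len, state_dim) -- a window of recent task states
        out, _ = self.lstm(state_seq)
        # Q-values computed from the hidden state at the last time step
        return self.q_head(out[:, -1, :])

# Hypothetical dimensions: 8 task-state features, 3 offloading actions
# (cloud / edge / device), matching the first-layer global scheduling choice.
net = LSTMDQN(state_dim=8, hidden_dim=32, n_actions=3)
q = net(torch.randn(4, 10, 8))   # batch of 4 state sequences, each of length 10
action = q.argmax(dim=-1)        # greedy offloading decision per task
```

In the classification-aware first layer, one such agent would be instantiated per task class (e.g. one for inference tasks and one for training tasks), each trained on its own experience stream.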
DOI: 10.1109/JIOT.2025.3620126