LsiA3CS: Deep-Reinforcement-Learning-Based Cloud-Edge Collaborative Task Scheduling in Large-Scale IIoT

Published: 01 Jan 2024 · Last Modified: 15 Nov 2024 · IEEE Internet of Things Journal, 2024 · License: CC BY-SA 4.0
Abstract: Task scheduling in the large-scale Industrial Internet of Things (IIoT) is characterized by diverse resources and the need for efficient, synchronized processing across distributed edge clouds, which together pose a significant challenge. This article proposes LsiA3CS, a task scheduling framework across edge clouds that employs deep reinforcement learning (DRL) and heuristic guidance to achieve distributed, asynchronous task scheduling for large-scale IIoT. Specifically, a Markov game-based model and the asynchronous advantage actor–critic (A3C) algorithm are leveraged to orchestrate diverse computational resources, effectively balancing workloads and reducing communication latency. Moreover, heuristic policy annealing and action masking further refine the framework's adaptability to the unpredictable requirements of large-scale IIoT systems. Extensive experimental evaluations are conducted on a simulated large-scale multiedge-cloud IIoT using real-world task data sets. The results show that LsiA3CS significantly reduces task completion times and energy consumption while managing unpredictable task arrivals and variable resource capacities.
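The abstract names two policy-side techniques, action masking and heuristic policy annealing, that are easiest to grasp with a concrete picture. The sketch below illustrates both in a single actor–critic head; it is a minimal illustration, not the paper's implementation, and it assumes PyTorch, an invented state/mask layout, and a linear annealing schedule (`MaskedActorCritic`, `anneal`, and `select_node` are all hypothetical names).

```python
# Minimal sketch (not the authors' code) of two ideas from the abstract:
# (1) action masking: ineligible edge-cloud nodes get -inf logits, and
# (2) heuristic policy annealing: the learned policy is mixed with a
#     heuristic prior whose weight decays over training.
# Network sizes, state layout, and the heuristic itself are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedActorCritic(nn.Module):
    """Actor-critic head for one scheduling agent (hypothetical layout)."""
    def __init__(self, state_dim: int, n_nodes: int, hidden: int = 128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_nodes)  # one logit per candidate node
        self.v = nn.Linear(hidden, 1)         # state value for the critic

    def forward(self, state: torch.Tensor, mask: torch.Tensor):
        h = self.body(state)
        # Action masking: overloaded/unreachable nodes are forced to -inf,
        # so the softmax assigns them exactly zero probability.
        logits = self.pi(h).masked_fill(~mask, float("-inf"))
        return F.softmax(logits, dim=-1), self.v(h)

def anneal(step: int, total: int, start: float = 0.9, end: float = 0.0):
    """Linearly decayed weight of the heuristic prior (assumed schedule)."""
    frac = min(step / total, 1.0)
    return start + (end - start) * frac

def select_node(model, state, mask, heuristic_prior, step, total_steps):
    """Sample a target node from the annealed mixture policy."""
    probs, _ = model(state, mask)
    w = anneal(step, total_steps)
    mixed = (1 - w) * probs + w * heuristic_prior  # heuristic fades out
    mixed = mixed / mixed.sum(dim=-1, keepdim=True)
    return torch.distributions.Categorical(mixed).sample()

# Usage with toy shapes: 8 candidate nodes, the last two currently ineligible.
n_nodes, state_dim = 8, 16
model = MaskedActorCritic(state_dim, n_nodes)
state = torch.randn(1, state_dim)
mask = torch.tensor([[True] * 6 + [False] * 2])
prior = mask.float() / mask.float().sum()  # hypothetical uniform heuristic
action = select_node(model, state, mask, prior, step=0, total_steps=10_000)
```

Masking at the logit level, rather than rejecting invalid samples after the fact, keeps the policy gradient well defined over only feasible assignments, and the annealed mixture lets a simple heuristic (e.g., shortest-queue) steer early exploration before the learned A3C policy takes over.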