Integrating Data Collection, Communication, and Computation for Importance-Aware Online Edge Learning Tasks

Published: 01 Jan 2025 · Last Modified: 15 May 2025 · IEEE Trans. Wirel. Commun. 2025 · CC BY-SA 4.0
Abstract: With the prevalence of real-time intelligence applications, online edge learning (OEL) has gained increasing attention due to its ability to rapidly access environmental data and improve artificial intelligence models through edge computing. However, the performance of OEL is intricately tied to the dynamic nature of incoming data in ever-changing environments, which does not conform to a stationary distribution. In this work, we develop a data importance-aware collection, communication, and computation integration framework that boosts training efficiency by leveraging the varying usefulness of data under dynamic network resources. A model convergence metric (MCM) is first derived to quantify data importance in mini-batch gradient descent (MGD)-based online learning tasks. To expedite model learning at the edge, we optimize the training batch configuration and fine-tune the acquisition of important data through coordinated scheduling, encompassing data sampling, transmission, and computational resource allocation. To cope with the timescale discrepancy and complex coupling of the decision variables, we design a two-timescale hierarchical reinforcement learning (TTHRL) algorithm that decomposes the original problem into two layers of subproblems and optimizes them separately in a mixed-timescale pattern. Experiments show that the proposed integration framework effectively improves online learning efficiency while stabilizing the caching queues in the system.
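To make the core idea of importance-aware training more concrete, the following is a minimal sketch of MGD-based online learning in which buffered samples are ranked by a per-sample gradient-norm proxy and the mini-batch is filled with the highest-ranked ones. This proxy and all helper names (`per_sample_grad`, `importance_scores`, `train_step`) are illustrative assumptions, not the paper's MCM or its joint sampling/communication/computation scheduler, which additionally accounts for network resources and cache-queue stability.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_sample_grad(w, X, y):
    """Squared-loss gradients, one row per buffered sample."""
    residual = X @ w - y                      # shape (n,)
    return residual[:, None] * X              # shape (n, d)

def importance_scores(w, X, y):
    """Proxy importance: per-sample gradient norm (larger = more informative).
    Stand-in for the paper's model convergence metric (MCM)."""
    return np.linalg.norm(per_sample_grad(w, X, y), axis=1)

def train_step(w, buffer_X, buffer_y, batch_size=8, lr=0.05):
    """Select the most important cached samples, then take one MGD step."""
    scores = importance_scores(w, buffer_X, buffer_y)
    top = np.argsort(scores)[-batch_size:]    # indices of highest-importance samples
    g = per_sample_grad(w, buffer_X[top], buffer_y[top]).mean(axis=0)
    return w - lr * g

# Simulate an online stream: new data arrives each slot, is cached in a
# bounded buffer (a simplified caching queue), and is consumed by training.
d = 5
true_w = rng.normal(size=d)
w = np.zeros(d)
buffer_X = np.empty((0, d))
buffer_y = np.empty(0)

for t in range(200):
    X_new = rng.normal(size=(4, d))           # freshly collected samples this slot
    y_new = X_new @ true_w + 0.1 * rng.normal(size=4)
    buffer_X = np.vstack([buffer_X, X_new])[-64:]        # keep at most 64 samples
    buffer_y = np.concatenate([buffer_y, y_new])[-64:]
    w = train_step(w, buffer_X, buffer_y)

print("parameter error:", np.linalg.norm(w - true_w))
```

In this toy setting, prioritizing high-gradient samples is one simple way to spend a limited per-slot training budget on the most informative data; the paper's framework generalizes this by jointly scheduling which data to sample, transmit, and compute on under dynamic resource constraints.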