AoI-Aware Inference Services in Edge Computing via Digital Twin Network Slicing

Published: 01 Jan 2024 · Last Modified: 22 Jul 2025 · IEEE Trans. Serv. Comput. 2024 · CC BY-SA 4.0
Abstract: Advances in Digital Twin (DT) technology pave the way for seamless cyber-physical integration under the Industry 4.0 initiative. Through continuous synchronization with their physical objects, DTs can power inference service models for the analysis, emulation, optimization, and prediction of those objects. With the proliferation of DTs, Digital Twin Network (DTN) slicing is emerging as a new paradigm by which service providers offer differentiated quality of service, where each DTN is a virtual network consisting of a set of inference service models fed with source data from a group of DTs, and these inference service models provide users with differentiated qualities of service. Mobile Edge Computing (MEC) shifts computing power towards the edge of core networks, making it well suited to delay-sensitive inference services. In this paper, we consider Age of Information (AoI)-aware inference service provisioning in an MEC network through DTN slicing requests, where the accuracy of the inference services provided by each DTN slice is determined by the Expected Age of Information (EAoI) of its inference model. Specifically, we first introduce a novel AoI-aware inference service framework for DTN slicing requests. We then formulate the expected cost minimization problem of jointly placing DT and inference service model instances, and develop efficient algorithms for the problem based on the proposed framework. We also consider dynamic DTN slicing request admissions where requests arrive one by one without knowledge of future arrivals, for which we devise an online algorithm with a provable competitive ratio, assuming that the DTs of all objects have already been placed. Finally, we evaluate the performance of the proposed algorithms through simulations. Simulation results demonstrate that the proposed algorithms are promising, and that the proposed online algorithm increases the number of admitted requests by more than 6% compared with its counterpart.
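The abstract does not state the paper's exact EAoI formulation. As a minimal sketch, the standard age-of-information notion can be written as follows, where t is the current time, U(t) is the generation time of the freshest DT update delivered by time t, and the expectation is taken as a long-run time average over an observation window [0, T]; the symbols and the averaging window are assumptions for illustration, not the paper's definitions.

% Standard AoI at time t (assumed notation):
\[
  \Delta(t) \;=\; t - U(t)
\]
% A time-averaged (expected) AoI over a long horizon (assumed formulation):
\[
  \mathrm{EAoI} \;=\; \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} \Delta(t)\, \mathrm{d}t
\]

Under this reading, a lower EAoI means the inference model is driven by fresher DT data, which the paper associates with higher inference accuracy for the corresponding DTN slice.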