Joint Optimization of Device Placement and Model Partitioning for Cooperative DNN Inference in Heterogeneous Edge Computing

Published: 01 Jan 2025, Last Modified: 24 Mar 2025, IEEE Trans. Mob. Comput. 2025, CC BY-SA 4.0
Abstract: EdgeAI is a compelling approach for deploying DNN models at the network edge through model partitioning. However, most existing partitioning strategies concentrate on homogeneous environments, neglecting the effect of device placement and rendering them inapplicable to heterogeneous settings. Moreover, these strategies often rely on either data parallelism or model parallelism, each with its own limitations, such as data synchronization and communication overhead. This paper aims to enhance inference performance by organizing devices into a pipeline system that leverages both the parallel and sequential relationships among them. Accordingly, the problem of Multi-Device Cooperative DNN Inference is formulated as a joint optimization of device placement and model partitioning that accounts for the distinct characteristics of heterogeneous edge resources and DNN models, with the goal of maximizing throughput. To this end, we propose an evolutionary device placement technique that determines the pipeline stage of each device by enhancing a variant of particle swarm optimization. Subsequently, an adaptive model partitioning strategy combines intra-layer and inter-layer model partitioning, based on dynamic programming and the input-output mapping of DNN layers respectively, to accommodate edge resource limitations. Finally, we build a simulation model and a prototype, and extensive results demonstrate that the proposed algorithm outperforms current state-of-the-art algorithms.
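The abstract names an evolutionary, PSO-based search for assigning heterogeneous devices to pipeline stages so that throughput (bounded by the slowest stage) is maximized. The paper's actual algorithm is not given here, so the following is only a minimal illustrative sketch of a discrete PSO-style placement search under simplifying assumptions: devices are characterized by a single compute speed, each pipeline stage has a fixed workload, and stage throughput is the sum of assigned device speeds divided by that workload. The names `throughput` and `pso_placement` are hypothetical, not from the paper.

```python
import random

def throughput(placement, speeds, stage_load):
    """Pipeline throughput = rate of the slowest stage.
    placement[d] is the stage index assigned to device d (hypothetical model)."""
    rates = []
    for s, load in enumerate(stage_load):
        cap = sum(sp for d, sp in enumerate(speeds) if placement[d] == s)
        rates.append(cap / load if cap > 0 else 0.0)
    return min(rates)

def pso_placement(speeds, stage_load, particles=20, iters=200, seed=0):
    """Discrete PSO-flavored search: particles are stage assignments; each
    dimension is probabilistically pulled toward the global best or mutated."""
    rng = random.Random(seed)
    n, k = len(speeds), len(stage_load)
    swarm = [[rng.randrange(k) for _ in range(n)] for _ in range(particles)]
    best = max(swarm, key=lambda p: throughput(p, speeds, stage_load))
    best_val = throughput(best, speeds, stage_load)
    for _ in range(iters):
        for p in swarm:
            for d in range(n):
                r = rng.random()
                if r < 0.4:
                    p[d] = best[d]            # attraction to global best
                elif r < 0.5:
                    p[d] = rng.randrange(k)   # random exploration
            v = throughput(p, speeds, stage_load)
            if v > best_val:
                best, best_val = p[:], v
    return best, best_val
```

For example, with device speeds `[1, 1, 2, 2]` and two equally loaded stages, the bottleneck-rate objective favors balanced placements such as pairing one slow and one fast device per stage (throughput 3.0) over grouping by speed (throughput 2.0). A real heterogeneous-edge formulation would also have to model communication overhead between stages, which this toy objective omits.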