Abstract: Recent advancements in Low Earth Orbit (LEO) satellites are facilitating the provision of Deep Neural Network (DNN)-based services with ubiquitous coverage via satellite computing. However, the computational demands and energy consumption of DNN models pose significant challenges for satellites with limited power and computation resources. Exploiting the layered structure of DNN models, a satellite-ground co-inference strategy has been introduced that executes certain layers on satellites and the remaining layers on ground servers. Determining the optimal layers for in-orbit processing, however, is non-trivial due to the under-explored energy consumption of satellite computing across different models and the restricted yet varying communication conditions of satellite-ground links. In this article, we first conduct a comprehensive measurement study to uncover the energy consumption of satellite computing across different layers and models. Summarizing the key observations, we develop a layer-specific energy consumption model tailored to diverse DNN architectures and kernels. We then investigate the energy-efficient satellite-ground co-inference problem and formulate it as an integer nonlinear programming problem, which exhibits high computational complexity. To tackle this complexity, we propose a satellite-ground co-inference algorithm that employs a branch-and-bound strategy, combined with the Sobol sequence and Lagrange multipliers, to reduce complexity and ensure stability across diverse DNN architectures. To evaluate the proposed algorithm, we conduct experiments based on real-world satellite parameters. The results demonstrate that our algorithm achieves average energy savings of 96% under various data volumes compared to existing benchmarks.
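To make the co-inference split decision concrete, the minimal sketch below enumerates candidate split points under a hypothetical per-layer satellite energy profile and a fixed downlink rate: layers before the split run in orbit, and the intermediate activation crossing the split is transmitted to the ground. All layer names, energy figures, and link parameters are illustrative assumptions, not values from the paper; the paper's actual method solves the integer nonlinear program with branch-and-bound, Sobol-sequence sampling, and Lagrange multipliers, which this brute-force search does not reproduce.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    sat_energy_j: float   # assumed energy to execute this layer on the satellite (J)
    out_size_bits: float  # size of this layer's output activation (bits)

def best_split(layers, tx_power_w, link_rate_bps, input_bits):
    """Pick the split index k minimizing satellite-side energy:
    layers[:k] run in orbit, layers[k:] run on the ground server.
    k = 0 downlinks the raw input; k = len(layers) is full on-board inference."""
    best_k, best_e = None, float("inf")
    for k in range(len(layers) + 1):
        compute_e = sum(l.sat_energy_j for l in layers[:k])
        # Data volume crossing the satellite-ground link at this split point.
        bits = input_bits if k == 0 else layers[k - 1].out_size_bits
        tx_e = tx_power_w * bits / link_rate_bps  # downlink transmission energy (J)
        total = compute_e + tx_e
        if total < best_e:
            best_k, best_e = k, total
    return best_k, best_e

# Toy usage with made-up numbers (hypothetical three-layer network).
net = [Layer("conv1", 0.8, 4e6), Layer("conv2", 1.1, 1e6), Layer("fc", 0.3, 8e3)]
k, e = best_split(net, tx_power_w=2.0, link_rate_bps=50e6, input_bits=16e6)
print(f"split after layer {k}, estimated satellite energy {e:.2f} J")
```

The exhaustive loop is O(L) per link condition and is only tractable for a single chain of layers; with per-layer placement variables and varying link rates, the search space grows combinatorially, which is why the paper resorts to a branch-and-bound strategy rather than enumeration.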