CrowdNet: Adaptive Collaborative Inference for Dynamic Mobile Intelligent Service

Published: 2025, Last Modified: 12 Jan 2026, IEEE Trans. Mob. Comput. 2025, CC BY-SA 4.0
Abstract: Deep neural networks (DNNs) such as convolutional networks and Transformers are increasingly deployed to provide intelligent services. However, running large-scale DNNs in an infrastructure-less mobile crowd faces the challenges of high resource consumption, poor performance, and low service availability. This paper proposes CrowdNet, a novel device-to-device (D2D) collaborative inference framework for dynamic mobile environments. At its core, CrowdNet introduces the CellNet architecture: lightweight DNNs designed for deployment and operation on resource-constrained mobile devices. CellNets can perform inference tasks either independently or collaboratively, ensuring robustness against network disruptions. A topology expansion method constructs an inference flow from the physical communication topology, enabling distributed execution of inference tasks. To handle the dynamic participation of mobile devices, CrowdNet employs fine-tuning-based adaptation for flexible assembly and collaborative inference, and a reinforcement learning (RL)-based approach optimizes the inference topology. Trained with a multi-objective optimization strategy, CrowdNet enhances overall performance while preserving the functionality of individual CellNets. Extensive experiments on a mobile network testbed with real-world datasets validate the effectiveness of CrowdNet on various intelligent tasks, showing substantial performance gains and robustness over state-of-the-art approaches.
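To make the abstract's idea of deriving an inference flow from the physical communication topology concrete, here is a minimal, hypothetical sketch. It is not the paper's actual topology expansion algorithm or CellNet design: the names CellNet, expand_topology, and collaborative_inference, the BFS-based flow construction, and the toy per-device model are all illustrative assumptions.

```python
# Hypothetical sketch: turning a physical D2D connectivity graph into a
# directed inference flow and running one lightweight model per device.
# All names and the BFS strategy are illustrative, not from the CrowdNet paper.
from collections import deque

import torch
import torch.nn as nn


class CellNet(nn.Module):
    """Toy stand-in for a lightweight per-device model."""
    def __init__(self, dim=32):
        super().__init__()
        self.block = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):
        return self.block(x)


def expand_topology(links, source):
    """BFS over the physical D2D links to obtain a directed inference flow.

    links: dict mapping device id -> set of neighbouring device ids.
    Returns a list of (sender, receiver) edges in execution order.
    """
    visited, queue, flow = {source}, deque([source]), []
    while queue:
        u = queue.popleft()
        for v in sorted(links[u]):
            if v not in visited:
                visited.add(v)
                flow.append((u, v))
                queue.append(v)
    return flow


def collaborative_inference(flow, models, x, source):
    """Run each device's CellNet along the flow, forwarding features downstream."""
    feats = {source: models[source](x)}
    for sender, receiver in flow:
        feats[receiver] = models[receiver](feats[sender])
    return feats


if __name__ == "__main__":
    links = {0: {1, 2}, 1: {0, 3}, 2: {0}, 3: {1}}   # physical topology
    models = {d: CellNet() for d in links}           # one CellNet per device
    flow = expand_topology(links, source=0)          # [(0, 1), (0, 2), (1, 3)]
    outputs = collaborative_inference(flow, models, torch.randn(1, 32), source=0)
    print(flow, outputs[3].shape)
```

In this reading, the RL-based component the abstract mentions would choose among candidate flows (rather than a fixed BFS order) to trade off latency, resource consumption, and availability; that selection step is omitted here.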