Abstract: Deep learning (DL) applications have rapidly evolved to address increasingly complex tasks by leveraging large-scale, resource-intensive models. However, deploying such models on low-power devices is neither practical nor economically scalable. Cloud-centric solutions satisfy these computational demands, but offloading every computation task incurs communication costs and latencies that are problematic for real-time applications. To mitigate these concerns, hierarchical inference (HI) frameworks have been proposed, enabling edge devices equipped with small ML models to collaborate with edge servers by selectively offloading complex tasks. Existing HI approaches offload each selected sample immediately, which can be inefficient due to frequent communication, especially in time-varying wireless environments. In this work, we introduce Batch HI, an approach that offloads samples in batches, thereby reducing communication overhead and improving system efficiency while achieving performance comparable to existing HI methods. Additionally, we derive the optimal batch size that strikes a balance between responsiveness and system time, tailored to specific user requirements. Numerical results confirm the effectiveness of our approach and highlight the scenarios where batching is particularly beneficial.