Abstract: Edge Video Analytics (EVA) has become a major application of pervasive computing, enabling real-time visual processing. EVA pipelines, composed of deep neural networks (DNNs), demand efficient inference serving under stringent latency requirements, which is challenging in dynamic Edge environments (e.g., workload variability and network instability). Moreover, EVA pipelines face significant resource contention due to resource (e.g., GPU) constraints at the Edge. In this paper, we introduce OctopInf, a novel resource-efficient and workload-aware inference serving system designed for real-time EVA. OctopInf tackles the unique challenges of dynamic Edge environments through fine-grained resource allocation, adaptive batching, and workload balancing between Edge devices and servers. Furthermore, we propose a spatiotemporal scheduling algorithm that optimizes the co-location of inference tasks on GPUs, improving performance and ensuring service-level objective (SLO) compliance. Extensive evaluations on a real-world testbed demonstrate the effectiveness of our approach: it achieves an effective throughput increase of up to 10× over the baselines and shows better robustness in challenging scenarios. OctopInf can be applied to any DNN-based EVA inference task with minimal adaptation and is available at https://github.com/tungngreen/PipelineScheduler.
External IDs: dblp:conf/percom/NguyenLTWCL24