Lightweight Configuration Adaptation With Multi-Teacher Reinforcement Learning for Live Video Analytics
Abstract: The proliferation of video data and advancements in Deep Neural Networks (DNNs) have greatly boosted live video analytics, driven by the growing video capture capabilities of mobile devices. However, resource limitations necessitate transmitting endpoint-collected videos to servers for inference. To meet real-time requirements and ensure accurate inference, video configurations must be adjusted at the endpoint. Traditional methods rely on deterministic strategies, making it difficult to adapt to dynamic network conditions and video content. Meanwhile, emerging learning-based schemes rely on trial-and-error exploration, which produces a pronounced long-tail effect on upload latency. In this paper, we propose a novel lightweight and robust configuration adaptation policy (LCA), which fuses heuristic and RL-based agents using multi-teacher knowledge distillation (MKD) theory. We first propose a content-sensitive, bandwidth-adaptive RL agent and introduce a Lyapunov-based optimization agent to ensure latency robustness. To leverage both agents’ strengths, we design a feature-guided multi-teacher distillation network that transfers their complementary advantages to a lightweight student. Experimental results on two vision tasks (pose estimation and semantic segmentation) demonstrate that LCA significantly reduces transmission latency (by 47.11%–89.55% on average and by 27.63%–88.78% at the 95th percentile) and computational overhead compared with prior work, while maintaining comparable inference accuracy.
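The abstract describes distilling two teacher policies (an RL agent and a Lyapunov-based agent) into one lightweight student. The PyTorch sketch below illustrates the general multi-teacher knowledge distillation idea only, not the paper's actual network: the student's softened configuration distribution is pulled toward each frozen teacher's via weighted KL terms. All module sizes, teacher weights, and the stand-in teacher models are placeholder assumptions, and the paper's feature-guidance component is omitted for brevity.

```python
# Minimal multi-teacher knowledge distillation (MKD) sketch.
# Assumptions (not from the paper): state_dim, num_configs, equal
# teacher weights, and simple MLPs standing in for both teachers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentPolicy(nn.Module):
    """Lightweight student mapping a state vector to configuration logits."""
    def __init__(self, state_dim: int, num_configs: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, num_configs),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def mkd_loss(student_logits, teacher_logits_list, teacher_weights,
             temperature: float = 2.0) -> torch.Tensor:
    """Weighted sum of KL divergences between the student's softened
    output distribution and each teacher's softened distribution."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    loss = torch.zeros((), dtype=student_logits.dtype)
    for w, t_logits in zip(teacher_weights, teacher_logits_list):
        p_teacher = F.softmax(t_logits / temperature, dim=-1)
        loss = loss + w * F.kl_div(log_p_student, p_teacher,
                                   reduction="batchmean")
    return loss * temperature ** 2  # standard KD gradient scaling

# Usage: one distillation step on random states with two frozen,
# stand-in teachers (placeholders for the RL and Lyapunov agents).
state_dim, num_configs = 8, 12  # assumed sizes
student = StudentPolicy(state_dim, num_configs)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

rl_teacher = StudentPolicy(state_dim, num_configs)
lyapunov_teacher = StudentPolicy(state_dim, num_configs)
for t in (rl_teacher, lyapunov_teacher):
    t.requires_grad_(False)  # teachers are frozen during distillation

states = torch.randn(32, state_dim)
with torch.no_grad():
    teacher_logits = [rl_teacher(states), lyapunov_teacher(states)]
loss = mkd_loss(student(states), teacher_logits, teacher_weights=[0.5, 0.5])
opt.zero_grad()
loss.backward()
opt.step()
```

In practice the teacher weights could be set per-sample (e.g., favoring the Lyapunov teacher when latency constraints are tight), but a fixed 0.5/0.5 split keeps the sketch simple.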