Abstract: Real-time video applications have received much attention in recent years. However, the perceived quality of real-time video in many situations is far from ideal due to two major obstacles: noise in video frames caused by limited camera hardware, and low resolution caused by bandwidth-limited networks. A straightforward solution is to invest directly in photography and networking hardware, but this is clearly cost-ineffective and unscalable. We are thus motivated to develop an alternative solution that leverages edge AI. We propose a new Real-time Edge-assist Video Enhancement (Real-EVE) framework. It includes two key designs: the video-enhancement deep neural network (VE-DNN), which jointly eliminates noise and super-resolves videos in real time with a small inference delay, and the video-enhancement-aware adaptive bitrate streaming (VEA-ABR), which adapts the sending rate in response to changing network conditions to optimize the video quality after enhancement. We develop a real-world prototype of the proposed Real-EVE, demonstrating that Real-EVE outperforms all benchmarks and that both the VE-DNN and the VEA-ABR bring substantial performance gains.
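To make the VEA-ABR idea concrete, the following is a minimal sketch, not the paper's algorithm: an ABR selector that ranks candidate bitrates by the quality predicted after the VE-DNN enhances the decoded frames, rather than by raw delivered quality. The function q_enhanced, the candidate bitrate list, and the throughput estimate are hypothetical placeholders introduced only for illustration.

```python
# Minimal sketch of an enhancement-aware bitrate selector.
# Assumptions (not from the paper): q_enhanced() is a hypothetical model that
# predicts post-enhancement quality for a given bitrate; candidate_bitrates and
# estimated_throughput come from the encoder and congestion controller.

from typing import Callable, Sequence


def select_bitrate(
    candidate_bitrates: Sequence[float],   # bitrate ladder offered by the encoder (kbps)
    estimated_throughput: float,           # current throughput estimate (kbps)
    q_enhanced: Callable[[float], float],  # hypothetical: predicted quality *after* VE-DNN enhancement
) -> float:
    """Pick the sustainable bitrate whose post-enhancement quality is highest."""
    # Keep only bitrates the network can currently sustain.
    feasible = [b for b in candidate_bitrates if b <= estimated_throughput]
    if not feasible:
        # No level fits; fall back to the lowest bitrate to avoid stalls.
        return min(candidate_bitrates)
    # Rank by predicted post-enhancement quality instead of raw bitrate.
    return max(feasible, key=q_enhanced)
```

The design point this sketch illustrates is that a low bitrate may still yield high perceived quality once the VE-DNN denoises and super-resolves the frames, so an enhancement-aware selector can legitimately prefer it over a marginally higher raw bitrate that leaves less headroom under changing network conditions.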