Abstract: Networked 360$^\circ$ video has become increasingly popular. Despite the immersive experience it offers users, its sheer data volume, even with the latest H.266 coding and viewport adaptation, remains a significant challenge for today's networks. Recent studies have shown that integrating deep learning into video coding can significantly enhance compression efficiency, providing new opportunities for high-quality video streaming. In this work, we conduct a comprehensive analysis of the potential and issues in applying neural codecs to 360$^\circ$ video streaming. We accordingly present $\mathsf{NETA}$, a synergistic streaming scheme that merges neural compression with traditional coding techniques, seamlessly implemented within an edge intelligence framework. To address the non-trivial challenges of short viewport prediction windows and time-varying viewing directions, we propose implicit-explicit buffer-based prefetching grounded in content visual saliency and bitrate adaptation with smart model switching around viewports. A novel Lyapunov-guided deep reinforcement learning algorithm is developed to maximize user experience and ensure long-term system stability. We further discuss concerns regarding practical development and deployment, and we have built a working prototype that verifies $\mathsf{NETA}$'s excellent performance. For instance, it achieves a 27% improvement in viewing quality, a 90% reduction in rebuffering time, and a 64% decrease in quality variation on average, compared to state-of-the-art approaches.
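For context, a Lyapunov-guided objective of the kind referenced above typically takes the standard drift-plus-penalty form; the sketch below is a generic illustration under assumed notation (queue backlogs $Q_i(t)$, control weight $V$, and a per-slot $\mathrm{QoE}(t)$ term), not necessarily $\mathsf{NETA}$'s exact formulation:
\[
\min_{\text{action at slot } t} \;\; \Delta L\big(\Theta(t)\big) \;-\; V \cdot \mathbb{E}\!\left[\,\mathrm{QoE}(t) \mid \Theta(t)\,\right],
\qquad
L\big(\Theta(t)\big) = \frac{1}{2}\sum_i Q_i(t)^2 ,
\]
where $\Theta(t)$ collects the queue states, $\Delta L$ is the one-slot conditional Lyapunov drift, and $V > 0$ trades off maximizing user experience against keeping the queues (and hence the system) stable in the long run.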