[AML] BP-LLM: A Large Language Model-based Approach for Accurate and Adaptable Bandwidth Prediction

THU 2024 Winter AML Submission 3 Authors

10 Dec 2024 (modified: 18 Dec 2024) · THU 2024 Winter AML Submission · CC BY 4.0
Keywords: bandwidth prediction, time series, large language models
Abstract: In the rapidly evolving streaming media landscape, accurate bandwidth prediction is crucial for optimizing user experience and resource utilization. This work introduces BP-LLM, a novel approach that leverages the capabilities of large language models (LLMs) to enhance bandwidth prediction. Traditional algorithms face significant limitations due to their reliance on historical data, inability to incorporate multimodal inputs, and challenges in generalization across diverse network conditions. BP-LLM addresses these challenges by employing the Transformer architecture to capture long-term dependencies in network traffic and integrating various input modalities—such as user location and communication latency—through text representations. Our method not only improves prediction accuracy but also demonstrates superior adaptability to new tasks and environments. Key contributions include the establishment of a comprehensive benchmark for evaluating bandwidth prediction algorithms across different application scenarios, the innovative application of LLMs to enrich feature representation and align textual and temporal data, and the demonstration of robust performance across various error metrics. The results indicate that BP-LLM outperforms state-of-the-art algorithms, providing reliable guidance for downstream tasks such as resource allocation and quality of service management. This advancement paves the way for more efficient network management strategies, enhancing the competitiveness of streaming media applications.
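The abstract describes aligning textual context (e.g., user location, communication latency) with the bandwidth time series so that an LLM can consume both. As a rough illustration of that prompt-based framing only, and not the paper's actual architecture or code, the sketch below serializes a short throughput history plus textual context into a single prompt and parses a numeric forecast from the model's completion. The function names (`build_prompt`, `query_llm`, `parse_prediction`) and all example values are hypothetical; `query_llm` is a stub standing in for whatever LLM inference call a real system would use.

```python
# Illustrative sketch (assumed prompt-based framing, not the authors' implementation):
# combine a bandwidth history window with textual context into one prompt for an LLM.

from typing import Sequence


def build_prompt(history_mbps: Sequence[float], location: str,
                 latency_ms: float, horizon: int = 1) -> str:
    """Serialize the bandwidth time series and textual context into a single prompt."""
    series = ", ".join(f"{x:.2f}" for x in history_mbps)
    return (
        "You are a network bandwidth forecaster.\n"
        f"Context: user location = {location}; round-trip latency = {latency_ms:.0f} ms.\n"
        f"Recent throughput (Mbps, 1-second intervals): {series}\n"
        f"Predict the next {horizon} value(s) in Mbps, comma-separated, numbers only."
    )


def query_llm(prompt: str) -> str:
    """Hypothetical placeholder: a real system would send `prompt` to an LLM
    inference endpoint and return its text completion. A fixed string is returned
    here only so the sketch runs end to end."""
    return "23.10"


def parse_prediction(completion: str) -> list[float]:
    """Parse the comma-separated numeric completion back into bandwidth values."""
    return [float(tok) for tok in completion.replace(",", " ").split()]


if __name__ == "__main__":
    history = [21.4, 22.0, 20.8, 23.5, 22.9]  # Mbps, most recent value last
    prompt = build_prompt(history, location="Beijing, mobile (5G)", latency_ms=38)
    print(prompt)
    print("Predicted next bandwidth (Mbps):", parse_prediction(query_llm(prompt)))
```

In practice the choice of serialization (plain numbers vs. quantized tokens) and of which context fields to verbalize would follow the paper's own design; the above merely shows how temporal and textual inputs can share one textual representation.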
Submission Number: 3