Abstract: In response to the low efficiency and high transmission latency of traditional centralized content delivery networks, especially under congestion, edge caching has emerged as a promising paradigm that brings content closer to the network edge. However, traditional content delivery methods can still leave cache resources underutilized. To tackle this challenge, this paper investigates a content recommendation-based edge caching method in multi-tier edge-cloud networks, jointly considering content delivery and cache replacement decisions as well as bandwidth allocation strategies. First, we consider a multi-tier edge caching-enabled content delivery network architecture combined with a content recommendation system and formulate an optimization problem whose objective is to minimize long-term content delivery delay and maximize the cache hit rate. Second, given time-varying system environments and uncertain content demands, we model the optimization of content delivery and cache replacement for each agent as a Partially Observable Markov Decision Process (POMDP) and propose a single-agent Deep Deterministic Policy Gradient (DDPG)-based method. We then extend the POMDP to a multi-agent scenario. To keep agents from converging to local optima and to build more personalized models, we propose a Federated Distributed DDPG-based method (FD3PG) for the multi-agent system. Finally, simulation results demonstrate that FD3PG achieves lower delivery delay and a higher cache hit rate than other baselines across various scenarios: compared with FADE, MADRL, and DDPG, it reduces average delivery delay by approximately 10%, 11%, and 35% on the Synthetic dataset, and by 12%, 14%, and 48% on the MovieLens Latest Small dataset, respectively.
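The federated multi-agent training described above can be illustrated with a minimal sketch of the server-side aggregation step. This is an illustrative FedAvg-style averaging of per-agent actor parameters, not the paper's actual FD3PG algorithm; the function name, parameter representation (flat lists of floats), and uniform weighting are assumptions for exposition.

```python
# Hypothetical sketch: each edge agent trains a local DDPG actor, and a
# coordinator periodically aggregates the actor parameters so agents share
# experience without exchanging raw data. Weighted averaging (FedAvg-style)
# is assumed here; FD3PG's exact aggregation rule may differ.

def fedavg(local_params, weights=None):
    """Weighted average of per-agent parameter vectors (lists of floats)."""
    n = len(local_params)
    if weights is None:
        weights = [1.0 / n] * n  # uniform weighting by default
    aggregated = [0.0] * len(local_params[0])
    for params, w in zip(local_params, weights):
        for i, p in enumerate(params):
            aggregated[i] += w * p
    return aggregated

# Three agents' actor parameters after one round of local DDPG updates
agents = [[0.2, 1.0], [0.4, 2.0], [0.6, 3.0]]
global_params = fedavg(agents)
```

In a personalized variant, each agent could blend `global_params` back into its local actor with a mixing coefficient rather than overwriting it, which is one common way to balance shared knowledge against per-agent specialization.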