SACC: A Size Adaptive Content Caching Algorithm in Fog/Edge Computing Using Deep Reinforcement Learning

Published: 01 Jan 2022, Last Modified: 08 Apr 2025 · IEEE Trans. Emerg. Top. Comput. 2022 · CC BY-SA 4.0
Abstract: Edge/fog caching is a promising way to mitigate data traffic in both traditional wireline/wireless networks and 5G networks. Recently, deep reinforcement learning (DRL) has been adopted to provide more powerful content caching policies. Existing DRL-based schemes, however, assume that all requests are for content of the same size and update the cache after every request, whereas real-world data delivery systems usually refresh the content cache periodically and serve requests of varying sizes. To satisfy these real-world requirements, this study proposes a novel size-adaptive content caching algorithm using DRL, termed SACC. SACC models requests with random sizes and updates the cache after a batch of requests. Technically, SACC adopts the Actor-Critic framework, which can handle a large discrete action space. SACC comprehensively considers short-, medium-, and long-term requests as the state for training the actor network, and models the reward as the cache hit rate. Once an action is selected from the policy network, it is expanded to its k nearest neighbors, and the critic network selects the action with the best reward among these k candidates. The performance of the proposed SACC is evaluated through computer simulation. The experimental results show that SACC trains the network much more efficiently and improves the cache hit rate by as much as 4% compared to the state-of-the-art DRL-based scheme.
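The k-nearest-neighbor action expansion described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the actor's output is treated as a continuous "proto-action", its k nearest neighbors in the discrete caching-action space are gathered, and a critic function scores each candidate so the highest-scoring one is executed. The function names, the Euclidean distance metric, and the toy critic are all assumptions for illustration.

```python
import numpy as np

def select_action(proto_action, action_space, critic_q, state, k=5):
    """Pick the best of the k discrete actions nearest to the actor's proto-action.

    All names are illustrative; the paper's actual networks and metrics may differ.
    """
    # Distance from the proto-action to every discrete caching action.
    dists = np.linalg.norm(action_space - proto_action, axis=1)
    # Expand the proto-action to its k nearest discrete neighbors.
    neighbors = action_space[np.argsort(dists)[:k]]
    # Score each candidate with the critic (predicted reward, e.g. cache hit rate)
    # and return the best one.
    q_values = np.array([critic_q(state, a) for a in neighbors])
    return neighbors[np.argmax(q_values)]

# Toy usage: 1-D "actions" index 10 cacheable contents; a dummy critic
# prefers actions close to index 7.
action_space = np.arange(10, dtype=float).reshape(-1, 1)
critic_q = lambda s, a: -abs(a[0] - 7.0)
best = select_action(np.array([5.2]), action_space, critic_q, state=None, k=3)
print(best)  # prints [6.]: of the 3 neighbors {5, 6, 4}, the critic scores 6 highest
```

Restricting the critic's evaluation to k neighbors is what keeps this approach tractable when the discrete action space (all possible cache placements) is very large.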