3L-Cache: Low Overhead and Precise Learning-based Eviction Policy for Caches
Abstract: Caches can effectively reduce request latency and network
traffic, with the eviction policy serving as a core component.
The effectiveness of an eviction policy is measured by both the
byte miss ratio and the object miss ratio. To reduce these miss
ratios, various learning-based policies have been proposed.
However, the substantial computation overhead introduced by
learning limits their deployment in production systems.
This work presents 3L-Cache, an object-level learning policy with Low computation overhead, while achieving the
Lowest object miss ratio and the Lowest byte miss ratio
among learning-based policies. To reduce overhead, we introduce two key techniques. First, we propose an efficient training data collection scheme that filters out unnecessary historical cache requests and dynamically adjusts the training frequency without compromising accuracy. Second, we design a low-overhead eviction method that combines a bidirectional sampling policy, which prioritizes unpopular objects, with an efficient strategy for selecting eviction victims. Furthermore, we incorporate a parameter auto-tuning method to enhance adaptability across traces.
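The abstract does not specify how bidirectional sampling is implemented; a minimal illustrative sketch, assuming a recency-ordered queue sampled from both ends and a stand-in scoring function (all names here are hypothetical, not the paper's actual design):

```python
from collections import OrderedDict

def bidirectional_sample(queue, k):
    # Draw eviction candidates from both ends of a recency-ordered
    # queue (oldest entries first), so cold objects at the head and
    # recently admitted one-hit objects at the tail are both covered.
    keys = list(queue)
    k = min(k, len(keys))
    half = k // 2
    cands = keys[:half] + keys[-(k - half):]
    return list(dict.fromkeys(cands))  # dedupe if the ends overlap

def evict_one(queue, score, k=8):
    # Evict the sampled candidate with the lowest predicted reuse
    # score; `score` stands in for the learned predictor.
    victim = min(bidirectional_sample(queue, k), key=score)
    queue.pop(victim)
    return victim

# Toy usage: 10 objects, score = object id (lower = less popular).
cache = OrderedDict((i, None) for i in range(10))
victim = evict_one(cache, score=lambda key: key)
```
Sampling a small, fixed number of candidates per eviction keeps the per-request cost bounded, which is consistent with the low-overhead goal stated above.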
We evaluate 3L-Cache in a testbed using 4855 traces. The
results show that 3L-Cache reduces the average CPU overhead by 60.9% compared to HALP and by 94.9% compared
to LRB. Additionally, 3L-Cache incurs only 6.4× the average
overhead of LRU for small cache sizes and 3.4× for large
cache sizes, while achieving the best byte miss ratio or object
miss ratio among twelve state-of-the-art policies.