twCache: Thread-Wise Cache Management with High Concurrency Performance

Published: 2025 · Last Modified: 22 Jan 2026 · ICDE 2025 · CC BY-SA 4.0
Abstract: Cache management is a critical concern for both key-value stores and relational DBMSs. The most significant challenge in cache management is the cache replacement strategy, which directly affects the throughput and latency of the cache manager. While the Least Recently Used (LRU) policy is widely adopted, it suffers from severe performance degradation in multi-threaded environments due to lock contention, which arises when multiple threads attempt to update the LRU list simultaneously. Motivated by this issue, we propose a new cache management scheme called twCache, designed to deliver high performance in concurrent environments. The novelty of twCache lies in two key aspects. First, it partitions the replacement policy data structure into thread-wise sublists, each corresponding to one thread. This structure isolates threads so that requests from one thread never contend for locks with requests from another, yielding high concurrency. Second, we propose a low-cost technique that combines recency and hotness for victim selection during cache replacement. Each sublist is maintained as an LRU list, capturing the recency of object requests, while each cached object carries a hot count reflecting its hotness, defined as the number of sublists that reference the object. We conducted extensive experiments to compare twCache with traditional algorithms (LRU, FIFO, and 2Q) and the state-of-the-art FrozenHot policy, using three types of traces: 39 Twitter traces, 23 MSR traces, and 6 YCSB workloads. The results show that twCache achieves $12\times$ and $7\times$ higher throughput than LRU on the Twitter and MSR traces, respectively, and outperforms LRU by $4.8\times$ in average throughput under the YCSB workloads.
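To make the design concrete, below is a minimal, hypothetical C++ sketch of the thread-wise sublist idea as described in the abstract: each thread owns a private LRU sublist guarded by its own lock, and an object's hot count records how many sublists currently reference it. All class and member names, and the exact victim-selection rule (evicting the least-recent object whose hot count is at most 1), are illustrative assumptions, not the paper's published implementation.

```cpp
// Hypothetical sketch of twCache's thread-wise sublists (assumed names and
// eviction rule; not the paper's actual code). A global key->object table
// (omitted here) is assumed to map requests to CachedObject pointers.
#include <atomic>
#include <iterator>
#include <list>
#include <mutex>
#include <optional>
#include <string>
#include <unordered_map>

struct CachedObject {
    std::string key;
    std::string value;
    std::atomic<int> hot_count{0};  // number of sublists referencing this object
};

// One LRU sublist per thread: an access from thread t only takes sublist t's
// lock, so threads do not contend on a single global LRU list.
class ThreadSublist {
public:
    // Record an access: move the object to the MRU end of this sublist.
    // If the object was not in this sublist yet, its hot count grows,
    // reflecting that one more thread's sublist now references it.
    void touch(CachedObject* obj) {
        std::lock_guard<std::mutex> g(lock_);  // per-thread lock, rarely contended
        auto it = pos_.find(obj->key);
        if (it != pos_.end()) {
            lru_.erase(it->second);
        } else {
            obj->hot_count.fetch_add(1, std::memory_order_relaxed);
        }
        lru_.push_front(obj);
        pos_[obj->key] = lru_.begin();
    }

    // Assumed victim-selection rule: scan from the LRU end and evict the
    // first object whose hot count is <= 1, i.e. a cold object seen only by
    // this thread; hotter objects referenced by other sublists are skipped.
    std::optional<std::string> evict_cold_victim() {
        std::lock_guard<std::mutex> g(lock_);
        for (auto it = lru_.rbegin(); it != lru_.rend(); ++it) {
            if ((*it)->hot_count.load(std::memory_order_relaxed) <= 1) {
                std::string victim = (*it)->key;
                (*it)->hot_count.fetch_sub(1, std::memory_order_relaxed);
                pos_.erase(victim);
                lru_.erase(std::next(it).base());  // erase via reverse iterator
                return victim;  // caller removes it from the global table
            }
        }
        return std::nullopt;  // every resident object is hot; caller falls back
    }

private:
    std::mutex lock_;
    std::list<CachedObject*> lru_;
    std::unordered_map<std::string, std::list<CachedObject*>::iterator> pos_;
};
```

Under this reading, the hot count gives a cheap cross-thread hotness signal without any shared LRU state: an object referenced by many sublists is hot and survives eviction, while recency within each sublist is tracked by ordinary, uncontended LRU moves.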