Abstract: Contemporary data center CPUs are experiencing an unprecedented surge in core count. This trend calls for a careful re-examination of Last-Level Cache (LLC) strategies to accommodate increasing capacity demands. While DRAM offers significant capacity, using it as a cache poses challenges in latency and energy. This paper introduces Native DRAM Cache (NDC), a novel DRAM architecture specifically designed to operate as a cache. NDC features innovative approaches, such as conducting tag matching and way selection within a DRAM subarray and repurposing existing precharge transistors for tag matching. These innovations facilitate Caching-In-Memory (CIM) and enable NDC to serve as a high-capacity LLC with high set-associativity, low latency, high throughput, and low energy. Our evaluation demonstrates that NDC significantly outperforms state-of-the-art DRAM cache solutions, improving performance by $\mathbf{2.8\%}/\mathbf{52.5\%}/\mathbf{44.2\%}$ (up to $8.4\%/140.6\%/85.5\%$) on the SPEC/NPB/GAP benchmark suites, respectively.