Prompt Cache: Modular Attention Reuse for Low-Latency Inference
In Gim, Guojun Chen, Seung-Seob Lee, Nikhil Sarda, Anurag Khandelwal, Lin Zhong
Published: 01 Jan 2024, Last Modified: 21 May 2025
MLSys 2024
License: CC BY-SA 4.0