Simple linear attention language models balance the recall-throughput tradeoff
Simran Arora, Sabri Eyuboglu, Michael Zhang, Aman Timalsina, Silas Alberti, James Zou, Atri Rudra, Christopher Ré
Published: 01 Jan 2024, Last Modified: 03 Oct 2024
ICML 2024
License: CC BY-SA 4.0