Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration

Zhongzhi Yu, Zheng Wang, Yonggan Fu, Huihong Shi, Khalid Shaikh, Yingyan Celine Lin

Published: 01 Jan 2024 · Last Modified: 30 May 2025 · ICML 2024 · CC BY-SA 4.0