LLaMA-Adapter: Efficient Fine-tuning of Large Language Models with Zero-initialized Attention

Published: 01 Jan 2024 · Last Modified: 13 May 2025 · ICLR 2024 · CC BY-SA 4.0