Quantum-Amplitude Embedded Adaptation for Parameter-Efficient Fine-Tuning in LLMs

Published: 29 Jul 2025, Last Modified: 29 Jul 2025, PQAI 2025 Oral, CC BY 4.0
Keywords: Quantum Computing, Large Language Model
TL;DR: QAA replaces linear adapters in attention modules with compact quantum modules, using amplitude embedding and parameterized quantum circuits for parameter-efficient LLM fine-tuning.
Abstract: Large language models (LLMs) require substantial resources for task-specific adaptation, which motivates the development of parameter-efficient fine-tuning (PEFT) methods. This paper presents quantum-amplitude embedded adaptation (QAA), a novel PEFT framework that logarithmically compresses activation vectors using quantum-amplitude embedding and applies expressive non-linear transformations via parameterized quantum circuits (PQCs). By replacing linear adapters in attention modules with compact quantum modules, QAA achieves high expressivity while drastically reducing the number of trainable parameters. Empirical results demonstrate that QAA performs on par with or better than existing PEFT methods under constrained memory and compute budgets, highlighting its potential for efficient LLM fine-tuning. (We would like the paper to be considered as an extended abstract, if accepted.)
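To make the mechanism concrete, below is a minimal, hypothetical sketch of the QAA idea as the abstract describes it, written with PennyLane and PyTorch. The module structure, circuit ansatz (`StronglyEntanglingLayers`), and hyperparameters (`n_qubits`, `n_layers`, the down-projection) are illustrative assumptions, not the authors' implementation; the abstract confirms only amplitude embedding for logarithmic compression and a PQC in place of a linear adapter.

```python
# Hypothetical sketch of a quantum adapter module (not the paper's code).
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 4   # amplitude embedding packs 2**4 = 16 activations into 4 qubits
n_layers = 2   # assumed depth of the parameterized quantum circuit (PQC)

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def pqc_adapter(inputs, weights):
    # Logarithmic compression: a 16-dim activation slice becomes a 4-qubit state.
    qml.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True)
    # Expressive non-linear transformation via entangling rotation layers.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class QuantumAdapter(nn.Module):
    """Sketch of a drop-in replacement for a linear adapter in an attention module."""
    def __init__(self, d_model: int):
        super().__init__()
        # Assumed projections to and from the small quantum register.
        self.down = nn.Linear(d_model, 2 ** n_qubits, bias=False)
        weight_shapes = {"weights": (n_layers, n_qubits, 3)}
        self.quantum = qml.qnn.TorchLayer(pqc_adapter, weight_shapes)
        self.up = nn.Linear(n_qubits, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adapter form, as is common in PEFT methods.
        return x + self.up(self.quantum(self.down(x)))

# Usage: adapter = QuantumAdapter(d_model=768); y = adapter(torch.randn(2, 768))
```

Note how the trainable PQC carries only n_layers * n_qubits * 3 = 24 rotation parameters, which is the source of the parameter savings the abstract claims; in practice the projection layers would also need to be kept small or frozen.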
Submission Number: 13