Keywords: Continual Learning, Parameter-efficient Finetuning, Quantum-Inspired
TL;DR: The first CL framework that enables non-frozen LoRA adapters through quantum-inspired mechanisms, achieving backward knowledge transfer without catastrophic forgetting
Abstract: Continual learning (CL) with parameter-efficient methods such as LoRA prevents catastrophic forgetting, but it sacrifices cross-task knowledge transfer by freezing previous adapters. We observe that this dilemma mirrors a central question of quantum mechanics: how can multiple states coexist and interact? Qu-LoRA models task-specific LoRA adapters as quantum states in superposition, translating three quantum principles into concrete mechanisms: (1) superposition lets tasks coexist through phase-controlled interference; (2) entanglement governs gradient sharing between related tasks while shielding unrelated ones; (3) measurement collapse removes the need for task-identity labels, letting inputs select relevant knowledge through interference patterns. Unlike frozen approaches, Qu-LoRA achieves backward transfer: previous tasks improve from subsequent learning, while forgetting is reduced by 75\%. Experiments demonstrate superior performance across benchmarks, establishing quantum mechanics as a powerful framework for CL.
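The phase-controlled superposition of adapters can be illustrated with a minimal sketch. This is not the authors' implementation; all names, shapes, and the complex-amplitude mixing rule below are illustrative assumptions about how task-specific low-rank updates could interfere constructively or destructively depending on relative phase:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # model dimension and LoRA rank (illustrative values)

# Two task-specific LoRA updates: delta_W_t = B_t @ A_t
A = [rng.standard_normal((r, d)) for _ in range(2)]
B = [rng.standard_normal((d, r)) for _ in range(2)]
deltas = [B[t] @ A[t] for t in range(2)]

def superpose(deltas, phases):
    """Mix per-task updates with complex amplitudes exp(i * phi_t);
    the real part of the sum is the effective weight update."""
    amps = [np.exp(1j * p) for p in phases]
    mixed = sum(a * dW for a, dW in zip(amps, deltas))
    return mixed.real

# In-phase tasks interfere constructively (contributions add)...
constructive = superpose(deltas, [0.0, 0.0])
# ...while a pi phase shift flips task 1's sign (destructive interference).
destructive = superpose(deltas, [0.0, np.pi])

assert np.allclose(constructive, deltas[0] + deltas[1])
assert np.allclose(destructive, deltas[0] - deltas[1])
```

Under this toy model, tuning the phases would let a shared update amplify related tasks and cancel interfering ones, which is the intuition the abstract attributes to superposition.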
Serve As Reviewer: ~Xiaobing_Yu1
Submission Number: 24