APlaud: Adaptive Personalized Low-Rank Decomposition for User-Specific LLM

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Personalized Language Models, Parameter-Efficient Fine-Tuning (PEFT), Low-Rank Adaptation (LoRA), User-Specific Adaptation
Abstract: In this paper, we introduce and study the problem of \textit{personalized survey response prediction} using fine-tuned large language models (LLMs). This task poses unique challenges: limited per-user training data, the storage cost of maintaining a separately adapted model for every user, and the need to exploit structure shared across survey questions. To address these issues, we propose \textbf{APlaud} (Adaptive Personalized Low-rank and User-specific Nested Decomposition), a lightweight and scalable framework for LLM personalization. APlaud extends the LoRA paradigm by separating adaptation into a frozen, shared low-rank basis and a compact user-specific correction, augmented with a rank-one residual for finer personalization. To further reduce per-user parameter cost and mitigate overfitting, the correction matrix can be factorized into an even lower-rank form. Empirical results demonstrate that APlaud achieves efficient, scalable personalization across users while outperforming state-of-the-art LoRA-based personalized LLM approaches in both generalization and inference efficiency.
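To make the adapter structure described in the abstract concrete, the sketch below gives one plausible PyTorch reading of it: a frozen shared LoRA pair, a small user-specific correction nested between the two shared factors (itself factorized to an even lower rank), and a rank-one residual. All module names, shapes, and initialization choices here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class APlaudAdapterSketch(nn.Module):
    """Hypothetical APlaud-style adapter for one linear layer.
    (A, B): shared low-rank basis, frozen after joint training across users.
    (U, V): per-user correction nested in the shared subspace, factorized
            to rank r_user <= r to cut per-user parameters.
    (u, v): per-user rank-one residual for finer personalization."""

    def __init__(self, d_in: int, d_out: int, r: int = 8, r_user: int = 2):
        super().__init__()
        # Shared basis (frozen): plays the role of a standard LoRA pair.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01, requires_grad=False)
        self.B = nn.Parameter(torch.zeros(d_out, r), requires_grad=False)
        # User-specific correction, factorized to an even lower rank.
        self.U = nn.Parameter(torch.randn(r, r_user) * 0.01)
        self.V = nn.Parameter(torch.zeros(r_user, r))
        # User-specific rank-one residual.
        self.u = nn.Parameter(torch.zeros(d_out, 1))
        self.v = nn.Parameter(torch.randn(1, d_in) * 0.01)

    def forward(self, x: torch.Tensor, base_out: torch.Tensor) -> torch.Tensor:
        # x: input activations (..., d_in); base_out: frozen layer output (..., d_out)
        h = x @ self.A.T                          # project into shared subspace
        h = h + h @ (self.U @ self.V).T           # nested user-specific correction
        delta = h @ self.B.T                      # map back through shared basis
        delta = delta + (x @ self.v.T) @ self.u.T # rank-one residual
        return base_out + delta
```

Under this reading, only the correction factors and the rank-one pair are stored per user, while the shared basis is stored once, which is consistent with the abstract's claims about per-user parameter cost and storage scalability.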
Primary Area: foundation or frontier models, including LLMs
Submission Number: 14152