When 13 Parameters Aren't Enough: Task-Dependent Expressivity Floors in Ultra-Low-Parameter LLM Adaptation

22 Apr 2026 (modified: 09 May 2026) · ICML 2026 Workshop CoLoRAI Submission · CC BY 4.0
Keywords: Parameter-Efficient Fine-Tuning, LoRA, TinyLoRA, Low-Rank Adaptation, Intrinsic Dimensionality, LLM Adaptation, Expressivity, Chess, PEFT
TL;DR: TinyLoRA's 13-parameter adapter works for math but fails completely on chess, revealing a task-dependent expressivity floor tied to base-model competence.
Abstract: Can a large language model be meaningfully adapted with just 13 trainable parameters? Recent work on TinyLoRA suggests yes: a single shared vector, projected through frozen random matrices, improves mathematical reasoning by 15 percentage points. But does this remarkable efficiency generalise across tasks, or does it depend on what the model already knows? We investigate this question by applying the same ultra-low-parameter adapter across three domains that span the full spectrum of base-model competence: from mathematics, where the model already performs well, to chess puzzle solving, where it starts from near-zero. The result is a sharp dichotomy: TinyLoRA maintains or briefly improves accuracy on tasks within the model's existing competence, yet achieves exactly 0% on chess, a failure that persists across four orders of magnitude of parameter scaling, two training paradigms, and two model sizes. In contrast, standard LoRA at the same rank but with independent per-layer parameterisation reaches 36% accuracy on the same puzzles. These findings establish a prior-dependent expressivity floor: the effective intrinsic dimensionality of fine-tuning is not a fixed constant but varies with the gap between the base model's competence and the task's demands. Ultra-low-parameter adaptation succeeds when the task requires navigating existing knowledge and fails categorically when it demands qualitatively new computation.
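To make the adapter construction in the abstract concrete, the following is a minimal sketch of a TinyLoRA-style layer, assuming only what the abstract states: a single shared trainable vector (13 parameters here) is projected through frozen random matrices into per-layer low-rank weight updates. The class name, shapes, scaling, and projection scheme are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a TinyLoRA-style adapter: one shared trainable vector,
# frozen random projections per layer, frozen base weights.
import torch
import torch.nn as nn


class TinyLoRALinear(nn.Module):
    """Frozen linear layer plus a rank-r update whose low-rank factors are
    generated from a small shared parameter vector via fixed random projections."""

    def __init__(self, base: nn.Linear, shared: nn.Parameter, rank: int = 1, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # base weights stay frozen
        self.shared = shared         # the only trainable parameters, shared across layers
        d = shared.numel()
        out_f, in_f = base.out_features, base.in_features
        # Frozen random projections: shared vector -> factors A (rank x in) and B (out x rank).
        self.register_buffer("proj_a", torch.randn(rank * in_f, d) / d**0.5)
        self.register_buffer("proj_b", torch.randn(out_f * rank, d) / d**0.5)
        self.rank, self.scale = rank, scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out_f, in_f = self.base.out_features, self.base.in_features
        a = (self.proj_a @ self.shared).view(self.rank, in_f)   # rank x in
        b = (self.proj_b @ self.shared).view(out_f, self.rank)  # out x rank
        return self.base(x) + self.scale * (x @ a.t() @ b.t())  # frozen path + generated update


# Usage: the same 13-parameter vector adapts every wrapped layer.
shared = nn.Parameter(torch.zeros(13))
layer1 = TinyLoRALinear(nn.Linear(64, 64), shared)
layer2 = TinyLoRALinear(nn.Linear(64, 64), shared)
y = layer2(layer1(torch.randn(2, 64)))
print(shared.numel())  # 13 trainable parameters in total, shared across both layers
```

The contrast with standard LoRA in the abstract is that a rank-matched LoRA baseline would instead train independent A and B factors in every layer, so its parameter count grows with depth and width rather than staying fixed at the size of the shared vector.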
Submission Number: 8