Abstract: Orthogonal variants of LoRA are typically justified as preserving the geometry of the low-rank adaptation subspace in parameter space. For retrieval embedding models, however, which are evaluated directly in embedding space, it remains unclear whether this parameter-space geometry is itself what matters or whether the decisive factor is the geometry induced in the resulting embeddings. We present a controlled study of 12 LoRA-family methods on implicit concept retrieval and two BEIR passage-retrieval tasks, combining retrieval metrics with weight-space and embedding-space diagnostics. The comparison shows that standard LoRA often collapses the effective rank of its update, but recovering effective rank alone does not reliably recover retrieval quality.
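The effective-rank diagnostic mentioned above can be made concrete. One common definition, due to Roy & Vetterli (2007), is the exponential of the entropy of the normalised singular-value distribution; the sketch below, assuming PyTorch and illustrative factor shapes, applies it to a LoRA update ΔW = BA. The function name and shapes are our own illustration, not the paper's code.

```python
import torch

def effective_rank(delta_w: torch.Tensor, eps: float = 1e-12) -> float:
    """Effective rank: exp of the entropy of the normalised singular-value
    distribution (Roy & Vetterli, 2007)."""
    s = torch.linalg.svdvals(delta_w)
    p = s / (s.sum() + eps)              # normalise singular values to a distribution
    entropy = -(p * torch.log(p + eps)).sum()
    return float(torch.exp(entropy))

# Illustrative example: the update of a rank-8 LoRA adapter on a 768x768 map.
B = torch.randn(768, 8)  # "up" factor (hypothetical shapes)
A = torch.randn(8, 768)  # "down" factor
print(effective_rank(B @ A))  # at most 8; rank collapse shows up as a value near 1
```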
In our primary compact-encoder setting, the strongest retrieval results arise when two conditions are met: the update remains orthogonal during training, and its initial directions are aligned with the pretrained spectral subspace. Motivated by this finding, we introduce GeoLoRA, a minimal adapter that combines Stiefel-constrained factors, SVD-aligned initialisation, and a learnable diagonal spectral bridge. GeoLoRA improves over the main LoRA-family baselines in our primary ELSST setting, while the advantage weakens or disappears on less geometry-sensitive tasks and on the larger backbones we study. Our study clarifies when orthogonality helps retrieval, provides a controlled instantiation of the identified ingredients, and offers a diagnostic toolkit for future embedding-centric PEFT work.
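To make the three ingredients concrete, here is a minimal PyTorch sketch of an adapter of this shape. The class name GeoLoRALinear, the rank r, and the zero initialisation of the bridge are our illustrative assumptions based only on the abstract's description (Stiefel-constrained factors via an orthogonal parametrization, SVD-aligned initialisation, learnable diagonal bridge); the paper's actual implementation may differ.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal


class GeoLoRALinear(nn.Module):
    """Frozen linear layer plus an orthogonal, SVD-aligned low-rank update
    W + U diag(s) V^T (a sketch of the ingredients named in the abstract)."""

    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)

        out_f, in_f = base.weight.shape
        # SVD-aligned initialisation: take the top-r spectral subspace of
        # the pretrained weight.
        U0, _, Vh0 = torch.linalg.svd(base.weight.detach(), full_matrices=False)

        # Stiefel-constrained factors: the orthogonal parametrization keeps
        # the columns of U and V orthonormal throughout training.
        self.U = orthogonal(nn.Linear(r, out_f, bias=False))
        self.V = orthogonal(nn.Linear(r, in_f, bias=False))
        with torch.no_grad():
            self.U.weight = U0[:, :r]      # assignment sets the parametrised weight
            self.V.weight = Vh0[:r, :].T   # (requires orthonormal columns)

        # Learnable diagonal spectral bridge; zero init is our assumption so
        # the sketch starts exactly at the pretrained map.
        self.s = nn.Parameter(torch.zeros(r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + x (U diag(s) V^T)^T = base(x) + ((x V) * s) U^T
        delta = ((x @ self.V.weight) * self.s) @ self.U.weight.T
        return self.base(x) + delta
```

One design consequence of this wiring: with U and V held on the Stiefel manifold, the diagonal s carries the entire spectrum of the update, so the update's effective rank can be read off directly from s rather than inferred from a product of unconstrained factors.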
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yannis_Kalantidis2
Submission Number: 8524