Exploring Low-Rank Adaptation for Session-Specific Fine-tuning of Pretrained Neural Population Models

07 May 2026 (modified: 09 May 2026) · ICML 2026 Workshop CoLoRAI Submission · CC BY 4.0
Keywords: Low-Rank Adaptation, Neural Population Modeling, Low-Rank Fine-tuning, Cross-Session Transfer, Spatiotemporal Transformer
TL;DR: We explore LoRA as a fine-tuning strategy for adapting pretrained neural population models to single sessions, and analyze how rank and trial budget jointly shape adaptation performance.
Abstract: Neural population activity is shaped by latent dynamics that can be shared across sessions of the same behavioral task. This shared structure motivates joint pretraining across datasets, but adapting a pretrained model to a target session remains difficult when trials are scarce and recorded neurons differ across sessions. We study Low-Rank Adaptation (LoRA) for session-specific fine-tuning of a SpatioTemporal Neural Data Transformer (STNDT) pretrained across datasets of rat recording sessions. LoRA preserves the pretrained transformer backbone and updates only low-rank attention projections. Across trial budgets and ranks, LoRA remains competitive with full fine-tuning and matches or slightly exceeds it in the lowest-trial settings while using fewer trainable parameters. Performance saturates at a moderate rank, suggesting that restricted low-rank updates are effective for data-scarce neural adaptation.
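The abstract's core mechanism, freezing the pretrained backbone and training only low-rank updates to the attention projections, can be made concrete with a minimal PyTorch sketch. This is an illustrative implementation of standard LoRA, not the authors' code; the module and parameter names (LoRALinear, rank, alpha) are assumptions.

```python
# Minimal LoRA sketch: a frozen linear projection plus a trainable
# low-rank residual, W x + (alpha / rank) * B A x. Illustrative only.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained backbone fixed
        # Low-rank factors: B starts at zero so the update is initially a no-op.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen projection plus scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```

In a setup like the one described, each attention projection of the pretrained STNDT (e.g., the query and value matrices) would be wrapped this way, and only the lora_A/lora_B parameters would be optimized on the target session's trials, so the trainable parameter count scales with the chosen rank rather than the backbone size.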
Submission Number: 59