Learning to Re-think: Gated Recurrence with LoRA for Efficient and Effective Domain Incremental Learning

19 Sept 2025 (modified: 25 Sept 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Continual Learning, Adaptive Computation, Parameter-Efficient Fine-Tuning (PEFT), Foundation Models, Long-tail Learning, Medical Imaging
Abstract: Adapting large-scale foundation models for real-world medical Domain Incremental Learning (DIL) is challenged by data scarcity, significant domain shifts, and severe class imbalance. Current parameter-efficient methods often face a trade-off between knowledge integration, which risks task interference, and parameter isolation, which sacrifices forward transfer. To resolve this trade-off, we propose a framework that achieves both domain specialization and integrated knowledge transfer. Our two-tiered adaptive paradigm enables a foundation model to learn domain-specific representations while systematically transferring knowledge across a sequence of tasks. For intra-domain specialization, we introduce Recursive LoRA (RecLoRA), a dynamic-computation module in which a learnable router routes tokens through a shared LoRA block for iterative feature refinement, focusing computation on complex inputs. For inter-domain integration, our Sequential Knowledge Transfer strategy preserves domain-specific expertise by training independent RecLoRA modules for each task, while promoting forward transfer by initializing each task's modules with the converged weights of the previous task's modules. Built upon a frozen foundation model, our framework employs an efficient key-query mechanism for inference-time expert selection. We demonstrate that our approach sets a new state of the art on challenging diabetic retinopathy DIL benchmarks, validating its efficacy for real-world clinical applications.
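
To make the described mechanism concrete, below is a minimal, hedged sketch of the gated-recurrence idea from the abstract: a shared low-rank (LoRA-style) update applied recursively, with a learnable router gating how much refinement each token receives, followed by warm-start initialization across tasks. The class name `RecLoRASketch`, the 0.5 early-exit threshold, the feature dimension, and the step budget are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class RecLoRASketch(nn.Module):
    """Gated recurrent LoRA: a shared low-rank update applied for a variable
    number of refinement steps, gated per token by a learnable router."""

    def __init__(self, dim: int, rank: int = 8, max_steps: int = 3):
        super().__init__()
        # Shared LoRA block: the same down/up projection is reused at every step.
        self.lora_down = nn.Linear(dim, rank, bias=False)
        self.lora_up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.lora_up.weight)  # LoRA-style zero-init so training starts from the frozen backbone
        # Learnable router: scores each token to decide how much extra refinement it needs.
        self.router = nn.Linear(dim, 1)
        self.max_steps = max_steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) features from a frozen foundation-model layer.
        for _ in range(self.max_steps):
            gate = torch.sigmoid(self.router(x))      # (batch, tokens, 1), per-token recurrence gate
            update = self.lora_up(self.lora_down(x))  # shared low-rank refinement
            x = x + gate * update                     # gated residual update
            if (gate < 0.5).all():                    # assumed early exit: no token requests further compute
                break
        return x


# Sequential Knowledge Transfer (assumed form): the next domain's module is
# warm-started from the previous domain's converged weights, then trained
# independently so that domain-specific expertise is preserved.
prev_module = RecLoRASketch(dim=768)
# ... train prev_module on domain t ...
next_module = RecLoRASketch(dim=768)
next_module.load_state_dict(prev_module.state_dict())  # forward transfer via initialization
```

In the full framework these modules would sit alongside a frozen backbone, with the key-query mechanism mentioned in the abstract selecting which task's module to apply at inference time; that selection step is not shown here.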
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2026/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 20567