Information Transfer Across Clinical Tasks via Adaptive Parameter Optimisation

Published: 22 Jan 2025 · Last Modified: 09 Mar 2025 · AISTATS 2025 Oral · CC BY 4.0
TL;DR: This paper presents Adaptive Parameter Optimisation (APO), a framework that optimises shared models across tasks by dynamically learning task-specific and protected parameters, reducing conflicts and improving performance over traditional methods.
Abstract: This paper presents Adaptive Parameter Optimisation (APO), a novel framework for optimising shared models across multiple clinical tasks. APO addresses the challenge of balancing strict parameter sharing, which often leads to task conflicts, against soft parameter sharing, which may limit effective cross-task information exchange. The framework leverages insights from the lazy behaviour observed in over-parameterised neural networks, where only a small subset of parameters undergoes substantial updates during training. APO dynamically identifies and updates task-specific parameters while treating parameters claimed by other tasks as protected, limiting their modification to prevent interference; the remaining unclaimed parameters stay unchanged, embodying the lazy training phenomenon. This dynamic management of task-specific, protected, and unclaimed parameters enables effective information sharing, preserves task-specific adaptability, and mitigates gradient conflicts without enforcing a uniform representation. Experimental results across diverse healthcare datasets demonstrate that APO surpasses traditional information-sharing approaches, such as multi-task learning and model-agnostic meta-learning, in task performance.
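The parameter handling described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's implementation: the function names (`claim_parameters`, `apo_update`), the top-gradient-magnitude claiming heuristic (`top_frac`), and the damping factor on protected parameters (`protect_scale`) are all hypothetical stand-ins for whatever mechanism the paper actually uses.

```python
import torch

def claim_parameters(grad, claims, task_id, top_frac=0.1):
    """Let `task_id` claim the unclaimed coordinates with the largest
    gradient magnitudes (an assumed heuristic for 'dynamically
    identifying task-specific parameters'). `claims` holds -1 for
    unclaimed coordinates and the owning task id otherwise."""
    free = (claims == -1).nonzero(as_tuple=True)[0]
    if free.numel() == 0:
        return claims
    k = min(max(1, int(top_frac * claims.numel())), free.numel())
    top = free[grad[free].abs().topk(k).indices]
    claims[top] = task_id
    return claims

def apo_update(theta, grad, claims, task_id, lr=0.1, protect_scale=0.05):
    """One APO-style step on flat parameters `theta` for `task_id`:
    full updates on the task's own parameters, heavily damped updates
    on parameters protected for other tasks, and no update at all on
    unclaimed coordinates (the lazy / frozen subset)."""
    own = claims == task_id
    other = (claims >= 0) & ~own
    step = torch.zeros_like(theta)
    step[own] = lr * grad[own]
    step[other] = protect_scale * lr * grad[other]
    return theta - step

# Toy usage: two tasks sharing a 10-dimensional parameter vector.
theta = torch.randn(10)
claims = torch.full((10,), -1, dtype=torch.long)
for task_id in (0, 1):
    grad = torch.randn(10)  # stand-in for this task's loss gradient
    claims = claim_parameters(grad, claims, task_id)
    theta = apo_update(theta, grad, claims, task_id)
```

The three-way split in `apo_update` mirrors the abstract's description: each task freely updates its own parameters, barely touches parameters protected for other tasks, and leaves unclaimed parameters untouched, which is what permits information sharing without enforcing a uniform representation.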
Submission Number: 1234