Abstract: While multi-task learning (MTL) has been widely developed for natural image analysis, its potential for improving performance in medical imaging remains relatively unexplored. Most methods formulate MTL as a multi-objective problem, which inherently forces all tasks to compete with one another during optimization. In this work, we instead formulate MTL as a multi-level optimization problem, in which the features learned for one task are refined by drawing on the other tasks. Specifically, we advocate a cooperative approach in which each task takes the features of the others into account, so that each task can improve its own performance without degrading the rest. To this end, we introduce a novel optimization strategy that seeks flat minima for each sub-problem, encouraging sub-models that remain robust to changes in the other sub-models. We demonstrate the advantages of the proposed method through comprehensive parameter and comparison studies on the OrganCMNIST dataset. We further evaluate it on three eye-related medical image datasets, comparing its performance against state-of-the-art MTL approaches. The results show that our method outperforms existing approaches, highlighting its potential for training multi-purpose models in medical image analysis.
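The abstract does not spell out how the flat minima are sought. Purely as an illustration of the general idea, the sketch below shows a sharpness-aware (SAM-style) update, one common way to bias a single sub-model toward flat regions of its loss; this is an assumption about the mechanism, not the authors' code, and the names sub_model, task_loss, optimizer, and the radius rho are hypothetical placeholders.

import torch

def flat_minima_step(sub_model, task_loss, batch, optimizer, rho=0.05):
    """One SAM-style update for a single sub-problem (illustrative sketch only).

    The weights are perturbed toward the locally sharpest direction within an
    L2 ball of radius rho, the gradient is recomputed at that perturbed point,
    and the descent step is then applied at the original weights.
    """
    inputs, targets = batch

    # Gradient at the current weights.
    optimizer.zero_grad()
    task_loss(sub_model(inputs), targets).backward()

    params = [p for p in sub_model.parameters() if p.grad is not None]
    with torch.no_grad():
        grad_norm = torch.norm(torch.stack([p.grad.norm(p=2) for p in params])) + 1e-12
        eps = [rho * p.grad / grad_norm for p in params]
        for p, e in zip(params, eps):
            p.add_(e)  # ascend to the nearby "sharp" point

    # Gradient at the perturbed weights.
    optimizer.zero_grad()
    task_loss(sub_model(inputs), targets).backward()

    # Restore the original weights, then descend using the perturbed gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()

In a cooperative MTL setting, an update of this kind would be applied to one sub-problem at a time while the other sub-models are held fixed, which is consistent with the abstract's description of sub-models that stay robust to changes in one another.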