Keywords: Continual learning, Medical Artificial Intelligence, Prompt-based Continual Learning, Catastrophic Forgetting
TL;DR: UniPrompt-CL is a prompt-based continual learning framework for healthcare that reduces forgetting with a unified prompt pool and regularization. It improves accuracy and F1 while cutting inference cost, enabling sustainable medical AI.
Abstract: Although modern AI models achieve state-of-the-art performance with large-scale datasets, strict ethical and institutional constraints in medicine make centralised learning nearly impossible. Institutions must therefore rely on local data, but traditional training methods quickly overfit new samples and suffer from catastrophic forgetting, making continual learning (CL) essential. While CL has advanced in the field of natural images, prompt-based continual learning (PCL) remains largely unexplored in medical applications. We present UniPrompt-CL, the first PCL framework designed specifically for healthcare. Preliminary experiments show that existing PCL approaches perform poorly on medical datasets, motivating our hypothesis that a more effective prompt pool design is needed. UniPrompt-CL introduces a unified prompt pool with minimal expansion and a novel regularisation term, reducing computation while balancing stability and plasticity. On three diabetic retinopathy datasets (APTOS, DDR and DRD), UniPrompt-CL improves accuracy by at least 10% and the F1 score by 9 points compared to previous methods, while reducing inference cost. It also achieves superior performance on continual learning evaluation metrics. These results demonstrate that UniPrompt-CL lays a foundation for sustainable medical AI, enabling consistently high performance in distributed healthcare environments. To ensure reproducibility, the code and all training configurations can be found in this repository.
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 5141