Achieving flexible fairness metrics in federated medical imaging

Huijun Xing, Rui Sun, Jinke Ren, Jun Wei, Chun-Mei Feng, Xuan Ding, Zilu Guo, Yu Wang, Yudong Hu, Wei Wei, Xiaohua Ban, Chuanlong Xie, Yu Tan, Xian Liu, Shuguang Cui, Xiaohui Duan, Zhen Li

Published: 08 Apr 2025, Last Modified: 05 Nov 2025 · Nature Communications · CC BY-SA 4.0
Abstract: The rapid adoption of Artificial Intelligence (AI) in medical imaging raises fairness and privacy concerns across demographic groups, especially in diagnosis and treatment decisions. While federated learning (FL) offers decentralized privacy preservation, current frameworks often prioritize collaboration fairness over group fairness, risking healthcare disparities. Here we present FlexFair, an innovative FL framework designed to address both fairness and privacy challenges. FlexFair incorporates a flexible regularization term to facilitate the integration of multiple fairness criteria, including equal accuracy, demographic parity, and equal opportunity. Evaluated across four clinical applications (polyp segmentation, fundus vascular segmentation, cervical cancer segmentation, and skin disease diagnosis), FlexFair outperforms state-of-the-art methods in both fairness and accuracy. Moreover, we curate a multi-center dataset for cervical cancer segmentation that includes 678 patients from four hospitals. This diverse dataset allows for a more comprehensive analysis of model performance across different population groups, ensuring the findings are applicable to a broader range of patients.

Editorial summary: Achieving fairness while preserving privacy in medical imaging tasks remains a significant challenge. Here, the authors present and comprehensively evaluate a federated learning framework to tackle both fairness and privacy issues, using a flexible regularization term to integrate multiple fairness criteria.
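To make the idea of a "flexible regularization term" concrete, the sketch below shows one plausible way a per-group fairness penalty could be added to a client's local training loss, switching between equal accuracy, demographic parity, and equal opportunity. This is an illustrative sketch only, not the authors' FlexFair implementation: the function names, the `lambda_fair` weight, the use of a variance-across-groups penalty, and the group-label encoding are all assumptions for exposition.

```python
# Illustrative sketch (NOT the FlexFair code): a fairness-regularized local
# objective of the kind the abstract describes. All names and hyperparameters
# (fairness_penalty, lambda_fair, the variance penalty) are hypothetical.
import torch
import torch.nn.functional as F


def fairness_penalty(logits, targets, groups, criterion="equal_accuracy"):
    """Penalize disparities between demographic groups.

    logits : (N, C) model outputs
    targets: (N,)  ground-truth class labels
    groups : (N,)  integer demographic-group ids
    """
    stats = []
    for g in groups.unique():
        mask = groups == g
        if criterion == "equal_accuracy":
            # per-group loss; equalizing it approximates equal accuracy
            stats.append(F.cross_entropy(logits[mask], targets[mask]))
        elif criterion == "demographic_parity":
            # per-group positive-prediction rate (probability of class 1)
            stats.append(torch.softmax(logits[mask], dim=1)[:, 1].mean())
        elif criterion == "equal_opportunity":
            # true-positive-rate proxy among this group's positive samples
            pos = mask & (targets == 1)
            if pos.any():
                stats.append(torch.softmax(logits[pos], dim=1)[:, 1].mean())
    if len(stats) < 2:
        # fewer than two groups observed: nothing to equalize
        return torch.zeros((), device=logits.device)
    stats = torch.stack(stats)
    # variance across groups -> zero when every group is treated alike
    return stats.var(unbiased=False)


def local_objective(model, x, y, groups, lambda_fair=0.5,
                    criterion="demographic_parity"):
    """Task loss plus a pluggable fairness regularizer (one per fairness notion)."""
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    return task_loss + lambda_fair * fairness_penalty(logits, y, groups, criterion)
```

Under this reading, switching the fairness criterion only changes the per-group statistic being equalized, while the rest of the federated training loop (local updates followed by server-side aggregation) stays unchanged.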