Abstract: Artificial intelligence, and in particular deep learning, has shown great potential in the field of medical imaging. Deep learning models can analyze radiology and pathology images to assist physicians with tasks in the clinical workflow such as disease detection, medical intervention, treatment planning, and prognosis, to name a few. Accurate and generalizable deep learning models are in high demand but require large and diverse sets of data. Diversity in medical images means images collected at various institutions, using a variety of devices and parameter settings, from diverse patient populations. Thus, producing a diverse data set of medical images requires multiple institutions to share their data. Despite the universal acceptance of Digital Imaging and Communications in Medicine (DICOM) as a common image storage format, sharing large numbers of medical images between multiple institutions remains a challenge. One of the main reasons is the strict regulation of the storage and sharing of personally identifiable health data, including medical images. Currently, large data sets are usually collected with the participation of a handful of institutions after rigorous de-identification to remove personally identifiable data from medical images and patient health records. De-identification is time-consuming, expensive, and error-prone, and in some cases it can remove useful information. Federated learning has emerged as a practical solution for training AI models on large multi-institutional data sets without sharing the data, thereby removing the need for de-identification while satisfying the relevant regulations. In this chapter, we present several examples of federated learning for medical imaging using IBM Federated Learning.