Deepfakes for Histopathology Images: Myth or Reality?
Abstract: Deepfakes have become a major public concern on the Internet, as fake images and videos can be used to spread misleading information about a person or an organization. In this paper, we explore whether deepfakes can be generated for histopathology images using advances in deep learning. The question is timely because digital pathology has been gaining momentum since the Food and Drug Administration (FDA) approved several digital pathology systems for primary diagnosis and consultation in the United States. Specifically, we investigate whether state-of-the-art generative adversarial networks (GANs) can produce fake histopathology images that can trick an expert pathologist. For our investigation, we used whole slide images (WSIs) hosted by The Cancer Genome Atlas (TCGA). We selected 3 WSIs of colon cancer patients and produced 100,000 patches of 256×256 pixels. We trained three popular GANs to generate fake patches of the same size and then constructed a set of images containing 30 real and 30 fake patches. An expert pathologist reviewed these images and marked each one as either real or fake. We observed that the pathologist marked 10 fake patches as real and correctly identified 34 patches (as fake or real). Thirteen patches were incorrectly identified as fake, and the pathologist was unsure of 3 fake patches. Interestingly, the fake patches that were correctly identified by the pathologist had missing morphological features, abrupt background changes, pleomorphism, and other incorrect artifacts. Our investigation shows that while certain parts of a histopathology image can be mimicked by existing GANs, the intricacies of the stained tissue and cells cannot be fully captured by them. Unlike radiology, where it is relatively easy to manipulate an image using a GAN, we argue that generating an entirely fake WSI is a much harder challenge in digital pathology.
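The abstract describes sampling 100,000 patches of 256×256 pixels from 3 TCGA colon-cancer WSIs. A minimal sketch of that patch-extraction step is shown below, assuming OpenSlide is used to read the slides; the library, the background-filtering heuristic, the per-slide patch split, and the file names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of sampling 256x256 patches from WSIs, assuming the
# openslide-python library. The tissue filter, patch split across slides,
# and slide file names below are hypothetical.
import random

import numpy as np
import openslide

PATCH_SIZE = 256           # patch side length in pixels, as stated in the abstract
PATCHES_PER_SLIDE = 33334  # roughly 100,000 patches over 3 WSIs (assumed even split)


def extract_patches(wsi_path, n_patches, seed=0):
    """Randomly sample RGB patches of PATCH_SIZE x PATCH_SIZE from level 0 of a WSI."""
    rng = random.Random(seed)
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions
    patches = []
    while len(patches) < n_patches:
        x = rng.randrange(0, width - PATCH_SIZE)
        y = rng.randrange(0, height - PATCH_SIZE)
        region = slide.read_region((x, y), 0, (PATCH_SIZE, PATCH_SIZE)).convert("RGB")
        patch = np.asarray(region)
        # Skip mostly-white (background) regions so patches contain stained tissue.
        if patch.mean() < 220:
            patches.append(patch)
    slide.close()
    return patches


if __name__ == "__main__":
    # Hypothetical file names; the actual TCGA slide identifiers are not listed in the abstract.
    for path in ["slide_1.svs", "slide_2.svs", "slide_3.svs"]:
        patches = extract_patches(path, PATCHES_PER_SLIDE)
        print(path, len(patches))
```

The resulting patches would serve as the real-image training set for the GANs, which in turn generate fake patches of the same 256×256 size for the pathologist review described above.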