Addressing Vulnerability in Medical Deep Learning through Robust Training

Published: 01 Jan 2023, Last Modified: 08 Oct 2025, CAI 2023, CC BY-SA 4.0
Abstract: Deep neural networks have been incorporated into healthcare for diagnosing and detecting medical conditions. However, studies have shown that the vulnerability of neural networks to adversarial attacks and noise remains a pervasive problem, one that compromises both the trust of medical practitioners and the accuracy of diagnosis, prognosis, and outcome prediction by such systems. In this study we show that robust training methods can help models perform more robustly not only against adversarial attacks, but also against noise and calibration errors.
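The abstract does not specify which robust training method is used; a common instance is adversarial training, where the model is updated on inputs perturbed by an attack such as the Fast Gradient Sign Method (FGSM). The sketch below is purely illustrative and is not the paper's implementation: it trains a toy logistic-regression classifier on FGSM-perturbed inputs, with all function names and hyperparameters being assumptions.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: shift input x by eps in the sign of the loss gradient w.r.t. x.
    Toy logistic-regression setup (illustrative, not the paper's model)."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability
    grad_x = (p - y) * w              # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Robust-training sketch: each SGD step uses an adversarially
    perturbed copy of the example instead of the clean one."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = fgsm_perturb(xi, yi, w, b, eps)  # attack current model
            z = x_adv @ w + b
            p = 1.0 / (1.0 + np.exp(-z))
            w -= lr * (p - yi) * x_adv               # gradient step on adversarial input
            b -= lr * (p - yi)
    return w, b
```

Training on worst-case perturbations within an epsilon-ball is what tends to flatten the loss surface around inputs, which is one mechanism by which robust training can also reduce sensitivity to random noise.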