Towards lifting the trade-off between accuracy and adversarial robustness of deep neural networks, with application to COVID-19 CT image classification and medical image segmentation
Abstract: Deep neural networks (DNNs) are vulnerable to adversarial noise. Adversarial training is a general strategy for improving DNN robustness; however, training a DNN on adversarial noise can substantially reduce its accuracy on clean data. Towards lifting this trade-off, we propose an adversarial training method that generates optimal adversarial training samples. We evaluate our method on four public medical datasets, using popular deep learning models for image classification and segmentation tasks. The results show that, compared with other defense methods, our method achieves the best robustness against adversarial noise with minimal degradation in clean-data accuracy.
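To make the adversarial-training idea discussed in the abstract concrete, the sketch below shows the generic adversarial training loop on a toy logistic-regression model using FGSM (fast gradient sign method) perturbations in NumPy. This is a minimal, hypothetical illustration of standard adversarial training only; it does not reproduce the paper's method for generating optimal adversarial training samples. All function names, data, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM attack: move each sample one eps-step along the sign of
    the gradient of the binary cross-entropy loss w.r.t. the input."""
    p = sigmoid(x @ w + b)
    grad_x = np.outer(p - y, w)  # d(BCE)/dx, row per sample
    return x + eps * np.sign(grad_x)

def adversarial_train(x, y, eps=0.2, lr=0.5, steps=200):
    """Generic adversarial training loop: at each step, perturb the
    batch with FGSM, then take a gradient step on the perturbed batch."""
    w = np.zeros(x.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(steps):
        x_adv = fgsm(x, y, w, b, eps)       # adversarial training samples
        p = sigmoid(x_adv @ w + b)
        w -= lr * x_adv.T @ (p - y) / n     # gradient of mean BCE w.r.t. w
        b -= lr * np.mean(p - y)            # gradient of mean BCE w.r.t. b
    return w, b

# Toy linearly separable two-class data (stand-in for a real dataset).
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(-1.0, 0.3, size=(50, 2)),
               rng.normal(+1.0, 0.3, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = adversarial_train(x, y, eps=0.2)
clean_acc = np.mean((sigmoid(x @ w + b) > 0.5) == y)
adv_acc = np.mean((sigmoid(fgsm(x, y, w, b, 0.2) @ w + b) > 0.5) == y)
print(clean_acc, adv_acc)
```

On this well-separated toy problem the adversarially trained model stays accurate on both clean and FGSM-perturbed inputs; the trade-off the paper targets appears when the perturbation budget is large relative to the class margin, as is common in real image data.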