Abstract: We consider the problem of augmenting a training set of histological images with adversarial examples so that neural network classifiers trained on the augmented set become more robust to adversarial attacks. Neural network methods have advanced rapidly in recent years and achieve impressive results, yet they remain vulnerable to so-called adversarial attacks: they make incorrect predictions on input images perturbed by carefully crafted, imperceptible noise. The reliability of neural network methods therefore remains an important area of research. In this paper, we compare different training set augmentation methods for improving the robustness of neural histological image classifiers against adversarial attacks. For this purpose, we augment the training set with adversarial examples generated by several popular attack methods.
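To illustrate the kind of augmentation the abstract describes, below is a minimal sketch of adversarial training-set augmentation using FGSM, one widely used attack method. This is not the paper's exact setup: the toy model, patch size, and epsilon value are illustrative assumptions, and PyTorch is assumed as the framework.

```python
# Sketch: augmenting a training batch with FGSM adversarial examples.
# The model, input shape, and epsilon below are hypothetical placeholders.
import torch
import torch.nn as nn

def fgsm_examples(model: nn.Module, images: torch.Tensor,
                  labels: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Generate FGSM adversarial examples: x_adv = x + eps * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that increases the classifier's loss.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep perturbed pixels in a valid range

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # toy classifier
    x = torch.rand(8, 3, 32, 32)   # stand-in for histological image patches
    y = torch.randint(0, 2, (8,))
    # Augmentation: train on the union of clean and adversarial batches.
    x_aug = torch.cat([x, fgsm_examples(model, x, y)])
    y_aug = torch.cat([y, y])
    print(x_aug.shape, y_aug.shape)  # twice the original batch size
```

In the paper's setting, the same pattern would be repeated with each of the attack methods under comparison, and the resulting augmented sets used to retrain the classifiers whose robustness is then evaluated.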