On Adversarial Training with Incorrect Labels

Published: 01 Jan 2024, Last Modified: 11 Feb 2025 · WISE (4) 2024 · CC BY-SA 4.0
Abstract: In this work, we study adversarial training in the presence of incorrectly labeled data. Specifically, we examine the predictive performance of an adversarially trained Machine Learning (ML) model when the labels of the training data and of the adversarial examples contain errors, compared to training on clean data. Such erroneous labels may arise organically from a flawed labeling process, or may be introduced maliciously, akin to a poisoning attack.
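The setting described above can be illustrated with a minimal sketch: adversarial training of a logistic-regression model (via FGSM-style perturbations) where a fraction of the training labels has been flipped. All data, the noise rate, and the model are hypothetical choices for illustration, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, linearly separable binary classification data (assumed setup).
n, d = 500, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

# Inject erroneous labels: flip a fraction of them, mimicking either a
# flawed labeling process or a label-flipping poisoning attacker.
noise_rate = 0.2
flip = rng.random(n) < noise_rate
y_noisy = np.where(flip, 1 - y, y)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# FGSM-style adversarial training: at each step, perturb inputs in the
# direction of the input gradient of the loss, then update the weights
# on the perturbed (adversarial) examples using the noisy labels.
w = np.zeros(d)
eps, lr = 0.1, 0.5
for _ in range(200):
    p = sigmoid(X @ w)
    # d(cross-entropy)/dx for logistic regression is (p - y) * w per sample.
    grad_x = np.outer(p - y_noisy, w)
    X_adv = X + eps * np.sign(grad_x)  # FGSM perturbation
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y_noisy) / n
    w -= lr * grad_w

# Evaluate against the true (clean) labels to see the effect of noise.
clean_acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

Varying `noise_rate` in this sketch is one way to probe how label errors in the adversarial-training loop degrade clean predictive performance.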