Abstract: The evolution of Artificial Intelligence (AI) holds immense promise for automating tasks, driving innovation, and positively impacting industries. However, as AI becomes increasingly pervasive, it is crucial to establish trust in algorithmic decision-making by eliminating bias and ensuring fair outcomes. Adversarial attacks pose a significant threat to AI systems, with backdoor attacks emerging as a particularly potent variant. Consequently, it becomes imperative to focus research efforts on mitigating discrimination and bias in AI models, emphasising ethical considerations and model interpretability. Adversarial machine learning (AML) is a critical field within AI that specifically addresses the vulnerabilities of machine learning models to manipulation and subversion by adversarial actors, and the countermeasures against such attacks. Previous research on AML focused on model vulnerabilities, attacks, and defences. However, the next phase of AML should incorporate human elements into AI design to address inherent social and demographic bias. The primary goal of this paper is to contribute to the field by conducting a systematic analysis of current trends, identifying future research directions, and proposing strategies to address and mitigate racial bias in generative image models. Furthermore, our aim is to foster social diversity and promote fairness in AI-generated outcomes.