Unveiling the Veil of Deception: An Insightful Journey into Adversarial Attacks and Defence Mechanisms in Deep Learning Networks

Published: 01 Jan 2023 · Last Modified: 17 Sep 2024 · IC3I 2023 · CC BY-SA 4.0
Abstract: Recent years have seen a surge of interest in Artificial Intelligence (AI) driven technologies, with fields such as Computer Vision, Autonomous Navigation, and Natural Language Processing experiencing transformative breakthroughs. Alongside these advances, however, a susceptibility to adversarial perturbations has come to light: despite their sophistication, Machine Learning models can be deceived into making erroneous inferences by small, often imperceptible manipulations of their input, a phenomenon known as an adversarial attack. Such attacks significantly limit the applicability of AI in security-critical sectors, so hardening AI systems against them has become a cardinal concern in the evolution of the field. This paper focuses on devising a model that safeguards against adversarial attacks while equipping users with the tools needed to build robust models of their own. The long-term goal is not only to enhance the reliability of AI but also to further its responsible and secure use in society.
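To make the notion of adversarial perturbation concrete, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest attacks of the kind the abstract describes. It assumes PyTorch; the toy model, the input shapes, and the `epsilon` budget are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Perturb input x to push the model toward misclassification,
    keeping the change small (bounded by epsilon per pixel)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp
    # back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with a hypothetical toy classifier on random "images":
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)       # batch of 4 fake 28x28 images
y = torch.randint(0, 10, (4,))     # fake labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())     # perturbation stays <= epsilon
```

The defining property, and the reason such attacks matter for security-critical deployments, is that the perturbation is bounded to be visually negligible yet is computed along the loss gradient, where the model is most sensitive.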