Enhancing Robustness in NIDS via Coverage Guided Fuzzing and Adversarial Training

Published: 2024 · Last Modified: 08 Jan 2026 · WISA 2024 · License: CC BY-SA 4.0
Abstract: AI-based Network Intrusion Detection Systems (NIDS) are essential for defending modern network environments against diverse traffic-based attacks. However, these systems, including the advanced Kitsune model, which employs an ensemble of small autoencoders for real-time intrusion detection, are susceptible to adversarial attacks. Such attacks manipulate the AI models by introducing adversarial examples, potentially leading to system failure. This paper introduces a novel methodology that applies coverage-guided fuzzing, a technique traditionally used in software vulnerability detection, to uncover adversarial examples in NIDS models. Furthermore, we present an innovative approach to enhancing the robustness of these systems through adversarial training on the identified adversarial examples. We demonstrate the effectiveness of our approach by applying it to Kitsune-based models with two open-source benchmark datasets. The experimental results indicate that the defense rate against adversarial attacks improved by up to 98%, showing that our methods not only detect critical vulnerabilities but also significantly strengthen the model's defenses against such attacks.
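To make the core idea concrete, the sketch below shows how coverage-guided fuzzing can search for adversarial (detection-evading) inputs against an anomaly detector. Everything here is a hypothetical stand-in, not the paper's implementation: the "detector" is a fixed linear projection whose reconstruction error plays the role of an autoencoder's anomaly score, `coverage` is a toy neuron-coverage measure over bottleneck activations, and `TAU` is an illustrative detection threshold. The fuzzer mutates a flagged malicious input, keeps mutants that reach unseen coverage patterns, and reports success when a mutant's score drops below the threshold. The adversarial-training half of the paper's method (retraining the model on the discovered examples) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical stand-in for an autoencoder NIDS (NOT the real Kitsune) ---
# "Benign" traffic features lie near a 4-d subspace of a 10-d feature space;
# the anomaly score is the reconstruction (projection) error, as in an AE.
W = rng.normal(size=(10, 4))
Wp = np.linalg.pinv(W)
P = W @ Wp                                   # projector onto col(W)

def score(x):
    """RMSE between x and its reconstruction; a high score means 'attack'."""
    return float(np.sqrt(np.mean((x - P @ x) ** 2)))

def coverage(x, fire=1.0):
    """Toy 'neuron coverage': the set of bottleneck units that fire strongly."""
    return frozenset(np.flatnonzero(np.abs(Wp @ x) > fire))

# A malicious input sits far from the benign subspace, so its score is high.
malicious = 3.0 * rng.normal(size=10)
TAU = 0.8 * score(malicious)                 # illustrative detection threshold

def fuzz(seed, iters=2000, step=0.2):
    """Coverage-guided search for an evading (adversarial) variant of seed."""
    corpus = [seed]                          # inputs kept for their coverage
    seen = {coverage(seed)}                  # coverage patterns observed so far
    best, best_score = seed, score(seed)
    for _ in range(iters):
        # Favor the current best input, but also draw from the coverage corpus.
        parent = best if rng.random() < 0.5 else corpus[rng.integers(len(corpus))]
        child = parent + step * rng.normal(size=10)      # random mutation
        if (cov := coverage(child)) not in seen:         # new coverage: keep it
            seen.add(cov)
            corpus.append(child)
        if (s := score(child)) < best_score:             # closer to evasion
            best, best_score = child, s
        if best_score < TAU:                 # scored benign: adversarial example
            break
    return best, best_score

adv, adv_score = fuzz(malicious)
print(f"seed score={score(malicious):.3f}  adversarial score={adv_score:.3f}")
```

In a real setting, mutations would also be constrained so the perturbed traffic remains valid and preserves its malicious behavior; this toy version skips that to keep the guidance loop visible.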