Abstract: In recent years, intrusion detection systems (IDSs) based on machine learning (ML) algorithms have developed rapidly. However, ML algorithms are vulnerable to adversarial examples, and many attackers add perturbations to the features of malicious traffic to evade ML-based IDSs. Unfortunately, most attack methods add perturbations without sufficient restrictions and thus generate impractical adversarial examples. In this paper, we propose RAAM, a restricted adversarial attack model that adds perturbations to traffic features to evade ML-based IDSs. RAAM employs an improved loss to enhance the adversarial effect, and uses a regularizer and masking vectors to restrict the perturbations. Compared with previous work, RAAM generates adversarial examples with superior characteristics: regularization, maliciousness, and small perturbation. We conduct experiments on the well-known NSL-KDD dataset and test against nine different ML-based IDSs. Experimental results show that the mean evasion increase rate (EIR) of RAAM is 94.1% in multiple attacks, which is 9.2% higher than that of DIGFuPAS, the best of the related methods. Notably, adversarial examples generated by RAAM have smaller perturbations: the mean perturbation distance ($L_2$) is 1.79, which is 0.81 lower than that of DIGFuPAS. In addition, we retrain the IDSs with adversarial examples to improve their robustness. Experimental results show that the retrained IDSs not only maintain their detection ability on original examples, but are also harder to attack again.
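The restriction mechanism the abstract describes pairs a masking vector (keeping non-modifiable traffic features fixed, so the example stays malicious) with an $L_2$ regularizer (keeping the perturbation small). The following is a minimal, hypothetical gradient-based sketch of that idea in PyTorch; the names (`masked_adversarial_perturbation`, `mask`, `lambda_reg`) are illustrative assumptions, and this does not reproduce RAAM's actual model or loss.

```python
import torch
import torch.nn.functional as F

def masked_adversarial_perturbation(x, model, mask,
                                    steps=50, lr=0.01, lambda_reg=0.1):
    """Hypothetical sketch: craft a restricted perturbation for traffic
    features x so an ML-based IDS `model` labels them benign (class 0),
    while a masking vector blocks changes to non-modifiable features
    and an L2 regularizer keeps the perturbation small."""
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    benign = torch.zeros(x.shape[0], dtype=torch.long)  # target label

    for _ in range(steps):
        # Masking vector zeroes perturbations on features that must stay
        # unchanged to preserve the traffic's maliciousness.
        x_adv = x + delta * mask
        logits = model(x_adv)
        # Adversarial term pushes the IDS toward the benign label;
        # the L2 term penalizes large perturbations.
        loss = F.cross_entropy(logits, benign) \
             + lambda_reg * torch.norm(delta * mask, p=2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (x + delta * mask).detach()
```

In this sketch, the `mask` encodes the domain restriction directly in the optimization, so the search only ever explores feasible perturbations rather than filtering infeasible ones afterward.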