AdvTG: An Adversarial Traffic Generation Framework to Deceive DL-Based Malicious Traffic Detection Models

Published: 29 Jan 2025 (Last Modified: 29 Jan 2025) · WWW 2025 Poster · CC BY 4.0
Track: Security and privacy
Keywords: Malicious Traffic Detection, Adversarial Attacks, Large Language Model, Reinforcement Learning
TL;DR: This paper proposes an adversarial traffic generation framework based on the large language model and reinforcement learning to deceive DL-based malicious traffic detection models.
Abstract: Deep learning-based (DL-based) malicious traffic detection methods are effective but vulnerable to adversarial attacks. Existing adversarial attack methods have shown promising results when targeting traffic detection models based on statistical and sequence features. However, these methods are less effective against models that rely on payload analysis. The main reason is the difficulty of generating payloads that are semantically meaningful, protocol-compliant, and functional, which limits their practical application. In this paper, we propose AdvTG, an adversarial traffic generation framework based on large language models (LLMs) and reinforcement learning (RL). Specifically, AdvTG is designed to attack various DL-based detection models across diverse features and architectures, thereby enhancing the generalization capability of the generated adversarial traffic. Moreover, we design a specialized prompt for payload generation tasks, in which functional fields and target types are supplied as input while non-functional fields are generated to produce the mutated traffic. This fine-tuning endows the LLM with task comprehension and traffic pattern reasoning abilities, allowing it to generate traffic that remains compliant and functional. Furthermore, leveraging RL, AdvTG automatically selects traffic fields that exhibit more robust adversarial properties. Experimental results show that AdvTG achieves over 40% attack success rate (ASR) across six detection models on four base datasets and two extended datasets, significantly outperforming other adversarial attack methods.
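The prompt design described in the abstract, where functional fields and a target type are supplied as input and non-functional fields are left for the model to generate, can be illustrated with a minimal sketch. The template wording, field names, and function below are hypothetical illustrations, not the paper's actual prompt:

```python
# Hypothetical sketch of an AdvTG-style payload-generation prompt:
# functional HTTP fields and a target traffic type go in; the LLM is
# asked to fill in non-functional fields so the mutated request stays
# protocol-compliant and functional. Names here are illustrative only.
def build_payload_prompt(functional_fields: dict, target_type: str) -> str:
    # Preserve verbatim the fields that carry the request's semantics.
    kept = "\n".join(f"{k}: {v}" for k, v in functional_fields.items())
    return (
        f"Task: generate an HTTP request that a detector would classify "
        f"as '{target_type}'.\n"
        f"Keep these functional fields unchanged:\n{kept}\n"
        "Fill in plausible non-functional header fields so the request "
        "remains protocol-compliant and functional."
    )

prompt = build_payload_prompt(
    {"Method": "GET", "Path": "/index.php?id=1"},
    target_type="benign",
)
```

Under this reading, the RL component would then score candidate fields by how robustly they flip the detector's decision, steering which non-functional fields the LLM mutates.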
Submission Number: 534