SAFE-NID: Self-Attention with Normalizing-Flow Encodings for Network Intrusion Detection

TMLR Paper3600 Authors

30 Oct 2024 (modified: 31 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: Machine learning models are increasingly being adopted to monitor network traffic and detect network intrusions. In this paper, we develop a deep learning architecture for traffic monitoring at the packet level. Current network intrusion detection models typically struggle when faced with zero-day attacks and concept drift. We introduce SAFE-NID, a novel approach that addresses these challenges in practical deployments, achieving high efficacy at low latency. In contrast with previous work, we train a relatively lightweight encoder-only transformer architecture for the packet classification task. Our deep learning framework adds a normalizing-flow safeguard that quantifies uncertainty in the decisions made by the classification model: the generative model learns class-conditional densities over the internal features of the deep neural network, so inputs that are unlikely under every known class can be flagged as novel. We demonstrate the effectiveness of our approach by converting publicly available flow-level network intrusion datasets into packet-level ones. We release labeled packet-level versions of these datasets, each containing over 50 million packets, and describe the challenges in creating them. To simulate zero-day attacks, we withhold certain attack categories from the training data. Existing deep learning models, which achieve over 99% accuracy on known attacks, correctly classify only 1% of the novel attacks. Our proposed transformer architecture with the normalizing-flow safeguard achieves an area under the receiver operating characteristic curve (AUROC) of over 0.97 in detecting these novel inputs, outperforming existing combinations of neural architectures and model safeguards.
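The safeguard's scoring logic described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: each class-conditional density over classifier embeddings is modeled here with a full-covariance Gaussian as a simplified stand-in for a normalizing flow (a flow would expose the same interface, an exact per-class log-likelihood), and all class names and dimensions are hypothetical toy values.

```python
import numpy as np

class ClassConditionalDensitySafeguard:
    """Flags inputs whose classifier embeddings have low likelihood
    under every known class-conditional density.

    Each density is a full-covariance Gaussian -- a simplified stand-in
    for the paper's normalizing flows; a flow would replace
    `log_density` with its exact log-likelihood."""

    def fit(self, embeddings, labels):
        self.models = {}
        for c in np.unique(labels):
            x = embeddings[labels == c]
            mu = x.mean(axis=0)
            # Small ridge keeps the covariance invertible.
            cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
            self.models[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def log_density(self, emb, c):
        mu, prec, logdet = self.models[c]
        d = emb - mu
        k = emb.shape[-1]
        return -0.5 * (np.einsum('...i,ij,...j->...', d, prec, d)
                       + logdet + k * np.log(2 * np.pi))

    def novelty_score(self, embeddings):
        # High score = low likelihood under *all* known classes -> likely novel.
        lls = np.stack([self.log_density(embeddings, c) for c in self.models],
                       axis=-1)
        return -lls.max(axis=-1)

# Toy usage: two known traffic classes and one unseen "zero-day" cluster.
rng = np.random.default_rng(0)
known = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(5, 1, (200, 8))])
labels = np.array([0] * 200 + [1] * 200)
novel = rng.normal(-6, 1, (100, 8))

guard = ClassConditionalDensitySafeguard().fit(known, labels)
assert guard.novelty_score(novel).mean() > guard.novelty_score(known).mean()
```

Thresholding `novelty_score` yields the detector whose AUROC the paper reports; with a trained flow in place of the Gaussian, the score directly reflects the flow's learned density over the transformer's internal features.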
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Philip_K._Chan1
Submission Number: 3600