Keywords: intrusion detection, LLM agent, internet of things
TL;DR: We propose IDS-Agent, the first IDS based on an AI agent powered by large language models, featuring result explanation, customization, and adaptation to zero-day attacks
Abstract: Emerging threats to IoT networks have accelerated the development of intrusion
detection systems (IDSs), characterized by a shift from traditional approaches
based on attack signatures or anomaly detection to approaches based on machine
learning (ML). However, current ML-based IDSs often lack result explanations
and struggle to address zero-day attacks due to their fixed output label space. In
this paper, we propose IDS-Agent, the first IDS based on an AI agent powered
by large language models (LLMs). For each input network traffic sample and detection
request from the user, IDS-Agent predicts whether the traffic is benign or
malicious, together with an explanation of the prediction. The workflow of IDS-Agent
involves a core LLM iteratively reasoning over observations and generating
actions informed by the reasoning and retrieved knowledge. The action space of
IDS-Agent includes data extraction and preprocessing, classification, knowledge
retrieval, and result aggregation; these actions are executed using a rich set of
tools, most of them specialized for intrusion detection. Furthermore, IDS-Agent is equipped
a memory and knowledge base that retains information from current and pre-
vious sessions, along with IDS-related documents, enhancing its reasoning and
action generation capabilities. The system prompts of IDS-Agent can be easily
customized to adjust detection sensitivity or identify previously unknown types
of attacks. In our experiments, we demonstrate the strong detection capabilities
of IDS-Agent compared with ML-based IDSs and an LLM-based IDS built with
prompt engineering. IDS-Agent outperforms these state-of-the-art baselines on the ACI-IoT
and CIC-IoT benchmarks, with 0.97 and 0.75 detection F1 scores, respectively.
IDS-Agent also achieves a recall of 0.61 in detecting zero-day attacks.
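To make the described workflow concrete, the loop below is a minimal sketch of an IDS-Agent-style pipeline: the agent observes a traffic sample, then executes the abstract's action space of preprocessing, classification, knowledge retrieval, and result aggregation. All function names, the rule-based classifier, and the toy knowledge base are illustrative assumptions, not the paper's actual tools.

```python
def extract_features(packet):
    # Action: data extraction and preprocessing (stand-in for the real tools).
    return {"bytes": packet["bytes"], "rate": packet["rate"]}

def classify(features):
    # Action: classification. A hypothetical rule replaces the ML classifiers
    # the agent would actually invoke; here, high packet rate -> attack.
    return "attack" if features["rate"] > 1000 else "benign"

def retrieve_knowledge(label):
    # Action: knowledge retrieval from a toy knowledge base.
    kb = {
        "attack": "High packet rate resembles a DoS flood.",
        "benign": "Traffic matches normal usage patterns.",
    }
    return kb[label]

def ids_agent(packet):
    """Observe -> reason -> act, then aggregate a labeled, explained result."""
    features = extract_features(packet)
    label = classify(features)
    evidence = retrieve_knowledge(label)
    # Action: result aggregation, returning the prediction with its explanation.
    return {"label": label, "explanation": evidence}

print(ids_agent({"bytes": 1200, "rate": 5000}))
```

In the actual system, the choice and ordering of these actions would be driven by the core LLM's iterative reasoning rather than hard-coded, and the memory and knowledge base would persist across sessions.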
Supplementary Material: pdf
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9846