ExCyTIn-Bench: Evaluating LLM agents on Cyber Threat Investigation

18 Sept 2025 (modified: 26 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: LLM Agent, Benchmark, Cyber Security Investigation
TL;DR: We present ExCyTIn-Bench, the first benchmark to evaluate an LLM agent on the task of cyber threat investigation through security questions derived from investigation graphs.
Abstract: We present ExCyTIn-Bench, the first benchmark to evaluate an LLM agent on the task of cyber threat investigation through security questions derived from investigation graphs. Real‑world security analysts must sift through a large number of heterogeneous alert signals and security logs, follow multi‑hop chains of evidence, and compile an incident report. With the development of LLMs, building LLM-based agents for automatic threat investigation is a promising direction. To assist the development of LLM agents, we construct a benchmark from a controlled Azure tenant, including a SQL environment covering 57 log tables from Microsoft Sentinel and related services, and 589 automatically generated test questions. We leverage security logs extracted with expert-crafted detection logic to build threat investigation graphs, and then generate questions with LLMs using paired nodes on the graph, taking the start node as background context and the end node as answer. Anchoring each question to these explicit nodes and edges not only provides automatic, explainable ground truth answers but also makes the pipeline reusable and readily extensible to new logs. This also enables the automatic generation of procedural tasks with verifiable rewards, which can be naturally extended to training agents via reinforcement learning. Our comprehensive experiments with different models confirm the difficulty of the task: with the base setting, the average reward across all evaluated models is 0.249, and the best achieved is 0.368, leaving substantial headroom for future research.
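The node-pair question generation described in the abstract can be sketched with a toy example. The following is a minimal, hypothetical illustration of the idea: enumerate multi-hop reachable node pairs in an investigation graph, treat the start node as context and the end node as the answer, and keep the path as explainable ground truth. The graph contents, node names, and question template here are invented for illustration; the paper itself generates questions with LLMs over real Sentinel-derived graphs, not with a fixed template.

```python
from collections import deque

# Toy investigation graph (illustrative only): alert/entity nodes linked by
# evidence edges, in the spirit of the paper's threat investigation graphs.
GRAPH = {
    "alert:phishing_email": ["entity:user_jdoe"],
    "entity:user_jdoe": ["alert:suspicious_login"],
    "alert:suspicious_login": ["entity:host_ws01"],
    "entity:host_ws01": [],
}

def node_pairs_with_paths(graph, min_hops=2):
    """Yield (start, end, path) for every pair at least min_hops apart."""
    for start in graph:
        queue = deque([(start, [start])])  # BFS, tracking the evidence path
        seen = {start}
        while queue:
            node, path = queue.popleft()
            for nxt in graph.get(node, []):
                if nxt in seen:
                    continue
                seen.add(nxt)
                new_path = path + [nxt]
                if len(new_path) - 1 >= min_hops:
                    yield start, nxt, new_path
                queue.append((nxt, new_path))

def make_question(start, end, path):
    """Template a QA item: start node is context, end node is the answer."""
    return {
        "context": start,
        "question": (f"Starting from {start}, which entity is reached "
                     f"after {len(path) - 1} hops of evidence?"),
        "answer": end,
        # Explicit nodes/edges provide an automatic, explainable ground truth.
        "evidence_path": path,
    }

questions = [make_question(s, e, p) for s, e, p in node_pairs_with_paths(GRAPH)]
for q in questions:
    print(q["question"], "->", q["answer"])
```

Because every generated item carries its evidence path, an evaluator can verify an agent's answer (and intermediate hops) mechanically, which is what makes the rewards verifiable and suitable for reinforcement learning.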
Primary Area: datasets and benchmarks
Submission Number: 14100