Keywords: Benchmark, Security Vulnerability, Large Language Model
Abstract: Large Language Models (LLMs) have shown promise in various software engineering tasks, but evaluating their effectiveness in vulnerability detection remains challenging due to the lack of high-quality benchmark datasets. Most existing datasets are limited to function-level labels, ignoring finer-grained vulnerability patterns and crucial contextual information. They also often suffer from poor data quality, such as mislabeling, inconsistent annotations, and duplicates, which can inflate performance and weaken generalization. Moreover, by including only the vulnerable functions, these datasets omit broader program context, such as data/control dependencies and interprocedural interactions, that is essential for accurately detecting and understanding real-world security flaws. Without this context, detection models are evaluated under unrealistic assumptions, limiting their practical impact. To address these limitations, this paper introduces SECVULEVAL, a comprehensive benchmark designed to support fine-grained evaluation of LLMs and other detection methods with rich contextual information. SECVULEVAL focuses on real-world C/C++ vulnerabilities at the statement level. This granularity enables a more precise evaluation of a model's ability to localize and understand vulnerabilities, beyond simple binary classification at the function level. By incorporating rich contextual information, SECVULEVAL sets a new standard for benchmarking vulnerability detection in realistic software development scenarios. The benchmark comprises 25,440 function samples covering 5,867 unique CVEs in C/C++ projects from 1999 to 2024. We evaluate state-of-the-art (SOTA) LLMs using a multi-agent-based approach. The evaluation on our dataset shows that current models remain far from accurately predicting vulnerable statements in a given function.
The best-performing model, Claude-3.7-Sonnet, achieves a 23.83% F1-score for detecting vulnerable statements with correct reasoning, with GPT-4.1 close behind. We also evaluate the effect of contextual information on the vulnerability detection task. Finally, we analyze the LLM outputs and provide insights into their behavior in C/C++ vulnerability detection.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 14192