Abstract: The increasing adoption of LLMs for code-related tasks has raised concerns about the security of their training datasets. One critical threat is dead code poisoning, where syntactically valid but functionally redundant code is injected into training data to manipulate model behavior. Such attacks can degrade the performance of neural code search systems, leading to biased or insecure code suggestions. Existing detection methods, such as token-level perplexity analysis, fail to identify dead code effectively because they overlook the structural and contextual characteristics of programming languages. In this paper, we propose DePA (Dead Code Perplexity Analysis), a novel line-level detection and cleansing method tailored to the structural properties of code. DePA computes line-level perplexity by leveraging the contextual relationships between code lines and identifies anomalous lines by comparing their perplexity to the overall distribution within the file. Our experiments on benchmark datasets demonstrate that DePA significantly outperforms existing methods, achieving a 0.24-0.32 improvement in detection F1-score and a 0.54-0.77 increase in poisoned-segment localization precision.
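To make the detection idea concrete, the following is a minimal sketch of line-level perplexity screening in the spirit of the abstract, not the paper's actual implementation. The scoring model (`gpt2`), the helper names `line_perplexity` and `flag_anomalous_lines`, and the z-score cutoff are all illustrative assumptions; the paper's exact scoring and anomaly criterion may differ.

```python
# Minimal sketch: score each line's perplexity given the preceding lines,
# then flag lines that are outliers relative to the file's distribution.
# The model choice and the z-score rule below are illustrative assumptions.
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # any causal code LM works
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def line_perplexity(context: str, line: str) -> float:
    """Perplexity of `line` conditioned on the preceding lines (`context`)."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + line, return_tensors="pt").input_ids
    # Mask context tokens with -100 so the loss covers only the target line.
    labels = full_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100
    with torch.no_grad():
        loss = model(full_ids, labels=labels).loss  # mean NLL over line tokens
    return math.exp(loss.item())

def flag_anomalous_lines(code: str, z_cutoff: float = 2.0) -> list[int]:
    """Flag lines whose perplexity is an outlier within this file.
    The z-score threshold is a placeholder for the paper's criterion."""
    lines = [l for l in code.splitlines(keepends=True) if l.strip()]
    ppls = [line_perplexity("".join(lines[:i]), l) for i, l in enumerate(lines)]
    mean = sum(ppls) / len(ppls)
    std = (sum((p - mean) ** 2 for p in ppls) / len(ppls)) ** 0.5 or 1.0
    return [i for i, p in enumerate(ppls) if (p - mean) / std > z_cutoff]
```

Masking the context tokens ensures the loss, and hence the perplexity, is computed over the target line alone; that per-line conditioning is what distinguishes this approach from token-level perplexity analysis over the whole file.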
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: code generation and understanding; fact checking, rumor/misinformation detection
Contribution Types: Data analysis
Languages Studied: English
Keywords: Code LLMs, Data Poisoning, Dead Code, Perplexity Analysis, Backdoor Detection
Submission Number: 306