Abstract: In this work, we compile $\textbf{\texttt{DroidCollection}}$, the most extensive open data suite for training and evaluating machine-generated code detectors, comprising over a million code samples, seven programming languages, generations from 43 coding models, and more than three real-world coding domains. Alongside fully AI-generated samples, our collection includes human-AI co-authored code, as well as adversarial samples explicitly crafted to evade detection. Subsequently, we develop $\textbf{\texttt{DroidDetect}}$, a suite of encoder-only detectors trained using a multi-task objective over $\texttt{DroidCollection}$. Our experiments show that existing detectors fail to generalise beyond the narrow coding domains and programming languages covered by their training data. Additionally, we demonstrate that while most detectors are easily compromised by humanising the output distributions through superficial prompting and alignment approaches, this vulnerability can largely be remedied by training on a small amount of adversarial data. Finally, we demonstrate the effectiveness of metric learning and uncertainty-based resampling as means of improving detector training on potentially noisy distributions.
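To make the "encoder-only detectors trained using a multi-task objective" concrete, the sketch below is a minimal, illustrative setup rather than the authors' exact DroidDetect architecture: it assumes a HuggingFace-style pre-trained code encoder, a three-way primary head (human / AI-generated / co-authored, following the classes mentioned in the abstract), and a hypothetical auxiliary programming-language head; the class counts, auxiliary task, and loss weighting are all assumptions.

```python
import torch
import torch.nn as nn


class MultiTaskCodeDetector(nn.Module):
    """Illustrative multi-task detector: a shared encoder with two heads.

    Assumptions (not taken from the paper): the encoder follows the
    HuggingFace interface (returns .last_hidden_state), the primary task
    is 3-way (human / AI / co-authored), and the auxiliary task is
    programming-language identification over the seven studied languages.
    """

    def __init__(self, encoder: nn.Module, hidden_size: int,
                 num_detect_classes: int = 3, num_languages: int = 7):
        super().__init__()
        self.encoder = encoder
        self.detect_head = nn.Linear(hidden_size, num_detect_classes)
        self.lang_head = nn.Linear(hidden_size, num_languages)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor):
        # Use the first-token ([CLS]-style) representation as the sequence embedding.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        pooled = hidden[:, 0]
        return self.detect_head(pooled), self.lang_head(pooled)


def multitask_loss(detect_logits, lang_logits, detect_labels, lang_labels,
                   aux_weight: float = 0.5) -> torch.Tensor:
    # Weighted sum of the primary detection loss and the auxiliary loss;
    # the 0.5 weighting is an arbitrary illustrative choice.
    ce = nn.functional.cross_entropy
    return ce(detect_logits, detect_labels) + aux_weight * ce(lang_logits, lang_labels)
```

In practice, the auxiliary head could equally be swapped for a metric-learning objective (e.g., a contrastive loss pulling same-class code embeddings together), which is one way to read the abstract's reference to metric learning; the exact formulation used in the paper is not specified here.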
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: dataset, code, benchmarking, ai-generated content detection
Contribution Types: Publicly available software and/or pre-trained models, Data resources
Languages Studied: Python, Java, JavaScript, C++, C, Go, C#
Submission Number: 1389