Abstract: Authorship attribution has become increasingly accurate, posing a serious privacy risk for programmers who wish to remain anonymous. In this article, we introduce SHIELD to examine the robustness of different code authorship attribution approaches against adversarial code examples. We define four attacks on attribution techniques, including both targeted and non-targeted attacks, and realize them using adversarial code perturbation. We validate our methods on a dataset of 200 programmers from the Google Code Jam competition, targeting six state-of-the-art authorship attribution methods that adopt various techniques for extracting authorship traits from source code, including RNN, CNN, and code stylometry. Our experiments demonstrate the vulnerability of current authorship attribution methods to adversarial attacks: the non-targeted attack achieves a success rate exceeding 98.5%, accompanied by a degradation of identification confidence exceeding 13%. For targeted attacks, we show the possibility of impersonating a programmer using targeted adversarial perturbations, with success rates ranging from 66% to 88% across attribution techniques under several adversarial scenarios.
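To make the non-targeted attack setting concrete, the following is a minimal sketch of a black-box attack loop that applies semantics-preserving perturbations to source code until the attributed author changes. It is illustrative only, not the paper's SHIELD implementation: the perturbation functions and the `predict_author` interface are assumptions introduced here for exposition.

```python
"""Illustrative sketch of a black-box, non-targeted attack on an
authorship attribution classifier (hypothetical interface, not SHIELD)."""
import random
import re


def add_comment(code: str) -> str:
    # Appending a comment alters lexical/stylometric features
    # without changing program behavior.
    return code + "\n# padding comment\n"


def rename_first_var(code: str) -> str:
    # Crude identifier rename: rewrite the first assigned variable to
    # `tmp` (assumes `tmp` is unused; a real tool would rename via an AST).
    match = re.search(r"\b([a-z_]\w*)\s*=", code)
    if match is None:
        return code
    return re.sub(r"\b%s\b" % re.escape(match.group(1)), "tmp", code)


# Pool of semantics-preserving perturbations (assumed, for illustration).
PERTURBATIONS = [add_comment, rename_first_var]


def untargeted_attack(code, predict_author, max_queries=100):
    """Perturb `code` until the black-box classifier `predict_author`
    (code -> author label) no longer outputs the true author."""
    true_author = predict_author(code)
    for _ in range(max_queries):
        code = random.choice(PERTURBATIONS)(code)
        if predict_author(code) != true_author:
            return code  # adversarial example found
    return None  # attack failed within the query budget
```

In practice, an attack of the kind evaluated in the paper would draw on a much richer set of guaranteed semantics-preserving program transformations, and a targeted variant would instead iterate until the classifier outputs a chosen victim author.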