Can AI-Generated Text be Reliably Detected? Stress Testing AI Text Detectors Under Various Attacks

TMLR Paper3462 Authors

09 Oct 2024 (modified: 20 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: Large Language Models (LLMs) perform impressively well in various applications, such as document completion and question-answering. However, the potential for misuse of these models in activities such as plagiarism, generating fake news, and spamming has raised concerns about their responsible use. Consequently, the reliable detection of AI-generated text has become a critical area of research. Recent works have attempted to address this challenge through various methods, including identifying model signatures in generated text and applying watermarking techniques, and these detectors have been shown to be effective in their specific settings. In this paper, we stress-test the robustness of these AI text detectors in the presence of an attacker. We introduce a recursive paraphrasing attack to stress-test a wide range of detection schemes, including those based on watermarking, as well as neural network-based detectors, zero-shot classifiers, and retrieval-based detectors. Our experiments, conducted on passages of approximately 300 tokens each, reveal the varying sensitivities of these detectors to our attacks. We also observe that these paraphrasing attacks cause only slight degradation in text quality. We analyze the trade-off between attack strength and the resulting text quality, measured through human studies, perplexity scores, and accuracy on text benchmarks. Our findings indicate that while our recursive paraphrasing method can significantly reduce detection rates, it only slightly degrades text quality in many cases, highlighting potential vulnerabilities in current detection systems in the presence of an attacker. Additionally, we investigate the susceptibility of watermarked LLMs to spoofing attacks aimed at misclassifying human-written text as AI-generated. We demonstrate that an attacker can infer hidden AI text signatures without white-box access to the detection method, potentially leading to reputational risks for LLM developers. Finally, we provide a theoretical framework connecting the AUROC of the best possible detector to the Total Variation (TV) distance between human and AI text distributions. This analysis offers insights into the fundamental challenges of reliable detection as language models continue to advance.
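To make the attack concrete, here is a minimal sketch of recursive paraphrasing, assuming a generic seq2seq paraphraser and a score-based detector. The `paraphrase` and `detect` callables, the round budget, and the threshold are hypothetical stand-ins supplied by the reader, not the authors' exact pipeline (the full paper specifies the actual paraphraser and detectors used):

```python
from typing import Callable

def recursive_paraphrase(
    text: str,
    paraphrase: Callable[[str], str],  # one paraphrasing pass, e.g. a T5-style seq2seq model
    detect: Callable[[str], float],    # detector score in [0, 1]; higher means "AI-generated"
    rounds: int = 3,                   # recursion depth (assumed; a small number of rounds)
    threshold: float = 0.5,            # detector decision threshold (assumed)
) -> str:
    """Repeatedly feed the paraphraser's output back into itself until the
    detector score falls below the threshold or the round budget is spent."""
    for i in range(rounds):
        text = paraphrase(text)        # each round paraphrases the previous round's output
        score = detect(text)
        print(f"round {i + 1}: detector score = {score:.3f}")
        if score < threshold:
            break                      # the text now evades the detector
    return text
```

The sketch makes the abstract's trade-off visible: each additional round can push the detection score lower but risks degrading text quality, which is why the paper measures quality via human studies, perplexity, and benchmark accuracy.

For the theoretical result, the AUROC–TV connection presumably takes the form of the bound from the authors' earlier preprint; the exact statement in this revision is an assumption, so consult the paper for the precise version. For any detector $D$,

$$\mathrm{AUROC}(D) \;\le\; \frac{1}{2} + \mathrm{TV}(\mathcal{M}, \mathcal{H}) - \frac{\mathrm{TV}(\mathcal{M}, \mathcal{H})^2}{2},$$

where $\mathcal{M}$ and $\mathcal{H}$ denote the AI-text and human-text distributions. As language models advance and $\mathrm{TV}(\mathcal{M}, \mathcal{H}) \to 0$, the best achievable AUROC approaches $1/2$, i.e., random guessing.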
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=1YYpg9tdVb
Changes Since Last Submission: As noted by the AE, "all reviewers appreciated the technical developments of the paper and think this could be an exciting paper that addresses a very important and timely challenge". However, a common comment was "to update the narrative of the paper to highlight the nuances of the technical results". Based on this comment from the AE and the reviewers, we made significant revisions to the draft, updating the narrative to highlight its technical nuances. To summarize the revised narrative: we changed the title to "Can AI-Generated Text be Reliably Detected? Stress Testing AI Text Detectors Under Various Attacks", which fits the revised version since we focus on the reliability of detectors in the presence of various attacks. We present our attacks to evade text detectors and evaluate the trade-off between attack strength and the resulting text quality. We have revised the entire manuscript to fit this narrative. The changes include:
1. Modified title -- focusing on stress-testing detectors in the presence of various attacks
2. Completely rewritten abstract -- to incorporate our results following the modified title
3. Substantially rewritten introduction -- following the theme of our new narrative
4. Revised remaining sections -- to reflect the new narrative of the paper
Assigned Action Editor: ~Kamalika_Chaudhuri1
Submission Number: 3462