Large Language Models Are Effective Code Watermarkers

ACL ARR 2026 January Submission 1052 Authors

27 Dec 2025 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: Source Code Watermarking, Large Language Models, Source Attribution
Abstract: The widespread use of large language models (LLMs) and open-source code has raised ethical and security concerns regarding the distribution and attribution of source code, including unauthorized redistribution, license violations, and misuse of code for malicious purposes. Watermarking has emerged as a promising solution for source attribution, but existing techniques rely heavily on hand-crafted transformation rules, abstract syntax tree (AST) manipulation, or task-specific training, limiting their scalability and generality across languages. Moreover, their robustness against attacks remains limited. To address these limitations, we propose \textbf{CodeMark-LLM}, an LLM-driven watermarking framework that embeds watermarks into source code without compromising its semantics or readability. CodeMark-LLM consists of two core components: (i) a \textit{Semantically Consistent Embedding} module that applies functionality-preserving transformations to encode watermark bits, and (ii) a \textit{Differential Comparison Extraction} module that identifies the applied transformations by comparing the original and watermarked code. Leveraging the cross-lingual generalization ability of LLMs, CodeMark-LLM avoids language-specific engineering and training pipelines. Extensive experiments across diverse programming languages and attack scenarios demonstrate its robustness, effectiveness, and scalability.
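To make the two-module design concrete, here is a minimal, LLM-free sketch of the embed/extract idea described in the abstract. The single transformation pair (rewriting `x += 1` as `x = x + 1`) and the function names `embed_bits` and `extract_bits` are hypothetical stand-ins: in CodeMark-LLM itself, an LLM selects and applies the functionality-preserving transformations, so this is an illustration of the protocol, not the paper's implementation.

```python
import re

# Hypothetical transformation pair (illustrative only; the paper's method
# delegates such rewrites to an LLM):
#   bit 1 -> rewrite "x += 1" as the equivalent "x = x + 1"
#   bit 0 -> leave the eligible line unchanged

def embed_bits(code: str, bits: list[int]) -> str:
    """Semantically consistent embedding: encode one watermark bit at each
    eligible site by optionally applying a functionality-preserving rewrite."""
    out, i = [], 0
    for line in code.splitlines():
        m = re.match(r"(\s*)(\w+)\s*\+=\s*1$", line)
        if m and i < len(bits):
            if bits[i] == 1:
                line = f"{m.group(1)}{m.group(2)} = {m.group(2)} + 1"
            i += 1
        out.append(line)
    return "\n".join(out)

def extract_bits(original: str, watermarked: str) -> list[int]:
    """Differential comparison extraction: recover bits by diffing the two
    versions; a transformed eligible line encodes 1, an untouched one 0."""
    bits = []
    for o, w in zip(original.splitlines(), watermarked.splitlines()):
        if re.match(r"\s*\w+\s*\+=\s*1$", o):
            bits.append(0 if o == w else 1)
    return bits

# Usage: embed two bits, then recover them from the original/watermarked pair.
src = "def f(x):\n    x += 1\n    y = 0\n    y += 1\n    return x + y"
wm = embed_bits(src, [1, 0])
assert extract_bits(src, wm) == [1, 0]
```

Because each rewrite preserves program behavior, the watermarked code compiles and runs identically to the original; extraction here requires the original code for the diff, matching the differential-comparison setting the abstract describes.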
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: code generation and understanding, security/privacy, NLP for social good
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 1052