VLA-Mark: A cross-modal watermark for large vision-language alignment models

ACL ARR 2025 May Submission6221 Authors

20 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Vision-language models demand watermarking solutions that protect intellectual property without compromising multimodal coherence. Existing text watermarking methods disrupt visual-textual alignment through biased token selection and static strategies, leaving semantic-critical concepts vulnerable. We propose VLA-Mark, a vision-aligned framework that embeds detectable watermarks while preserving semantic fidelity through cross-modal coordination. Our approach integrates multiscale visual-textual alignment metrics, combining localized patch affinity, global semantic coherence, and contextual attention patterns, to guide watermark injection without model retraining. An entropy-sensitive mechanism dynamically balances watermark strength and semantic preservation, prioritizing visual grounding during low-uncertainty generation phases. Experiments show 7.4% lower PPL and 26.6% higher BLEU than conventional methods, with near-perfect detection (98.8% AUC). The framework demonstrates 96.1% resilience against attacks such as paraphrasing and synonym substitution while maintaining text-visual consistency, establishing new standards for quality-preserving multimodal watermarking.
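To make the entropy-sensitive balancing concrete, the sketch below shows one plausible way an entropy-adaptive logit bias could be implemented in the style of green-list text watermarks. It is an illustration only, not the authors' method: the names (`entropy_adaptive_bias`, `green_ids`, `delta_max`) are hypothetical, and the actual VLA-Mark mechanism additionally folds in the cross-modal alignment scores (patch affinity, global coherence, attention patterns) described in the abstract.

```python
# Minimal sketch (assumed implementation, not from the paper):
# scale a green-list watermark bias by the normalized entropy of the
# next-token distribution, so low-uncertainty (visually grounded) steps
# are perturbed less and high-uncertainty steps carry more watermark signal.
import torch


def entropy_adaptive_bias(logits: torch.Tensor,
                          green_ids: torch.Tensor,
                          delta_max: float = 2.0) -> torch.Tensor:
    """Return logits with an entropy-scaled bias added to green-list tokens."""
    probs = torch.softmax(logits, dim=-1)
    # Shannon entropy of the predictive distribution (in nats).
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    # Normalize by the maximum possible entropy, log(vocab_size).
    max_entropy = torch.log(torch.tensor(float(logits.shape[-1])))
    delta = delta_max * (entropy / max_entropy)  # in [0, delta_max]

    biased = logits.clone()
    # Boost green-list token logits; broadcast delta over the green set.
    biased[..., green_ids] += delta.unsqueeze(-1)
    return biased
```

Under this reading, a confident (low-entropy) generation step receives almost no bias, leaving semantically critical, visually grounded tokens intact, while ambiguous steps absorb a stronger, more detectable watermark.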
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: watermark, multi-modal model, large language model
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 6221