Keywords: Prompt Injections, Malicious Prompts, Large Language Models
TL;DR: Our paper introduces ZEDD (Zero Shot Embedding Drift Detection), a lightweight, model agnostic framework that detects prompt injection attacks in LLMs by measuring drift within a semantic embedding space.
Abstract: Prompt injection attacks have become an increasing vulnerability for Large Language Model (LLM) applications, where adversarial prompts exploit indirect input channels such as emails or user-generated content to circumvent alignment safeguards and induce harmful or unintended outputs. Despite advances in alignment, even state-of-the-art LLMs remain broadly vulnerable to sophisticated adversarial prompts, underscoring the urgent need for robust, productive, and generalizable detection mechanisms beyond inefficient, model-specific patches. In this work, we propose Zero-Shot Embedding Drift Detection (ZEDD), a lightweight, low-engineering-overhead framework that identifies both direct and indirect prompt injection attempts by quantifying semantic shifts in embedding space between benign and suspect inputs. ZEDD operates without requiring access to model internals, prior knowledge of attack types, or task-specific retraining, enabling efficient zero-shot deployment across diverse LLM architectures. Our method leverages aligned adversarial-clean prompt pairs and measures embedding drift via cosine similarity, abstracting away surface-level perturbations to capture subtle adversarial manipulations inherent to real-world injection attacks. To ensure robust evaluation, we assemble and re-annotate the comprehensive LLMail-Inject dataset spanning five injection categories derived from publicly available sources. Extensive experiments demonstrate that embedding drift is a robust and transferable signal, outperforming traditional regex-based and supervised methods in both detection accuracy and operational efficiency. With greater than 93% accuracy in classifying prompt injections across model architectures like Llama 3, Qwen 2, and Mistral with a false positive rate of <3%, our approach offers a lightweight, scalable defense layer that integrates into existing LLM pipelines, addressing a critical gap in securing LLM-powered systems to withstand progressively adaptive adversarial threats. All code utilized in this project is disclosed at https://github.com/AnirudhSekar/ZEDD/blob/main/Zero_Shot_Embedding_Drift_Detection_A_Lightweight_Defense_Against_Prompt_Injections_in_LLMs.ipynb
Submission Number: 64
Loading