PromptScreen: Efficient Jailbreak Mitigation Using Semantic Linear Classification in a Multi-Staged Pipeline
Keywords: Prompt injection, jailbreak attacks, LLM security, semantic filtering, efficient defenses, adversarial prompting
Abstract: Prompt injection and jailbreaking attacks pose persistent security challenges to large language model (LLM)–based systems. We present \textbf{PromptScreen}, an efficient and systematically evaluated defense architecture that mitigates these threats through a lightweight, multi‑stage pipeline. Its core component is a semantic filter based on text normalization, TF–IDF representations, and a Linear SVM classifier. Despite its simplicity, this module achieves 93.4\% accuracy and 96.5\% specificity on held‑out data, substantially reducing attack throughput while incurring negligible computational overhead.
Building on this efficient foundation, the full pipeline integrates complementary detection and mitigation mechanisms that operate at successive stages, providing strong robustness with minimal latency. In comparative experiments, our SVM‑based configuration improves overall accuracy from 35.1 \% to 93.4\% while reducing average time‑to‑completion from $\approx$ 450s to 47s, yielding over 10× lower latency than ShieldGemma. These results demonstrate that the proposed design simultaneously advances defensive precision and efficiency, addressing a core limitation of current model‑based moderators.
Evaluation across a curated corpus of over 30{,}000 labeled prompts, including benign, jailbreak, and application‑layer injections, confirms that staged, resource‑efficient defenses can robustly secure modern LLM‑driven applications.
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: AI / LLM Agents, Language Modeling, Interpretability and Analysis of Models for NLP, Semantics: Lexical and Sentence-Level,
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models, Data resources
Languages Studied: English
Submission Number: 9238
Loading