Open Schrödinger's Closed Box: Identifying Retrieval Augmented Generation in API-Accessible Large Language Model Services
Keywords: retrieval-augmented generation, security/privacy
Abstract: Large language models (LLMs) are powerful at question-answering but prone to hallucinations due to limited domain-specific or up-to-date knowledge.
Retrieval-augmented generation (RAG) mitigates this by adding an external retriever and knowledge database, yet RAG remains vulnerable to targeted attacks that degrade outputs or manipulate opinions.
Prior attacks typically assume adversaries know the service is RAG-enhanced and may even know deployment details, an assumption often invalid for real-world commercial LLMs that expose only black-box APIs.
This opacity also risks misleading users about system capabilities.
This work aims to bridge this gap by proposing $\texttt{RAG-ID}$, a framework for $\underline{\text{ID}}$entifying $\underline{\text{RAG}}$ properties in LLM services.
We classify adversaries into three knowledge levels and design six attack methods.
Experiments show these attacks reliably detect RAG — up to 99.97\% accuracy with only partial or no prior knowledge, and nearly 100\% when the deployed LLM and knowledge database are known.
After detection, $\texttt{RAG-ID}$ can infer finer RAG properties (e.g., deployed LLM and knowledge database).
We position $\texttt{RAG-ID}$ as a reconnaissance tool for attackers, an aid for users seeking transparency when selecting LLM services, and a guide for RAG developers refining their security measures.
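For context, the RAG pipeline the abstract describes — an external retriever selects passages from a knowledge database and prepends them to the query before the LLM answers — can be sketched as below. This is a minimal, illustrative sketch: the toy keyword-overlap retriever, the sample database, and the function names (`retrieve`, `build_prompt`) are assumptions for exposition, not the paper's implementation or any particular deployment.

```python
import re

# Toy knowledge database; a real deployment would hold domain-specific
# or up-to-date documents indexed by a dense retriever.
KNOWLEDGE_DB = [
    "RAG augments an LLM with an external retriever and knowledge database.",
    "Hallucinations arise when an LLM lacks domain-specific knowledge.",
    "Black-box APIs expose only model outputs, not deployment details.",
]

def retrieve(query: str, db: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (a stand-in for a
    learned retriever) and return the top-k."""
    tokens = lambda s: set(re.findall(r"[a-z\-]+", s.lower()))
    q = tokens(query)
    return sorted(db, key=lambda p: len(q & tokens(p)), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend the retrieved context to the user query before the LLM call."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}"
```

A RAG-enhanced service answers from `build_prompt(query, retrieve(query, db))`, whereas a plain LLM sees only the raw query; this behavioral difference is what black-box detection of RAG properties can, in principle, exploit.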
Paper Type: Long
Research Area: Information Extraction and Retrieval
Research Area Keywords: retrieval-augmented generation, security/privacy
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2039