Abstract: Highlights
• A compact T5 paraphraser replaces LLM queries, cutting latency by 77.2% and cost by 95.7% versus SimLLM.
• Dual-text fusion (original + de-AI-ified text) exposes AI artifacts via RoBERTa classification.
• Achieves SOTA 0.932 ROC-AUC across 12 LLMs (e.g., GPT-4o, Gemini, LLaMA).
• Bidirectional adversarial training reduces evasion by up to 38.2% under paraphrasing attacks.
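The dual-text fusion idea from the highlights can be sketched as follows. This is a minimal illustration, not the paper's implementation: `paraphrase`, `fuse`, and `detect` are hypothetical stand-ins, where the paper uses a compact T5 paraphraser and a RoBERTa classifier over the fused pair.

```python
def paraphrase(text: str) -> str:
    """Hypothetical stand-in for the compact T5 'de-AI-ify' paraphraser.

    A real system rewrites the text; here we only lowercase it so the
    two views can differ in a deterministic, testable way.
    """
    return text.lower()


def fuse(original: str, rewritten: str, sep: str = " </s> ") -> str:
    """Concatenate the original and paraphrased views into one input,
    so a downstream classifier can compare them side by side."""
    return original + sep + rewritten


def detect(text: str) -> str:
    """Fuse the original text with its paraphrase, then classify.

    Placeholder decision rule: a real RoBERTa head would score the
    fused input; here we just report whether the paraphrase changed
    the surface form at all.
    """
    fused = fuse(text, paraphrase(text))
    left, right = fused.split(" </s> ")
    return "ai-like" if left != right else "human-like"
```

Usage: `detect("Hello World")` fuses `"Hello World </s> hello world"` and, under the toy rule, labels it `"ai-like"` because the paraphrase altered the text.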
External IDs: dblp:journals/inffus/TanZLWCG26