DeTinyLLM: Efficient detection of machine-generated text via compact paraphrase transformation

Published: 01 Jan 2026 · Last Modified: 06 Nov 2025 · Information Fusion 2026 · CC BY-SA 4.0
Abstract — Highlights:
• A compact T5 paraphraser replaces LLM queries, cutting latency by 77.2% and cost by 95.7% versus SimLLM.
• Dual-text fusion (original + de-AI-ified paraphrase) exposes AI artifacts via RoBERTa classification (sketched below).
• Achieves state-of-the-art 0.932 ROC-AUC across 12 LLMs (e.g., GPT-4o, Gemini, LLaMA).
• Bidirectional adversarial training reduces evasion by up to 38.2% against paraphrasing attacks.
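The highlights describe a two-stage pipeline: a compact T5 model paraphrases ("de-AI-ifies") the input, and a RoBERTa classifier then scores the original and paraphrased texts as a fused pair. The following is a minimal sketch of that idea, not the authors' released implementation; the checkpoint names (`t5-base`, `roberta-base`) are generic stand-ins for the fine-tuned models the paper presumably uses.

```python
# Sketch of a dual-text detector: paraphrase with a compact T5, then classify
# the (original, paraphrase) pair with RoBERTa. Model names are placeholders.
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    AutoModelForSequenceClassification,
)

PARAPHRASER = "t5-base"      # stand-in for the paper's compact paraphrase model
DETECTOR = "roberta-base"    # stand-in for the paper's fine-tuned RoBERTa classifier

para_tok = AutoTokenizer.from_pretrained(PARAPHRASER)
para_model = AutoModelForSeq2SeqLM.from_pretrained(PARAPHRASER)
det_tok = AutoTokenizer.from_pretrained(DETECTOR)
det_model = AutoModelForSequenceClassification.from_pretrained(DETECTOR, num_labels=2)

def paraphrase(text: str) -> str:
    """Produce a paraphrased ('de-AI-ified') rewrite of the input text."""
    inputs = para_tok("paraphrase: " + text, return_tensors="pt", truncation=True)
    out = para_model.generate(**inputs, max_new_tokens=256, num_beams=4)
    return para_tok.decode(out[0], skip_special_tokens=True)

def detect(text: str) -> float:
    """Return an (untrained, illustrative) probability that `text` is machine-generated,
    scoring the original and its paraphrase as a fused sentence pair."""
    rewritten = paraphrase(text)
    pair = det_tok(text, rewritten, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = det_model(**pair).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(detect("The quick brown fox jumps over the lazy dog."))
```

With an untrained classification head the score is not meaningful; the point is the data flow: one cheap local paraphrase call replaces repeated LLM queries, and the classifier sees both versions of the text so that artifacts removed by the paraphrase become a detectable signal.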