Keywords: AI in Cybersecurity (AICyber), Phishing Detection, Large Language Models, Zero-shot Learning, Few-shot Learning, Prompt Engineering
TL;DR: Zero-shot and few-shot learning with LLMs offer practical utility, operational efficiency, and scalability. We benchmarked three LLMs for phishing URL detection using zero-shot and few-shot prompting.
Abstract: The Uniform Resource Locator (URL), introduced in a connectivity-first era to define access and locate resources, remains historically limited: it lacks future-proof mechanisms for security, trust, or resilience against fraud and abuse, despite reactive protections such as HTTPS introduced during the cybersecurity era. In the current AI-first threatscape, deceptive URLs have reached unprecedented sophistication, driven by cybercriminals' widespread use of generative AI and an AI-vs-AI arms race producing context-aware phishing websites and URLs that are virtually indistinguishable from legitimate ones to both users and traditional detection tools. Although AI-generated phishing accounted for only a small fraction of filter-bypassing attacks in 2024, phishing volume has escalated by over 4,000% since 2022, with nearly 50% more attacks evading detection. With the threatscape escalating and phishing tactics emerging faster than labeled data can be produced, zero-shot and few-shot learning with large language models (LLMs) offer a timely and adaptable solution, enabling generalization with minimal supervision. Given the critical importance of phishing URL detection in large-scale cybersecurity defense systems, we present a comprehensive benchmark of LLMs under a unified zero-shot and few-shot prompting framework and reveal their operational trade-offs. Our evaluation uses a balanced dataset with consistent prompts and offers a detailed analysis of performance, generalization, and model efficacy, quantified by accuracy, precision, recall, F1 score, AUROC, and AUPRC, to reflect both classification quality and practical utility in threat-detection settings. We conclude that few-shot prompting improves performance across multiple LLMs.
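To make the evaluated setup concrete, the sketch below illustrates zero-shot versus few-shot prompt construction for phishing URL classification and the six metrics named in the abstract. The paper's exact prompts, models, and dataset are not reproduced in the abstract, so the template, exemplar URLs, and evaluation helper here are illustrative assumptions (using scikit-learn), not the authors' implementation.

```python
# Illustrative sketch of the zero-/few-shot prompting framework and the
# metrics reported in the abstract. All templates, exemplars, and URLs
# below are hypothetical, not the paper's actual prompts or data.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, average_precision_score)

TEMPLATE = ("Classify the following URL as 'phishing' or 'benign'. "
            "Answer with one word.\nURL: {url}\nAnswer:")

# Hypothetical labeled exemplars prepended only in the few-shot condition.
EXEMPLARS = [("http://paypa1-secure-login.example.net/verify", "phishing"),
             ("https://www.wikipedia.org/", "benign")]

def build_prompt(url: str, few_shot: bool) -> str:
    """Return a zero-shot prompt, or prepend labeled exemplars for few-shot."""
    query = TEMPLATE.format(url=url)
    if not few_shot:
        return query
    shots = "\n\n".join(TEMPLATE.format(url=u) + " " + y for u, y in EXEMPLARS)
    return shots + "\n\n" + query

def evaluate(y_true, y_score, threshold=0.5):
    """Compute the abstract's six metrics from binary labels (y_true) and
    per-URL phishing probabilities (y_score); AUPRC is approximated by
    scikit-learn's average precision."""
    y_pred = [int(s >= threshold) for s in y_score]
    return {"accuracy":  accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred),
            "recall":    recall_score(y_true, y_pred),
            "f1":        f1_score(y_true, y_pred),
            "auroc":     roc_auc_score(y_true, y_score),
            "auprc":     average_precision_score(y_true, y_score)}
```

Under this setup, the zero-shot and few-shot conditions differ only in the exemplars prepended by build_prompt, so the same evaluate call applies to every model and condition.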
Submission Type: Benchmark Paper (4-9 Pages)
NeurIPS Resubmit Bundle: pdf
NeurIPS Resubmit Summary: Some NeurIPS reviewers raised concerns about novelty, but acknowledged that all technical questions were resolved and the work is clearly written and replicable. We emphasize that this is the first reproducible benchmark of frontier LLMs for phishing URL detection under deployment constraints such as heterogeneous schemas, latency budgets, and no weight access, conditions absent from prior benchmarks. The study contributes operationally relevant empirical evidence that complements rather than duplicates existing work.
NeurIPS Resubmit Attestation: I am an author of the referenced NeurIPS 2025 submission. I have the right to share the anonymous reviews/meta-review for the exclusive use of the workshop PCs/reviewers. I understand they will not be redistributed publicly.
Submission Number: 149