Counterfactual Evaluation for Blind Attack Detection in LLM-based Evaluation Systems

ACL ARR 2025 May Submission1718 Authors

18 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: This paper investigates defenses for LLM-based evaluation, where prompt injection attacks can manipulate scores by deceiving the evaluation system. We formalize blind attacks as a class of attacks in which candidate answers are crafted independently of the true answer. To counter such attacks, we propose an evaluation framework that combines standard and counterfactual evaluation. Experiments show that this framework significantly improves attack detection while incurring minimal performance trade-offs for recent models.
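As a minimal sketch of the idea described in the abstract, the snippet below shows one way a combined standard/counterfactual check could be wired up. It assumes the counterfactual evaluation replaces the true reference answer with an unrelated one and flags candidates that score high against it; the `judge` callable, the threshold, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: combining standard and counterfactual evaluation to flag
# blind attacks. `judge` stands in for any LLM-based scorer returning a
# score in [0, 1] for (question, reference_answer, candidate_answer).
from typing import Callable, Dict

Judge = Callable[[str, str, str], float]


def counterfactual_check(
    judge: Judge,
    question: str,
    true_reference: str,
    counterfactual_reference: str,
    candidate: str,
    threshold: float = 0.5,  # illustrative cutoff, not from the paper
) -> Dict[str, float]:
    """Score the candidate against the true and a counterfactual reference.

    A blind attack is crafted without access to the true answer, so an
    injected answer that fools the judge tends to score high regardless of
    which reference is shown. A high score against an unrelated
    (counterfactual) reference is therefore treated as a red flag.
    """
    standard_score = judge(question, true_reference, candidate)
    counterfactual_score = judge(question, counterfactual_reference, candidate)
    suspected_attack = counterfactual_score >= threshold
    return {
        "standard_score": standard_score,
        "counterfactual_score": counterfactual_score,
        "suspected_blind_attack": float(suspected_attack),
        # Only trust the standard score if the counterfactual check passes.
        "final_score": 0.0 if suspected_attack else standard_score,
    }
```

In use, `judge` would be an LLM prompted to grade the candidate against the supplied reference; a legitimate answer should score well only with the true reference, whereas a blind injection scores well with both.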
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, automatic creation and evaluation of language resources, prompting, metrics, robustness, security and privacy, red teaming
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Keywords: benchmarking, automatic creation and evaluation of language resources, prompting, metrics, robustness, security and privacy, red teaming
Submission Number: 1718