LLMs Show Surface-Form Brittleness Under Paraphrase Stress Tests

Published: 24 Sept 2025, Last Modified: 08 Oct 2025
Venue: NeurIPS 2025 LLM Evaluation Workshop (Oral)
License: CC BY 4.0
Keywords: LLM evaluation, paraphrase stress test, surface-form brittleness, benchmark contamination, memorization, robustness, ARC (ARC-Easy, ARC-Challenge), instruction-tuned models, Mistral-7B-Instruct, Qwen2.5-7B-Instruct, deterministic decoding, reproducibility
TL;DR: Paraphrasing ARC questions causes consistent 6–10 point accuracy drops in 7B LLMs, revealing surface-form brittleness and raising contamination concerns.
Abstract: Benchmark scores for Large Language Models (LLMs) can be inflated by memorization of test items or near-duplicates. We present a simple protocol that probes generalization by re-evaluating models on paraphrased versions of benchmark questions. Using Mistral-7B-Instruct and Qwen2.5-7B-Instruct, we measure the accuracy gap between original and paraphrased items on ARC-Easy and ARC-Challenge. Our pipeline controls decoding, enforces a multiple-choice output format, and includes a robust paraphrase-cleaning step to preserve semantics. We find that paraphrasing induces a consistent 6–10 point accuracy drop relative to the original items, in line with prior concerns about contamination and brittle surface-form shortcuts.
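The sketch below illustrates the kind of evaluation loop the abstract describes: deterministic (greedy) decoding, a constrained single-letter multiple-choice answer format, and an original-vs-paraphrase accuracy gap. It is a minimal sketch assuming Hugging Face `transformers` and an illustrative item schema (`question`, `paraphrase`, `choices`, `answer`); the model identifier, helper names, and prompt template are assumptions, not the authors' released pipeline.

```python
# Minimal sketch: original-vs-paraphrase accuracy gap under deterministic decoding.
# Assumed item schema: {"question": str, "paraphrase": str, "choices": [str], "answer": "A".."E"}.
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-Instruct-v0.3"  # assumption; swap in Qwen/Qwen2.5-7B-Instruct as needed

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

LETTERS = "ABCDE"  # ARC items can have 3-5 answer options

def build_prompt(question: str, choices: list[str]) -> str:
    # Enforce a single-letter multiple-choice output format (illustrative template).
    opts = "\n".join(f"{LETTERS[i]}. {c}" for i, c in enumerate(choices))
    return (
        f"Question: {question}\n{opts}\n"
        "Answer with only the letter of the correct option.\nAnswer:"
    )

@torch.no_grad()
def predict(question: str, choices: list[str]) -> str:
    inputs = tokenizer(build_prompt(question, choices), return_tensors="pt").to(model.device)
    # do_sample=False gives greedy, i.e. deterministic, decoding
    out = model.generate(**inputs, max_new_tokens=4, do_sample=False)
    new_tokens = out[0, inputs["input_ids"].shape[1]:]
    text = tokenizer.decode(new_tokens, skip_special_tokens=True)
    match = re.search(r"[ABCDE]", text.upper())
    return match.group(0) if match else ""

def accuracy(items: list[dict], field: str) -> float:
    correct = sum(predict(it[field], it["choices"]) == it["answer"] for it in items)
    return correct / len(items)

def paraphrase_gap(items: list[dict]) -> tuple[float, float, float]:
    # Returns (original accuracy, paraphrased accuracy, gap).
    orig = accuracy(items, "question")
    para = accuracy(items, "paraphrase")
    return orig, para, orig - para
```

Under this setup, the reported brittleness corresponds to `paraphrase_gap` returning a gap of roughly 0.06–0.10 on the ARC splits; the paraphrase-cleaning step mentioned in the abstract would run upstream, before items enter this loop.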
Submission Number: 207