Keywords: Winograd Schema Challenges, common sense reasoning, benchmark memorization, new resource
TL;DR: We evaluate LLMs by comparing their performance on WinoGrande with a new paraphrased version, WinoWhat, across common sense categories, revealing a performance drop that is not primarily due to benchmark memorization.
Abstract: In this study, we take a closer look at how Winograd schema challenges can be used to evaluate common sense reasoning in LLMs. Specifically, we evaluate generative models of different sizes on the popular WinoGrande benchmark. We release WinoWhat, a new corpus in which each instance of the WinoGrande validation set is paraphrased. Additionally, we evaluate performance on the challenge across five common sense knowledge categories, giving more fine-grained insights into which types of knowledge are more challenging for LLMs. Surprisingly, all models perform significantly worse on WinoWhat, implying that LLM reasoning capabilities are overestimated on WinoGrande. To verify whether this is an effect of benchmark memorization, we match benchmark instances to LLM training data and create two test suites. We observe that memorization has a minimal effect on model performance on WinoGrande.
Supplementary Material: zip
Submission Number: 48