Keywords: LLM, memorization, logical reasoning, perturbation, Knights and Knaves
TL;DR: We propose a memorization metric for reasoning tasks, inspired by human behaviors, and a dynamically generated logical reasoning benchmark to study the interplay between memorization and genuine reasoning abilities of LLMs.
Abstract: Large language models (LLMs) perform well on some complicated reasoning tasks, yet can also make the most basic reasoning mistakes. This contrasting behavior is puzzling when it comes to understanding the mechanisms behind LLMs' reasoning capabilities. One hypothesis is that the increasingly high and nearly saturated performance on common reasoning benchmarks could be due to the memorization of similar benchmark problems accidentally leaked into the training data.
In this paper, we systematically investigate this problem using a measure of memorization in reasoning tasks inspired by human behaviors, and a dynamically generated logical reasoning benchmark based on Knights and Knaves puzzles. We find that LLMs can interpolate the training puzzles (achieving $\sim100\%$ accuracy) after fine-tuning, yet fail when those puzzles are slightly perturbed, suggesting that the models rely heavily on memorization to solve those training puzzles.
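As a rough illustration of how such a perturbation-based memorization measure could be computed, the sketch below scores a set of puzzles by how often the model solves the original but fails a locally perturbed variant. The function names (`model_solve`, `perturb`) and the exact scoring rule are illustrative assumptions, not the precise definitions used in the paper.

```python
from typing import Callable, List, Tuple

def memorization_score(
    model_solve: Callable[[str], bool],   # True if the model answers a puzzle correctly (assumed interface)
    perturb: Callable[[str], str],        # returns a slightly perturbed version of a puzzle (assumed interface)
    puzzles: List[str],
) -> Tuple[float, float, float]:
    """Illustrative perturbation-based memorization measure (an assumption, not the paper's exact formula).

    A puzzle contributes to memorization if the model solves the original
    but fails the locally perturbed variant.
    """
    solved, consistent = 0, 0
    for puzzle in puzzles:
        if model_solve(puzzle):
            solved += 1
            if model_solve(perturb(puzzle)):
                consistent += 1
    n = len(puzzles)
    accuracy = solved / n if n else 0.0
    consistency = consistent / solved if solved else 0.0
    # High accuracy on originals combined with low consistency under perturbation suggests memorization.
    mem_score = accuracy * (1.0 - consistency)
    return accuracy, consistency, mem_score
```

In this sketch, high accuracy on the original puzzles paired with low consistency under perturbation yields a high score, matching the intuition that a solver who genuinely reasons should remain correct under small, answer-preserving changes.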
On the other hand, we show that LLMs learn to reason while interpolating the training set.
At higher levels of memorization, the model not only solves more unseen test puzzles, but also solves them relatively robustly under perturbation.
This phenomenon suggests that LLMs exhibit a complex interplay between memorization and genuine reasoning abilities, and reveals an interesting direction for future research. Our code and data are available at https://memkklogic.github.io/.
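For concreteness, the sketch below shows one way Knights and Knaves puzzles can be generated and verified programmatically, by brute-forcing truth assignments and keeping only instances with a unique solution. This is an assumption about how such a dynamic benchmark could be built, not the authors' actual generation pipeline; the `unique_solution` helper and the example statements are hypothetical.

```python
from itertools import product
from typing import Callable, Dict, List, Optional

Assignment = Dict[str, bool]  # name -> True if knight (truth-teller), False if knave (liar)

def unique_solution(
    names: List[str],
    statements: Dict[str, Callable[[Assignment], bool]],
) -> Optional[Assignment]:
    """Return the unique consistent assignment, or None if zero or several exist.

    Consistency rule: a knight's statement must be true, a knave's must be false,
    i.e. the truth value of each statement must equal the speaker's knight/knave status.
    """
    solutions = []
    for values in product([True, False], repeat=len(names)):
        assignment = dict(zip(names, values))
        if all(statements[name](assignment) == assignment[name] for name in names):
            solutions.append(assignment)
    return solutions[0] if len(solutions) == 1 else None

# Example 2-person puzzle: A says "B is a knave"; B says "A and I are of the same kind".
puzzle = {
    "A": lambda a: not a["B"],
    "B": lambda a: a["A"] == a["B"],
}
print(unique_solution(["A", "B"], puzzle))  # {'A': True, 'B': False}
```

A generator built this way can resample names and statement templates (or locally perturb an existing instance) and re-run the uniqueness check, which is what makes the benchmark dynamic rather than a fixed, leakable test set.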
Submission Number: 67