Pay Attention to Real World Perturbations! Natural Robustness Evaluation in Machine Reading Comprehension
Abstract: As neural language models achieve human-comparable performance on Machine Reading Comprehension (MRC) and see widespread adoption, ensuring their robustness in real-world scenarios has become increasingly important. However, current robustness evaluation research primarily develops synthetic perturbation methods, leaving it unclear how well they reflect real-life scenarios. Considering this, we present a framework to automatically examine MRC models on naturally occurring textual perturbations, by replacing paragraphs in MRC benchmarks with their counterparts drawn from the available Wikipedia edit history. This perturbation type is natural because it does not stem from an artificial generative process, making it inherently distinct from previously investigated synthetic approaches. In a large-scale study encompassing SQuAD datasets and various model architectures, we observe that natural perturbations cause performance degradation in pre-trained encoder language models. More worryingly, state-of-the-art models such as Flan-T5 and Large Language Models (LLMs) inherit these errors, with the largest observed drop reaching 28.28%. Further experiments demonstrate that our findings generalise to natural perturbations found in other, more challenging MRC benchmarks such as DROP and HotpotQA. To mitigate these errors, we show that robustness to natural perturbations can be improved through adversarial training for encoder-only models or through in-context demonstrations of perturbed instances for LLMs, although a more generalisable and effective defence strategy remains to be developed.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: reading comprehension, generalization
Contribution Types: Data resources, Data analysis
Languages Studied: English
Submission Number: 6362
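Illustrative sketch (hypothetical, not the authors' released code): the abstract describes replacing a benchmark paragraph with a counterpart from Wikipedia edit history. The minimal Python sketch below shows one way such a naturally perturbed instance could be built, assuming a SQuAD-style instance with a context paragraph and a gold answer span, and assuming a revised version of the paragraph has already been retrieved from the edit history; the class, function name, and example texts are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's framework code): swap a
# SQuAD-style context paragraph for a revised version taken from Wikipedia
# edit history, keeping the instance only if the gold answer span still
# appears verbatim in the revised paragraph.

from dataclasses import dataclass
from typing import Optional


@dataclass
class MRCInstance:
    question: str
    context: str       # paragraph the question is asked about
    answer_text: str   # gold answer span (SQuAD-style)


def perturb_with_revision(instance: MRCInstance,
                          revised_context: str) -> Optional[MRCInstance]:
    """Return a copy of `instance` whose context is the revised paragraph,
    or None if the edit left the paragraph unchanged or removed the gold
    answer span (in which case the instance cannot be evaluated as-is)."""
    if revised_context.strip() == instance.context.strip():
        return None  # the edit did not actually change this paragraph
    if instance.answer_text not in revised_context:
        return None  # the edit removed or rewrote the answer span
    return MRCInstance(
        question=instance.question,
        context=revised_context,
        answer_text=instance.answer_text,
    )


if __name__ == "__main__":
    original = MRCInstance(
        question="Where is the Eiffel Tower located?",
        context="The Eiffel Tower is a wrought-iron lattice tower in Paris.",
        answer_text="Paris",
    )
    # A hypothetical revision of the same Wikipedia paragraph.
    revision = ("The Eiffel Tower, completed in 1889, is a wrought-iron "
                "lattice tower on the Champ de Mars in Paris.")
    print(perturb_with_revision(original, revision))
```

Under these assumptions, instances whose gold answer no longer appears verbatim are discarded rather than rescored, so only genuinely comparable question-paragraph pairs enter the perturbed evaluation set.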