Research Area: Evaluation
Keywords: self-challenging, human-in-the-loop, evaluation
TL;DR: We introduce a new evaluation method that combines LLMs and human insight to identify and analyze persistent 'bug' patterns in LLMs.
Abstract: The impressive performance of Large Language Models (LLMs) has consistently surpassed numerous human-designed benchmarks, presenting new challenges for assessing their shortcomings.
Designing tasks that expose the limitations of LLMs is therefore becoming increasingly important.
In this paper, we investigate the question of whether an LLM can discover its own limitations from the errors it makes.
To this end, we propose a Self-Challenge evaluation framework with a human in the loop.
Starting from seed instances that GPT-4 fails to answer, we prompt GPT-4 to summarize error patterns that can be used to generate new instances; we then iteratively incorporate human feedback on these instances to refine the patterns so that they yield increasingly challenging data.
We end up with 8 diverse patterns, such as text manipulation and questions with assumptions.
We then build a benchmark, SC-G4, consisting of 1,835 instances generated by GPT-4 using these patterns, with human-annotated gold responses.
SC-G4 serves as a challenging benchmark that allows a detailed assessment of LLMs' abilities.
Our results show that only 44.96\% of instances in SC-G4 can be answered correctly by GPT-4.
Interestingly, our pilot study indicates that these error patterns also challenge other LLMs, such as Claude-3 and Llama-3, and cannot be fully resolved through fine-tuning. Our work takes a first step toward demonstrating that LLMs can autonomously identify their inherent flaws, providing insights for future dynamic and automatic evaluation.
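The iterative Self-Challenge loop described in the abstract can be summarized as the minimal Python sketch below. Every helper function here is a hypothetical stub introduced only for illustration (the actual prompts, GPT-4 calls, and annotation interface are not shown); only the control flow reflects the framework: summarize error patterns from seed failures, generate new instances, collect human feedback, and refine the patterns over several rounds.

# Minimal sketch of the Self-Challenge loop; all helpers are hypothetical stubs.

def summarize_error_patterns(seed_failures):
    # Placeholder: prompt the evaluated LLM (e.g., GPT-4) to abstract error
    # patterns from seed instances it failed to answer.
    return [{"pattern": "text manipulation", "examples": seed_failures[:2]}]

def generate_instances(patterns, n_per_pattern=5):
    # Placeholder: prompt the LLM to instantiate each pattern as new test questions.
    return [{"pattern": p["pattern"], "question": f"generated item {i}"}
            for p in patterns for i in range(n_per_pattern)]

def collect_human_feedback(instances):
    # Placeholder: human annotators mark which generated instances are valid
    # and genuinely challenging, and add comments for refinement.
    return [{"instance": inst, "keep": True, "comment": ""} for inst in instances]

def refine_patterns(patterns, feedback):
    # Placeholder: prompt the LLM to revise the pattern descriptions
    # based on the human feedback.
    return patterns

def self_challenge(seed_failures, n_rounds=3):
    # Iteratively refine error patterns with a human in the loop and
    # accumulate the retained instances as a benchmark.
    patterns = summarize_error_patterns(seed_failures)
    benchmark = []
    for _ in range(n_rounds):
        instances = generate_instances(patterns)
        feedback = collect_human_feedback(instances)
        benchmark.extend(f["instance"] for f in feedback if f["keep"])
        patterns = refine_patterns(patterns, feedback)
    return patterns, benchmark

if __name__ == "__main__":
    patterns, benchmark = self_challenge(["seed question 1", "seed question 2"])
    print(len(patterns), "patterns,", len(benchmark), "benchmark instances")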
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 947