Abstract: With the rapid development of large language models (LLMs), how to evaluate them efficiently has become an important research question.
Existing evaluation methods often suffer from high costs, limited test formats, the need for human references, and systematic evaluation biases.
To address these issues, our study introduces Auto-PRE, an automatic LLM evaluation framework based on peer review.
In contrast to previous studies that rely on human annotations, Auto-PRE selects evaluator LLMs automatically based on their inherent traits, including consistency, self-confidence, and pertinence. We conduct extensive experiments on both summary generation and non-factoid question answering tasks. The results indicate that Auto-PRE achieves state-of-the-art performance at a lower cost. Moreover, our study highlights the impact of prompt strategies and evaluation formats on evaluation performance, offering guidance for future method optimization.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Evaluation, Large Language Model
Contribution Types: Model analysis & interpretability, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English
Submission Number: 476