Abstract: How best to evaluate the capabilities of Large Language Models (LLMs) is a focal point of current LLM research. Previous work has noted that, because iterative updates of LLMs are extremely costly, they often cannot answer the latest dynamic questions well. To promote the improvement of Chinese LLMs' ability to answer dynamic questions, in this paper we introduce $\textbf{CDQA}$, a $\textbf{C}$hinese $\textbf{D}$ynamic $\textbf{QA}$ benchmark containing question-answer pairs related to the latest news on the Chinese Internet. We obtain high-quality data through a pipeline that combines humans and models, and we carefully classify the samples according to the frequency with which their answers change, enabling a more fine-grained observation of LLMs' capabilities. We also evaluate and analyze mainstream and advanced Chinese LLMs on $\textit{CDQA}$. Extensive experiments and valuable insights suggest that $\textit{CDQA}$ is challenging and merits further study. We believe the benchmark we provide will become a key data resource for improving LLMs' Chinese question-answering ability in the future.
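To illustrate the frequency-based classification the abstract describes, here is a minimal Python sketch. The bucket names (`fast-changing`, `slow-changing`, `never-changing`) and the interval thresholds are assumptions for illustration only; the actual CDQA categorization criteria are not specified in the abstract.

```python
from datetime import timedelta

def classify_by_change_frequency(answer_change_interval: timedelta) -> str:
    """Assign a question to a bucket based on the average interval
    between changes of its gold answer.

    Bucket names and thresholds are hypothetical, not CDQA's actual scheme.
    """
    if answer_change_interval <= timedelta(days=365):
        return "fast-changing"    # answer changes within roughly a year
    elif answer_change_interval <= timedelta(days=365 * 5):
        return "slow-changing"    # answer changes over several years
    else:
        return "never-changing"   # answer is effectively static

# Example: a question whose answer updates about every three months
print(classify_by_change_frequency(timedelta(days=90)))  # -> "fast-changing"
```

Such a bucketing lets an evaluation report scores per frequency class, which is one plausible way to obtain the fine-grained observation of LLM capabilities the paper mentions.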
Paper Type: long
Research Area: Resources and Evaluation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources, Data analysis
Languages Studied: Chinese