Pseudointelligence: A Unifying Lens on Language Model Evaluation

Published: 23 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Findings
Submission Type: Regular Short Paper
Submission Track: Theme Track: Large Language Models and the Future of NLP
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: Evaluation, Large Language Models, Learning Theory, Computational Complexity
Abstract: With large language models surpassing human performance on an increasing number of benchmarks, we must take a principled approach to the targeted evaluation of model capabilities. Inspired by pseudorandomness, we propose pseudointelligence, which captures the maxim that "(perceived) intelligence lies in the eye of the beholder": claims of intelligence are meaningful only when their evaluator is taken into account. Concretely, we propose a complexity-theoretic framework in which model evaluation is cast as a dynamic interaction between a model and a learned evaluator. We demonstrate that this framework can be used to reason about two case studies in language model evaluation, as well as to analyze existing evaluation methods.
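To make the pseudorandomness analogy concrete, one can state an indistinguishability condition in the style of pseudorandom generators. The notation below (model M, target capability C, evaluator class \(\mathcal{E}\), input distribution \(\mathcal{D}\), tolerance \(\epsilon\)) is an illustrative sketch under assumed notation, not the paper's own formalism:

% Illustrative sketch only: M is the model under evaluation, C a genuinely
% capable reference agent, E an evaluator drawn from class \mathcal{E},
% \mathcal{D} a distribution over inputs, and \epsilon a distinguishing
% tolerance. All of these symbols are assumptions for exposition.
\[
\Big| \Pr_{x \sim \mathcal{D}}\big[\, E(x, M(x)) = 1 \,\big]
\;-\; \Pr_{x \sim \mathcal{D}}\big[\, E(x, C(x)) = 1 \,\big] \Big|
\;\le\; \epsilon
\qquad \text{for all } E \in \mathcal{E}.
\]

On this reading, a model is "pseudointelligent" with respect to \(\mathcal{E}\) if no evaluator in the class can tell its behavior apart from that of a genuinely capable agent; a stronger evaluator class may still distinguish the two, which is precisely why claims of intelligence are meaningful only relative to the evaluator.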
Submission Number: 4954