Great Models Think Alike and this Undermines AI Oversight

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
TL;DR: We propose a measure for model similarity, finding increasingly correlated failures as model capabilities improve, and show the negative effects of similarity on AI oversight paradigms like LLM-as-a-Judge and Weak-to-Strong Generalization.
Abstract: As Language Model (LM) capabilities advance, evaluating and supervising them at scale is getting harder for humans. There is hope that other language models can automate both of these tasks, which we refer to as *AI Oversight*. We study how model similarity affects both aspects of AI oversight by proposing *Chance Adjusted Probabilistic Agreement (CAPA)*--a metric for LM similarity based on overlap in model mistakes. Using CAPA, we first show that *LLM-as-a-judge* scores favor models similar to the judge, generalizing recent self-preference results. Then, we study training on LM annotations, and find that complementary knowledge between the weak supervisor and the strong student model plays a crucial role in gains from *weak-to-strong generalization*. As model capabilities increase, it becomes harder to find their mistakes, and we might defer more to AI oversight. However, we observe a concerning trend--model mistakes are becoming more similar with increasing capabilities, pointing to risks from correlated failures. Our work underscores the importance of reporting and correcting for model similarity, especially in the emerging paradigm of AI oversight.
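To make the idea of a chance-adjusted similarity metric concrete, below is a minimal, illustrative Python sketch of a Cohen's-kappa-style agreement score computed from two models' hard answers on a multiple-choice benchmark. This is not the paper's exact CAPA formula (CAPA additionally uses the models' output probabilities and focuses on overlap in mistakes); it only shows what "agreement adjusted for chance" means in its simplest form.

```python
# Illustrative sketch only (not the paper's exact CAPA formula): a chance-adjusted
# agreement score in the spirit of Cohen's kappa, computed over two models'
# hard answers on the same multiple-choice questions.
from collections import Counter

def chance_adjusted_agreement(answers_a, answers_b):
    """Cohen's-kappa-style agreement between two models' answer strings."""
    assert len(answers_a) == len(answers_b) and len(answers_a) > 0
    n = len(answers_a)

    # Observed agreement: fraction of questions where both models give the same answer.
    observed = sum(a == b for a, b in zip(answers_a, answers_b)) / n

    # Expected agreement under independence, from each model's marginal answer distribution.
    freq_a = Counter(answers_a)
    freq_b = Counter(answers_b)
    expected = sum((freq_a[k] / n) * (freq_b[k] / n) for k in freq_a.keys() & freq_b.keys())

    # Chance-adjusted score: 1 = perfect agreement, 0 = chance level, < 0 = below chance.
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

# Toy usage: two models answering the same five A/B/C/D questions (hypothetical data).
model_1 = ["A", "B", "C", "D", "A"]
model_2 = ["A", "B", "C", "A", "B"]
print(f"chance-adjusted agreement: {chance_adjusted_agreement(model_1, model_2):.3f}")
```

The released lm-sim package (linked below) implements the actual CAPA metric along with other similarity measures; the sketch above is only meant to convey the underlying idea.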
Lay Summary: Currently, there are hundreds of different language models (LMs) available, as each tech company creates and releases its own chatbots. How different are these models really? Do all of them fail (or succeed) in the same ways? In this work, we measure model similarity based on how often models make the same mistakes. As LM capabilities advance, we find that model mistakes are becoming more similar, that is, Great Models Think Alike. At the same time, finding these mistakes and fixing them now needs more effort and expertise, making it expensive and time-consuming for humans. Recent research is trying to automate this process using another LM as a judge, or as a teacher, which we refer to as "AI Oversight". But could models thinking alike adversely affect AI oversight? Indeed, we find LM judgements show a bias, favouring more similar models. Moreover, when one LM is used as a 'teacher' for another 'student' LM, we find lower performance improvements when models are more similar, perhaps because there is less complementary knowledge for the student to learn from. Overall, we show the importance of measuring model similarity, as it reveals insights beyond accuracy comparisons. To promote reporting of model similarity, we release a Python package, lm-sim, with many model similarity metrics, including ours.
Link To Code: https://github.com/model-similarity/lm-similarity
Primary Area: Deep Learning->Large Language Models
Keywords: Model Similarity, AI Oversight, LLM as a Judge, Weak to Strong Generalization, Model Differences
Submission Number: 7160