Abstract: In this paper, we argue that current AI (alignment) research operates on a spectrum between two underlying conceptions of intelligence: Intelligence Realism, which holds that intelligence is a single, universal capacity measurable across all systems, and Intelligence Pluralism, which views intelligence as a set of diverse, context-dependent capacities that cannot be reduced to a single universal measure. Through an analysis of current debates in AI research, we demonstrate that these conceptions remain largely implicit yet fundamentally shape how empirical evidence is interpreted across a wide range of areas. More significantly, the two views generate divergent research strands across three areas. Methodologically, they produce different approaches to model selection, benchmark design, and experimental validation. Interpretively, they lead to contradictory readings of scaling laws and system limitations. Regarding AI risk, they generate categorically different assessments of risk and alignment approaches: realists view superintelligence as the principal risk and search for unified alignment solutions, whereas pluralists see distinct threats across many domains and search for context-specific solutions. We argue that making these underlying assumptions explicit can contribute to a clearer understanding of the disagreements in this research space and, potentially, to a more context-sensitive approach to alignment research.