Abstract: The AI research community plays a vital role in shaping the scientific, engineering, and societal goals of AI research. In this position paper, we argue that focusing on the highly contested topic of "artificial general intelligence" ("AGI") undermines our ability to choose effective goals. We identify six key traps (obstacles to productive goal setting) that are aggravated by AGI discourse: Illusion of Consensus, Supercharging Bad Science, Presuming Value-Neutrality, Goal Lottery, Generality Debt, and Normalized Exclusion. To avoid these traps, we argue that the AI research community needs to (1) prioritize specificity in scientific, engineering, and societal goals, (2) center pluralism about multiple worthwhile approaches to multiple valuable goals, and (3) foster innovation through greater inclusion of disciplines and communities. Therefore, the AI research community needs to stop treating "AGI" as the north-star goal of AI research.
Lay Summary: Debates about artificial general intelligence (AGI) miss the forest for the trees. What AGI means is highly contested. That makes debates slippery. What are we debating? What’s at stake? Are we having the right debate?
We argue the debate researchers should keep having is about the scientific, engineering, and societal goals of the field.
We examine six key traps that hinder setting worthwhile goals: Illusion of Consensus, Supercharging Bad Science, Presuming Value-Neutrality, Goal Lottery, Generality Debt, and Normalized Exclusion. We then elaborate how AGI narratives aggravate each.
We offer three recommendations for addressing these traps. Specificity: Prioritize specific language for scientific, engineering, and societal goals. Pluralism: Articulate many worthwhile scientific, engineering, and societal goals and many possible paths to fulfilling them. Inclusion: Broaden the range of communities and disciplines that shape the goals of AI research, which benefits innovation.
Drawing on high-quality work by AGI proponents, we examine an alternative view we find especially strong: Can’t the obstacles be overcome through better definitions of AGI?
We provide three reasons to favor our position and reject this alternative. First, if specificity and pluralism are key solutions, we should avoid unifying north-star goals altogether. Second, the research community has a responsibility to help distinguish hype from reality. No matter how well or poorly defined, AGI has acquired a cultural significance that undermines this responsibility. Third, if the AI community still wants to build consensus around a high-level goal, that goal should be benefiting and supporting people. Debating AGI keeps communities focused on trees instead of this worthwhile forest.
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: AGI, methodology, evaluation, generality, values
Submission Number: 393