Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor

Published: 26 Sept 2025, Last Modified: 29 Oct 2025, NeurIPS 2025 Position Paper Track, CC BY 4.0
Keywords: Rigor in AI, Responsible AI, methodological rigor, epistemic rigor, normative rigor, conceptual rigor, interpretative rigor, reporting rigor
TL;DR: Rigor in AI remains largely understood in terms of methodological rigor. This common yet narrow conception of rigor has contributed to many of the concerns raised by the responsible AI community. Here, we argue for a broader view of what rigorous AI research should entail.
Abstract: In AI research and practice, rigor remains largely understood in terms of methodological rigor---such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about the capabilities of AI systems. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception---in addition to a more expansive understanding of 1) methodological rigor---should include aspects related to 2) what background knowledge informs what to work on (epistemic rigor); 3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); 4) how clearly articulated the theoretical constructs in use are (conceptual rigor); 5) what is reported and how (reporting rigor); and 6) how well supported the inferences from existing evidence are (interpretative rigor). In doing so, we also provide useful language and a framework for much-needed dialogue about the AI community's work among researchers, policymakers, journalists, and other stakeholders.
Lay Summary: Impoverished notions of what rigorous AI work should entail may not only lead to one-off undesirable outcomes but can have a deeply formative impact on the scientific integrity and quality of both AI research and practice. In AI research and practice, rigor remains largely understood in terms of methodological rigor, such as whether mathematical, statistical, or computational methods are correctly applied. This narrow understanding of rigor has contributed to many of the concerns raised by the responsible AI community. Indeed, part of what the responsible AI community asks of AI researchers and practitioners is to uphold principles of scientific integrity in their work. This, we argue, also requires broadening our conception of what rigorous AI work entails beyond methodological concerns to include aspects related to epistemic rigor (what background knowledge does the work rely on), normative rigor (how do norms influence the work), conceptual rigor (what are the theoretical constructs in use), reporting rigor (what about the work is being reported, and how), and interpretative rigor (are the inferences from existing evidence sound). Limiting our conception of rigor to methodological concerns can thus obscure how our work and the claims we make are shaped by a variety of choices that both precede and follow any methodological considerations.
Submission Number: 591