Keywords: Learning opportunities; Resource allocation; Explainable machine learning; STEM education; AI literacy
Abstract: Differences in educational opportunities and resource allocation across schools often result in disparate student outcomes and unequal access to computing and artificial intelligence (AI) learning opportunities. Existing research shows that school districts serving economically disadvantaged communities often have fewer resources per student, further exacerbating achievement gaps [1, 2]. Addressing this issue requires a systematic analysis of the structural factors that shape student learning outcomes and access to emerging technologies.
We design and develop an explainable machine learning framework to investigate the mechanisms driving educational inequality. The methods include logistic regression, decision trees, and random forests, applied to traditional education datasets such as the Common Core of Data (CCD) [3] and the Civil Rights Data Collection (CRDC) [4], as well as emerging computing and AI datasets such as EdNet [5], Scratch [6], and ActiveAI [7]. Unlike black-box models, these interpretable methods transparently reveal how student-teacher ratios, funding levels, technology access, computing opportunities, and demographic factors such as race, gender, and socioeconomic background contribute to student learning outcomes.
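To make the interpretability claim concrete, the sketch below illustrates one plausible version of this modeling step in scikit-learn: fitting a logistic regression and a random forest, then reading off standardized coefficients and feature importances. The feature names and the synthetic data are illustrative stand-ins for district-level variables of the kind found in CCD or CRDC, not the actual study features or results.

```python
# Minimal sketch of an interpretable-modeling step (assumed setup, not the
# actual study pipeline). Feature names and data are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "student_teacher_ratio": rng.normal(16, 3, n),       # hypothetical features
    "funding_per_student": rng.normal(12000, 2500, n),
    "tech_access_index": rng.uniform(0, 1, n),
    "cs_course_offered": rng.integers(0, 2, n),
})
# Synthetic outcome: proficiency rises with funding and technology access,
# falls with crowded classrooms (an assumed relationship, for illustration).
logit = (-0.15 * X["student_teacher_ratio"]
         + 0.0004 * X["funding_per_student"]
         + 2.0 * X["tech_access_index"]
         + 0.8 * X["cs_course_offered"] - 3.0)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Logistic regression on standardized inputs: coefficients give directly
# readable effect directions and relative magnitudes per feature.
scaler = StandardScaler().fit(X_train)
lr = LogisticRegression().fit(scaler.transform(X_train), y_train)
for name, coef in zip(X.columns, lr.coef_[0]):
    print(f"{name:>22}: {coef:+.3f}")

# Random forest: impurity-based importances offer a complementary,
# nonlinear view of the same structural drivers.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:>22}: {imp:.3f}")
```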
We intend not only to identify these structural drivers but also to translate the findings into classroom modules that support K-12 educators in integrating data science and AI literacy training into their curricula. These modules will include hands-on data exploration using simplified versions of public datasets (e.g., CCD, CRDC, or EdNet subsets), guided coding activities in Python or Scratch that illustrate key AI concepts (see the sketch below), classroom simulations of resource-allocation scenarios that make equity issues tangible, and structured group discussions connecting algorithmic fairness to students' own experiences. By integrating computational methods with real-world educational challenges, these modules are expected to stimulate student interest and engagement in AI learning, foster critical thinking and a sense of belonging, and promote understanding of broader societal issues.
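As one possible shape for the hands-on data-exploration activity, the sketch below has students load a pre-cleaned table and compare resource measures across groups. The file name and column names are hypothetical placeholders for a simplified classroom subset of CCD/CRDC, not a published artifact.

```python
# Minimal sketch of a guided classroom activity (assumed handout file and
# columns; a hypothetical simplified CCD/CRDC subset prepared for students).
import pandas as pd

df = pd.read_csv("district_resources_simplified.csv")  # hypothetical handout file

# Step 1: how does per-student funding vary with district poverty level?
print(df.groupby("poverty_quartile")["funding_per_student"].mean().round(0))

# Step 2: do districts offering AP Computer Science also have lower
# student-teacher ratios?
print(df.groupby("offers_ap_cs")["student_teacher_ratio"].describe())

# Step 3 (discussion prompt): students reallocate a fixed budget across
# districts and observe how the group averages above change.
```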
The explainable model provides evidence-based insights into how structural disparities in educational resources and opportunities widen achievement gaps among diverse student groups. Furthermore, through an evidence-driven analytical framework, these modules will give educators and policymakers new tools and approaches for promoting more equitable access to computing and AI education. This direction resonates with calls for responsible and transparent educational AI [8] and with America's AI Action Plan [9].
Submission Number: 301