Work-in-Progress Paper: Creating and Validating a Conceptual Assessment for Sedimentology Courses

Published: 01 Jan 2024, Last Modified: 18 Jul 2025 · FIE 2024 · CC BY-SA 4.0
Abstract: Contribution: In this work-in-progress paper we describe the process of creating and validating a conceptual assessment in the field of sedimentology for undergraduate geoscience courses. The mechanism can aid future geoscience educators and researchers in developing academic assessments aligned with the learning objectives of these courses.

Background: Prior literature supports the benefits of using active learning tools in STEM (Science, Technology, Engineering, and Mathematics) courses. This paper is part of a larger project to develop and incorporate research-based active learning software in sedimentology and other geoscience courses to improve grade point average (GPA) and reduce time to graduation for Hispanic students at Texas A&M University. To evaluate the novel tool, we designed and validated the conceptual assessment instrument presented in this work.

Research Question: What is the process to develop and validate a conceptual assessment for sedimentology?

Methodology: This paper uses quantitative analysis and follows the assessment triangle approach, focusing on cognition, observation, and interpretation to design and evaluate the conceptual assessment. For the cognition element of the triangle, we explain how the assessment instrument was created from student learning objectives. The observation element covers data collection and instrument revision. The interpretation element reports the results of the validation process using item response theory and reliability measures. We collected conceptual assessment data from 17 participants enrolled in two courses where sedimentology topics are taught. Participants were geology majors in one course and engineering majors in the other.

Findings: The team developed a conceptual assessment comprising eight multiple-choice questions (MCQs) and four open-ended questions. The design process yielded the conceptualization of the questions and their validation. The validity of the rubrics was established using inter-rater reliability measures, which showed good agreement between raters. Additionally, the validation results indicated that the conceptual assessment is suited to students of average ability.
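The abstract does not report which reliability statistic was used, but inter-rater agreement on open-ended rubric scores is commonly quantified with Cohen's kappa. A minimal sketch, using hypothetical rubric scores from two raters (the scores and function name are illustrative, not from the paper), might look like:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 0-3 rubric scores from two raters on ten open-ended responses.
scores_a = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2]
scores_b = [3, 2, 1, 1, 0, 3, 2, 1, 2, 2]
print(round(cohens_kappa(scores_a, scores_b), 3))  # → 0.714
```

Values above roughly 0.6 are conventionally read as "good" agreement, which matches the reported finding.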