LLMs as Educational Analysts: Transforming Multimodal Data Traces Into Actionable Reading Assessment Reports
Abstract: Reading assessments are essential for enhancing students’ comprehension, yet many EdTech applications focus mainly on outcome-based metrics, offering limited insight into students’ reading behaviors and cognition. This study investigates the use of multimodal data, combining eye-tracking traces with learning outcomes, assessment content, and teaching standards, to derive meaningful reading insights. We employ unsupervised learning techniques to identify distinct reading behavior patterns. A large language model (LLM) then synthesizes the derived information into actionable reports for educators, streamlining interpretation. LLM experts and human educators evaluated these reports for clarity, accuracy, relevance, and pedagogical usefulness. Our findings indicate that LLMs can function effectively as educational analysts, turning diverse data into teacher-friendly insights that educators find beneficial. While automated insight generation shows promise, human oversight remains crucial to ensure reliability and fairness. This research advances human-centered AI in education, connecting data-driven analytics with practical classroom applications.
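The abstract describes a two-stage pipeline: unsupervised clustering of reading-behavior data, followed by LLM synthesis of the results into a teacher-facing report. The minimal Python sketch below illustrates that shape only; the gaze feature set, the choice of KMeans with three clusters, the prompt wording, and the `llm_client` placeholder are all illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: cluster per-student eye-tracking features, then prompt an
# LLM to turn the cluster summary into an actionable teacher report.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-student gaze features (e.g., fixation duration,
# regression count); real features would come from an eye tracker.
rng = np.random.default_rng(0)
features = rng.normal(size=(60, 4))  # 60 students x 4 gaze metrics

# Unsupervised step: standardize, then group students into
# behavior clusters (KMeans is one plausible choice).
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Summarize each cluster so the LLM receives compact, structured input.
summary = "\n".join(
    f"Cluster {c}: {np.sum(labels == c)} students, "
    f"mean feature vector {X[labels == c].mean(axis=0).round(2).tolist()}"
    for c in sorted(set(labels))
)

prompt = (
    "You are an educational analyst. Given these reading-behavior "
    "clusters derived from eye-tracking data, write a short, "
    "plain-language report for a teacher with one actionable "
    f"recommendation per cluster:\n{summary}"
)
# report = llm_client.generate(prompt)  # placeholder for any LLM API
print(prompt)
```

In a full system, the prompt would also carry the learning outcomes, assessment content, and teaching standards the abstract mentions, and the generated report would be reviewed by an educator before use.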
External IDs: dblp:conf/aied/DavalosZSSMCBG25